Job Summary


Salary
$7,278 - $10,278 SGD / Monthly EST

Job Type
Permanent

Seniority
Senior

Years of Experience
At least 10 years

Tech Stacks
ETL
Oracle
QlikView
API
ORC
Analytics
Storm
Spark Streaming
Apache NiFi
Elastic
PySpark
Avro
Shell
UNIX
Apache
Spark
NoSQL
Flink
Kafka
SQL
PostgreSQL
Scala
Jenkins
Linux
Python
Bitbucket

Job Description


Apply
We are looking for a Senior Data Engineer to collaborate with the technical team in providing sound data analysis and technical specifications, and to take charge of ensuring strict implementation of activities and processes. Your Responsibilities:
  • Design, develop, test, deploy, and administer data integration pipelines built on ETL/ELT tools on the Data Platform.
  • Collaborate with the Technical Chapter Lead, architecture, and release teams, providing architectural design recommendations and driving standards for Data Integration subjects.
  • Provide analysis and evaluation of technical solutions recommended by the group to facilitate management decisions.
  • Provide support during SIT, UAT, roll-out, and post-production whenever needed.
  • Ensure the standards of Wealth Management governance are followed at all times.
  • Ensure all aspects of security are respected at all times in the course of work.
  • Engage other technical stakeholders within WM (e.g. Domain Architect, Conformity Cell, Security Team, Application Integration and Production teams) in the course of work.


Your Profile:


  • At least 10 years of total IT experience, including 5+ years as a Data Engineer.
  • Reporting/analytics database platform experience: creating, extracting, transforming, loading, and enhancing objects to ingest structured, semi-structured, and unstructured data sources; implementing, maintaining, and optimizing data warehouses; and/or gathering and implementing user requirements.
  • Extensive hands-on experience designing, building, and executing data pipelines using the ETL/ELT tool Apache NiFi (source, extraction, processor, and sink modules).
  • Designing and building source interface constructs in Apache NiFi to source data from APIs (on-demand or batch), event-based streams in Kafka, direct pulls from RDBMS, or file transfer.
  • Strong hands-on experience with complex transformation-based data pipelines designed and developed using Apache Spark and the Python structured data API/library.
  • Hands-on experience designing, developing, and testing ETL/ELT jobs interacting with multiple storage formats: files (Avro, Parquet, ORC), databases (Postgres, Oracle), object storage (S3), and NoSQL stores (MongoDB, Elasticsearch).
  • Experience and knowledge of access and data security: AD/LDAP/SAML/Kerberos/2FA IdP authentication, plus data security through encryption, masking, filtering, and anonymization.
  • Strong hands-on experience with the Spark 2.x/3.x processing framework (Core, structured API, Streaming, MLlib); languages and packages: Python (scripting and PySpark), Scala using the Spark API, Unix shell, and SQL (basic and advanced).
  • Knowledge of streaming platforms: Apache Kafka, Apache NiFi, Spark Streaming, Flink, Storm.
  • Experience with BI analytics tools: Tableau (Server and Creator), QlikView, Power BI, BO.
  • Good understanding of data sourcing, integration, processing, change data capture, and data warehousing architectural design principles, and of data storage model approaches covering stage, raw, refined, and history layers and schema evolution.
  • Working experience with Agile methodology and DataOps tools such as JIRA, Jenkins, Bitbucket, Serena, and Autosys.
  • Working knowledge of the Linux operating system, covering scripting and debugging.
  • Experience collaborating with software engineers and teams to solve complex application problems.
  • Good communication and collaboration skills.