Job Summary


Salary
S$5,000 - S$8,500 / Monthly EST

Job Type
Permanent

Seniority
Junior

Years of Experience
At least 2 years

Tech Stacks
ETL
Oracle
Analytics
HDFS
Azure
Hive
Spark
NoSQL
SQL
Scala
Python
AWS
Java

Job Description


In this role, the Data Engineer will be exposed to many aspects of data collection and data pipelines in order to serve stakeholders' requirements. Stakeholders can range from Business Analysts, Product Analysts and Data Analysts to Data Scientists who need datasets for modelling, visualization and decision making.

The Data Engineer should have a proven track record of delivering data pipeline solutions and architecture. He/She should also understand business requirements and be able to build reliable data infrastructure using big data technologies. Ideally, you are someone who enjoys optimizing data pipelines, automating processes and building from scratch.

Responsibilities:
  • Expand data collection and optimize data pipelines for cross-functional teams
  • Work closely with data analysts and business end-users to implement and support data platforms
  • Tune, troubleshoot and scale the big data technologies in use
  • Analyze, tackle and resolve day-to-day operational incidents related to data provision
  • Build tools to acquire and monitor data and to analyze the root cause of data issues
  • Identify, design and implement process improvements and tools that automate data processing while preserving data integrity
  • Work with data scientists and business analysts to assist with data ingestion and data-related technical issues
  • Design, build and maintain batch and real-time data pipelines in production using big data technologies
  • Design, build and manage data warehouses, including data model design
  • Create data views from the big data platform to feed analysis and visualization engines

Requirements:
  • Bachelor's degree in Computer Science, Computer Engineering, Software Engineering or equivalent
  • Minimum 2 years of relevant working experience in ETL / data integration and data modelling
  • Experience with data engineering and data quality
  • Cloud experience, ideally with Azure and AWS
  • Understanding of big data technologies such as HDFS, Hive and Spark
  • Experience with relational or NoSQL databases (e.g. Oracle) and database technologies (PL/SQL, SQL)
  • Experience with data warehousing and distributed systems
  • Experience with data ingestion, cleaning and processing tools
  • Experience acquiring and processing data using Scala/Python/Java
  • Highly organized, self-motivated and pro-active, with a desire to learn new technologies