Job Summary


Job Type
Permanent

Seniority
Mid

Years of Experience
At least 4 years

Tech Stacks
Analytics
Airflow
Pandas
NumPy
Spark
NoSQL
Kafka
SQL
Hadoop
Python

Job Description


What would you be doing?

The Data Engineer will report to our Database Administrator and be part of our Operations Team. As the Data Engineer, you will support our data warehouse and business intelligence team on data initiatives and ensure that the data delivery architecture is maintained at a consistent and optimal level across all ongoing projects. As part of a lean and agile team, you will be given the opportunity to optimize and re-design our company’s data architecture to support our next generation of products and data initiatives.

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using big data technologies in the cloud.
  • Create and maintain optimal data pipeline architecture.
  • Create data tools for the BI analytics team and other stakeholders, and assist them in building and optimizing our product into an innovative industry leader.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into operational efficiency and other key business performance metrics.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
  • Deploy machine learning techniques to create and sustain structures that allow for the analysis of data.


We would love to hear from you if you have:

  • At least a Bachelor’s degree in Data Science, Information Technology, or a relevant field.
  • Minimum of 4 years of experience as a Data Engineer.
  • Good knowledge of SQL and relational databases, including query authoring, as well as familiarity with a variety of databases.
  • Good to have: exposure to or experience in building and optimizing ‘big data’ data pipelines, architectures, and data sets.
  • Good working experience with the Python scripting language and data management libraries (Pandas, NumPy).
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Knowledge of data tools: Airflow, Hadoop, Spark, Kafka, etc.
  • Knowledge of NoSQL databases.
  • Must be based in Singapore.

What would you get?

  • Hybrid work model
  • Learning and Development
  • Discretionary Yearly Bonus & Salary Review
  • Healthcare Coverage based on location
  • 20 days Paid Annual Leave (excluding Bank holidays)

If you would love to experience working in a start-up growing at an accelerated pace, and you think you tick most of the requirements, come join us!
