Job Summary


Salary
$6,957 - $12,521 SGD / month (est.)

Job Type
Permanent

Seniority
Senior

Years of Experience
At least 4 years

Tech Stacks
ETL
Analytics
Spark
NoSQL
SQL
Scala
Hadoop
Python

Job Description


Join us and experience what it’s like to be with an Employer of Choice*. Together, let’s create a brighter digital future for all.

*Awarded at the HR Fest Awards 2020

Key Responsibilities

This role is accountable for expanding and optimizing our data and data pipeline architecture under Singtel Data & Analytics within Group IT:
  • Design, create and maintain optimal data pipelines
  • Drive optimization, testing and tooling to improve data quality
  • Ensure that proposed solutions are aligned and conformed to the big data architecture guidelines and roadmap
  • Evaluate and renew implemented data pipeline solutions to ensure their relevance and effectiveness in supporting business needs and growth
  • Design and implement data pipelines on the Hadoop platform
  • Understand business requirements and solution designs to develop and implement solutions that adhere to big data architectural guidelines and address business needs
  • Fine-tune new and existing data pipelines
  • Schedule and maintain data pipelines
  • Assemble large, complex data sets that meet functional and non-functional business requirements
  • Identify, design, and implement internal process improvements, such as automating manual processes and optimizing data delivery
  • Build robust and scalable data infrastructure (both batch processing and real-time) to support the needs of internal and external users
  • Work with data scientists and the business analytics team to assist with data ingestion and data-related technical issues
  • Work with Group IT domains and outsourced vendors to deliver and implement solutions from requirements through post go-live; ensure vendors deliver solutions on time and on budget that fulfil business requirements and are functional, operational and scalable
The Ideal Candidate Should Possess

  • Bachelor’s degree in IT, Computer Science, Software Engineering, Business Analytics or equivalent
  • Minimum 4 years of experience in data warehousing / distributed systems such as Hadoop
  • Experience with relational SQL and NoSQL databases
  • Experience in building and optimizing ‘big data’ data pipelines, architectures and data sets
  • Strong proficiency in Scala or Python
  • Experience with ETL and / or data wrangling tools for big data environments
  • Ability to troubleshoot and optimize complex queries on the Spark platform
  • Knowledge of structured and unstructured data design / modelling, data access and data storage techniques
  • Experience with DevOps tools and environments
We believe in the strength of a vibrant, diverse and inclusive workforce where backgrounds, perspectives and life experiences of our people help us innovate and create strong connections with our customers. We strive to ensure all our people practices are non-discriminatory and provide a fair, performance-based work culture that is diverse, inclusive and collaborative.