Job Summary


Salary
S$8,500 - S$17,000 / Month (Estimated)

Job Type
Permanent

Seniority
Lead

Years of Experience
At least 10 years

Tech Stacks
ETL
Analytics
Spark
NoSQL
SQL
Scala
Hadoop
Python

Job Description


This role is accountable for defining the big data architecture and for designing, building and running data pipelines under Singtel Data & Analytics within Group IT:
  • Define and govern big data architecture
  • Provide DevOps architecture implementation and operational support
  • Manage the automation, design, engineering and development work related to data pipelines
  • Drive optimization, testing and tooling to improve data quality
  • Review and approve solution design for data pipelines
  • Ensure that proposed solutions align with and conform to the big data architecture guidelines and roadmap
  • Evaluate and renew implemented data pipeline solutions to ensure they remain relevant and effective in supporting business needs and growth

Key Responsibilities

  • Establish big data (data lake) architecture along with standards, guidelines and best practices
  • Develop and maintain big data architecture blueprint for Group IT
  • Build and maintain continuous integration and continuous deployment of data pipelines
  • Understand business requirements and solution designs in order to develop and implement solutions that adhere to big data architectural guidelines and address business needs
  • Fine-tune new and existing data pipelines
  • Drive optimization, testing and tooling to improve data quality
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements, such as automating manual processes and optimizing data delivery
  • Build robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users
  • Provide guidance and direction to project delivery and operations teams with regard to big data architecture and solution design
  • Guide the team in DevOps development

The Ideal Candidate Should Possess

  • Bachelor’s degree in IT, Computer Science, Software Engineering, Business Analytics or equivalent
  • Minimum 10 years of experience in data warehousing / distributed systems such as Hadoop
  • Minimum 5 years of experience in solution architecture and design of distributed systems such as Hadoop
  • Minimum 5 years of hands-on experience in DevOps development for big data platforms
  • Experience with relational SQL and NoSQL databases
  • Expert in building and optimizing ‘big data’ data pipelines, architectures and data sets
  • Extensive experience in Scala or Python
  • Experience with ETL and/or data wrangling tools for big data environments
  • Ability to troubleshoot and optimize complex queries on the Spark platform
  • Knowledgeable about structured and unstructured data design / modelling, data access and data storage techniques
  • Experience in cost estimation and working with external vendors
  • Experience with DevOps tools and environments

We believe in the strength of a vibrant, diverse and inclusive workforce where the backgrounds, perspectives and life experiences of our people help us innovate and create strong connections with our customers.

We strive to ensure all our people practices are non-discriminatory and provide a fair, performance-based work culture that is diverse, inclusive and collaborative.