Lead Data Engineer

Singtel

Job Description

Responsibilities

  • Manage multiple data engineering delivery/project teams comprising internal data engineers and IT service providers to ensure that projects and enhancements are delivered within the agreed scope, budget, and schedule
  • Work with business stakeholders to identify and analyze big data needs
  • Design, develop, and automate large-scale, high-performance distributed data processing systems (batch and/or real-time streaming) that meet both functional and non-functional requirements
  • Design data models for optimal storage across data layers, workloads, and presentation retrieval to meet critical business requirements and platform operational efficiency
  • Deliver high-level and detailed designs to ensure that the solution meets business requirements and aligns with data architecture principles and technology stacks
  • Practice high-quality data engineering and software engineering to build data platform infrastructure and data pipelines at scale, delivering big data analytics and data science initiatives
  • Partner with business domain experts, data scientists, and solution designers to identify relevant data assets, domain data models, and data solutions. Collaborate with product data engineers to coordinate backlog feature development of data pipeline patterns and capabilities
  • Own and lead data engineering projects and data pipeline delivery with reliable, efficient, testable, and maintainable artifacts, including ingesting and processing data from a large number and variety of data sources
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing data products for greater scalability
  • Drive cloud data engineering practices and cloud lakehouse re-platforming to build and scale the modern data platform and infrastructure
  • Build, optimize, and contribute to shared data engineering frameworks and tooling, data products, and standards to improve the productivity and quality of output for data engineers
  • Design and build scalable data APIs to host operational data and data lake assets in a data mesh / data fabric architecture
  • Drive modern data platform operations using DataOps, ensuring data quality and monitoring the data systems; also support the data science MLOps platform
  • Drive and deliver industry-standard DevOps (CI/CD) best practices, automating development and release management
  • Understand data security standards and use data security guidelines and tools to apply and adhere to the required data controls across the data platform, data pipelines, applications, and access endpoints
  • Support and contribute to data engineering product and data pipeline documentation, and to development guidelines and standards for data pipeline, data model, and layer design
Requirements

  • Bachelor’s degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent work experience
  • Minimum of 10 years of experience in data engineering, data lake infrastructure, data warehousing, data analytics tools, or related fields, designing and developing end-to-end scalable data pipelines and data products
  • Experience in building and operating large and robust distributed data lakes (multiple PBs) and deploying high-performance, reliable systems with monitoring and logging practices
  • Experience in designing and building data products and pipelines using some of the most scalable and resilient open-source big data technologies: Spark, Delta Lake, Kafka, Flink, Airflow, Presto, and related distributed data processing frameworks
  • Experience with data modelling for data warehousing
  • Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations
  • Build and deploy high-performance, modern data engineering and automation frameworks using programming languages such as Scala or Python, and automate big data workflows such as ingestion, aggregation, and ETL processing
  • Good understanding of data modeling and high-level design, and of data engineering / software engineering best practices, including error handling and logging, system monitoring, fault-tolerant pipelines, data quality, and ensuring deterministic pipelines with DataOps
  • Experience working with telco data warehouses and/or data lakes
  • Excellent experience using ANSI SQL with relational databases such as Postgres, MySQL, and Oracle, and knowledge of advanced SQL on distributed analytics engines such as Databricks SQL, Snowflake, etc.
  • Proficiency in programming languages such as Scala, Python, Java, Go, or Rust, or scripting languages such as Bash
  • Experience with cloud systems such as AWS, Azure, or Google Cloud Platform
      ◦ Cloud data engineering experience in at least one cloud (Azure, AWS, GCP)
  • Experience with Databricks (cloud data lakehouse)
  • Experience with the Hadoop stack: HDFS, YARN, Hive, HBase, Cloudera, Hortonworks
We are committed to a safe and healthy environment for our employees and customers and will require all prospective employees to be fully vaccinated.
