Get to know our Team:
Grab Data Tech is enabling everyday opportunities through big data innovation in Southeast Asia. The Midas metrics platform team was recently formed to build the Metrics Platform for all of Grab.
Midas will be building out the tools and services to support Grabbers across every part of the metrics lifecycle journey: metrics creation, definition, engineering, certification, storage, serving, and more.
Consistent, high-quality metrics will be key to the continued growth and success of Grab in the coming years, and by providing a high-quality user experience for everyone involved in the metrics lifecycle, Midas will be a key part of making this a reality.
Get to know the Role:
As a member of the newest team in Grab’s Data Technology family, you have an unmissable opportunity to shape and build a key component of the data stack at one of the most data-driven companies in Southeast Asia.
We operate in a challenging, fast-paced, and ever-changing environment that will push you to grow and learn. You will be involved in various areas of Grab’s Metrics Ecosystem, including metrics engineering, reporting & analytics, data infrastructure, and various other data services that are integral parts of Grab’s overall technical stack.
The day-to-day activities:
- Build, deploy, manage, and take end-to-end ownership of the metrics platform infrastructure using Terraform
- Build, scale, and monitor data services, and perform root cause analysis investigations after incidents
- Work with the engineering team to explore and create new design/architectures geared towards scale and performance, including RESTful APIs
- Develop an in-depth understanding of the entire metrics lifecycle, including user journeys and the interplay of the different tools and services involved
- Be a champion of consistent, high-quality metrics at Grab, and help drive the technical solutions and the organisations involved towards these goals.
- Work towards the democratization of metrics ownership, enabling Grabbers from different tech families to define, create, publish, and own relevant metrics for their respective products.
- Develop automation for metrics lifecycle workflows such as ingestion, aggregation, ETL processing, certification, and publication.
- Maintain and optimize the performance of our metrics platform infrastructure to ensure accurate, reliable, and timely delivery of key insights for decision making.
- Deploy, scale, and operate modern, high-performance, real-time OLAP data stores such as Apache Pinot, backed by a solid understanding of distributed computing.
- Build scalable and reliable data pipelines between our metrics store and streaming (Kafka, Flink) or batch (Hive, Delta, Hudi, Iceberg, etc.) data sources.
- Process in-stream data with stream processing frameworks such as Apache Flink to generate real-time metrics and extract real-time insights that power Grab’s business.
- Design an architecture that bridges the real-time and offline data domains to provide a consistent view of metrics across all time spans, from seconds to yearly windows.
- Work with modern, large-scale data systems such as Apache Pinot, Flink, Spark, Trino, Kafka, and more.
The must haves:
- A deep passion for data and for building high-quality, high-scale data platforms
- Experience designing and/or building high-performance, scalable data infrastructure.
- Experience working on data systems problems at a large scale.
- A user-centric mindset and genuine care about building solutions that enable your peers and stakeholders to achieve greater heights.
- Write unit, functional and end-to-end tests consistently and thoroughly.
- Excitement about working with new data technologies and discovering new and interesting solutions to the company’s data needs
- Excellent communication skills for coordinating with product development engineers on the development of data pipelines and of any new product features that can be built on top of data analysis results
- A degree or higher in Computer Science, Electronics or Electrical Engineering, Software Engineering, Information Technology or other related technical disciplines.
- Good experience working on streaming data processing systems such as Kafka, Flink, Spark Streaming and others.
- Experience in handling large data sets (multiple PBs) and working with structured, unstructured and geographical datasets
- Good experience in handling big data within a distributed system and knowledge of data processing in distributed OLAP environments.
- Knowledgeable on cloud systems like AWS, Azure, or Google Cloud Platform
- Familiar with tools within the modern data ecosystem, such as Trino, Spark, Flink, Kafka, and others.
- Good experience with programming languages like Python, Go, Scala, Java, or scripting languages like Bash.
- Ability to design and implement RESTful APIs, and to build and deploy performant modern web applications in React, Node.js, and TypeScript.
- Deep understanding of databases and best engineering practices, including handling and logging errors, monitoring systems, building human-fault-tolerant pipelines, understanding how to scale up, addressing continuous integration, database administration, data cleaning, and ensuring deterministic pipelines
We are committed to building diverse teams and creating an inclusive workplace that enables all Grabbers to perform at their best, regardless of nationality, ethnicity, religion, age, gender identity or sexual orientation and other attributes that make each Grabber unique.
Join us today to drive Southeast Asia forward, together.