Company Description
Grab is Southeast Asia's leading superapp. We are dedicated to improving the lives of millions of users across the region by providing them with everyday services such as deliveries, mobility, financial services, enterprise services and others. More than that, we provide the opportunity for them to have a better life. And that aspiration starts inside Grab because we believe in a seamless blend of work and home life, making every aspect of life better for all.
Guided by The Grab Way, which spells out our mission, how we believe we can achieve it, and our operating principles (the 4Hs: Heart, Hunger, Honour and Humility), we work to create economic empowerment for the people of Southeast Asia. With our unwavering commitment to our values, we believe that we're more than a service provider; we're agents of positive change.
Job Description
As a Data Engineer, you will be responsible for building, maintaining and optimizing data pipelines and systems to support the organization's data needs. You will collaborate closely with cross-functional teams, including data scientists, analysts, and mobile engineers, to ensure seamless data integration, processing, and delivery. Leveraging your expertise in big data technologies, you will drive the evolution of our data frameworks, ensuring reliability, scalability, and performance.
Key Responsibilities
- Design, develop, and maintain robust, scalable data pipelines and ETL processes to extract, transform, and load modeled data into our data warehouse.
- Collaborate with stakeholders to understand data requirements and translate them into technical solutions, ensuring alignment with business objectives.
- Architect data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data.
- Implement best practices for data quality, governance, and security to maintain the integrity and confidentiality of data assets.
- Optimize performance and scalability of data infrastructure, leveraging cloud services and distributed computing technologies.
- Evaluate and implement new tools, frameworks, and technologies to enhance the capabilities of our data platform.
Qualifications
The Must-Haves
- Bachelor's degree in Computer Science, Engineering, or related field.
- 2+ years of experience in data engineering, with a proven track record of designing and implementing complex data solutions.
- Proficiency in programming languages such as Python or Scala, with experience in data processing frameworks like Spark or Presto.
- Hands-on experience with workflow orchestration platforms like Airflow.
- Excellent problem-solving skills, with the ability to troubleshoot and debug complex data issues.
- Effective communication skills, with the ability to collaborate with cross-functional teams and present technical concepts to non-technical stakeholders.
Additional Information
The Nice-to-Haves
- Understanding of Go and relational databases like PostgreSQL
- Understanding of Python
- Understanding of data formats like Avro, Parquet, Delta, and ORC
- Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud
- Understanding of big data systems such as Hive, Spark, and Presto
- Understanding of PySpark or Spark with Scala
- Knowledge of Kafka and stream processing frameworks such as Flink
- Familiarity with observability tools like Splunk, Kibana, and/or Datadog
- Understanding of AWS concepts and Terraform