About the Role
We are seeking a motivated and detail-oriented Junior Cloud Engineer specializing in Data Integration to join our growing engineering team. In this role, you will contribute to the design, development, and maintenance of scalable data pipelines and cloud-based integration solutions leveraging modern streaming and change data capture technologies. You will also play a key role in supporting database backup, recovery, and operational resilience practices. This is an excellent opportunity for an early-career engineer to build foundational expertise across cloud platforms, data engineering, and database administration.
Key Responsibilities:
- Design, build, and maintain data pipelines utilizing Apache Kafka and change data capture (CDC) tools such as Debezium to support real-time and batch data integration across systems.
- Develop and manage CI/CD pipelines to automate the deployment, testing, and monitoring of data integration workflows.
- Work within AWS and GCP cloud environments to implement and maintain cloud-native data solutions.
- Operate and manage workloads within Linux-based operating system environments.
- Research, evaluate, and recommend emerging technologies and tools related to databases, data pipeline architectures, and backup and recovery solutions to support continuous improvement.
- Collaborate with data engineers and data analysts to understand business requirements and develop fit-for-purpose data pipelines.
- Consult with application developers and engineering teams to promote and enforce best practices in database performance, housekeeping, backup management, and optimization.
- Review and analyze database architectures and designs to ensure scalability, reliability, recoverability, and alignment with organizational standards.
- Adhere to best practices in code quality, version control, and technical documentation.
- Explore and adopt AI-powered tools and automation technologies to assist in data pipeline development, monitoring, and operational workflows, supporting efficiency improvements across the team.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related discipline.
- 0–2 years of professional experience in cloud engineering, data engineering, database administration, or a related field.
- Foundational knowledge of Apache Kafka and familiarity with change data capture concepts and tools (e.g., Debezium, AWS DMS).
- Hands-on experience with, or academic exposure to, AWS and/or GCP cloud services (e.g., Amazon Redshift, AWS Glue, S3, Lambda, BigQuery, Pub/Sub, Kubernetes).
- Working knowledge of Linux operating systems, including command-line proficiency.
- Proficiency in one or more of the following: Python, Java, SQL, or Shell scripting.
- Understanding of CI/CD principles and experience with tools such as GitHub Actions, Jenkins, GitLab CI, or Cloud Build.
- Familiarity with relational and non-relational database concepts and design principles.
- Strong analytical and problem-solving skills with a commitment to producing high-quality, maintainable work.
Benefits:
- Arrangeable Hybrid Working
- Personal MacBook
- Special Housing Loan Rate
- World-Class Development Program
- Group Insurance: Health, Dental & Life Insurance
- Wellbeing: Annual Health Check-up, Mental Health Counselling Service
- Special Discounts, e.g., Fitness, Cafe, Dining, Wellness, Clinic, etc.
- Free Snacks
- Opportunity to be part of a team that drives Thailand's Digital Economy (the maker of renowned mobile applications for all Thais, including Paotang, Krungthai Next, and Tungngern)
Working Location:
The ParQ Office Building, 5th and 9th–10th Floors
Near MRT Queen Sirikit National Convention Centre Station, Exit 2