In our 'always on' world, we believe it's essential to have a genuine connection with the work you do.
We are looking for a Staff Software Engineer to join our growing team in Singapore. You will work with a dynamic and focused team to develop state-of-the-art big data applications in Analytics and Artificial Intelligence (AI). As our Staff Software Engineer, you will implement our core software components and contribute to scalable cloud software architecture design. This is an exciting opportunity to join our talented team and be involved in the next technology trend: Analytics & AI in Networking.
Key Responsibilities
· Distributed Data Processing: Design, build, and optimize complex streaming and batch pipelines using Scala/Java/Python and Apache Spark (Structured Streaming and RDDs).
· Event-Driven Orchestration: Lead the migration of our cron-based Spark pipelines to modern, event-driven orchestration frameworks (e.g., Argo Workflows/Events, Apache Airflow, or Temporal) running on Kubernetes.
· Streaming Infrastructure: Work extensively with Apache Kafka and Google Protocol Buffers (GPB) to manage high-frequency data ingestion and schema evolution.
· Lakehouse Architecture: Implement and optimize data storage using modern open table formats like Delta Lake and Apache Iceberg (including partitioning strategies and compaction).
· Cloud-Native Deployment: Deploy, scale, and tune workloads on Google Kubernetes Engine (GKE) using Helm and the Spark K8s Operator.
Requirements
· Experience: 6+ years of software engineering experience focusing on distributed systems, big data, or backend engineering.
· Languages: Experience in Scala (or any JVM language with a willingness to learn Scala). Python/Rust is a plus.
· Big Data Frameworks: Experience with Apache Spark (tuning, partitioning, driver/executor memory management, and streaming).
· Messaging: Solid understanding of Apache Kafka (topics, partitions, consumer groups) and serialization formats like Google Protobuf.
· Infrastructure: Hands-on experience deploying and managing applications on Kubernetes (ideally GKE) and using Helm.
· Orchestration: Experience with modern data orchestration tools (Airflow, Argo, Dagster, Prefect, or Temporal).
Nice to Have
· Rust: Experience writing systems-level or high-performance code in Rust (or a strong desire to learn and adopt it for data tooling).
· Modern Table Formats: Production experience with Delta Lake or Apache Iceberg (understanding of transaction logs, metadata, and partition pruning).
· Next-Gen Streaming: Familiarity with emerging streaming engines like Arroyo or Apache Flink.
· Testing at Scale: Experience building distributed load-testing frameworks or chaos engineering tools for data pipelines.