About Exponentia.ai
Exponentia.ai is a fast-growing, AI-first technology services company, partnering with enterprises to shape and accelerate their journey to AI maturity. With a presence across the US, UK, UAE, India, and Singapore, we bring together deep domain knowledge, cloud-scale engineering, and cutting-edge artificial intelligence to help our clients transform into agile, insight-driven organizations.
We are proud to partner with global technology leaders such as Databricks, Microsoft, AWS, and Qlik, and have been consistently recognized for innovation, delivery excellence, and trusted advisory services.
Awards & Recognitions
- Innovation Partner of the Year – Databricks, 2024
- Digital Impact Award, UK – 2024 (TMT Sector)
- Rising Star – APJ Databricks Partner Awards 2023
- Qlik’s Most Enabled Partner – APAC
With a team of 450+ AI engineers, data scientists, and consultants, we are on a mission to redefine how work is done by combining human intelligence with AI agents to deliver exponential outcomes.
Learn more: www.exponentia.ai
About The Role
We are looking for an experienced Data + AI Engineer with strong expertise in Azure and Databricks to design, build, and optimize scalable data pipelines and architectures. You will bring hands-on experience with modern data engineering tools and cloud-based platforms, enabling efficient data processing, integration, and analytics.
You will work closely with solution architects, business analysts, AI specialists, and enterprise stakeholders to translate business problems into robust data architectures, pipelines, and analytics-ready environments.
The ideal candidate is hands-on, architecturally strong, cloud-native, and capable of balancing engineering excellence with business outcomes.
Key Responsibilities
- Design, develop, and maintain scalable, reliable, high-performance data pipelines and ETL/ELT processes for batch, real-time, and streaming use cases using Azure Data Factory, Databricks, and other Azure services (see the sketch after this list).
- Build and optimize data models, data lakes, and data warehouses to support analytics and reporting needs.
- Develop end-to-end data ingestion, transformation, and orchestration frameworks.
- Implement data quality, observability, lineage, governance, and security practices.
- Optimize data models for analytics, AI, and BI consumption.
- Partner with AI/ML teams to prepare feature stores, training datasets, and inference-ready data layers.
- Enable analytics teams with curated, business-ready data models.
- Support GenAI, AI agents, and advanced analytics initiatives through data architecture.
- Ensure data pipelines are aligned with AI model lifecycle needs.
- Work closely with data scientists, analysts, and business stakeholders to deliver reliable, high-quality data solutions.
- Optimize performance of large-scale data processing and storage systems.
- Automate workflows and data operations to improve reliability and reduce manual effort.
- Monitor and troubleshoot data workflows to ensure data integrity and availability.
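To give a flavor of the day-to-day work, below is a minimal, illustrative sketch of the kind of batch ETL pipeline described above, written in PySpark for Databricks. All paths, schema fields, and table names (e.g. curated.orders) are hypothetical placeholders, not references to an actual Exponentia.ai codebase.

```python
# Illustrative sketch only: a minimal batch ETL job with a basic data-quality
# gate, of the kind described in the responsibilities above. Paths, columns,
# and table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession named `spark` is provided; getOrCreate()
# simply reuses it.
spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

# Ingest: read raw landing-zone data (hypothetical ADLS Gen2 path).
raw = spark.read.json("abfss://landing@example.dfs.core.windows.net/orders/")

# Transform: normalize types and derive a partition column.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Data-quality gate: abort the load if required fields are missing.
bad = orders.filter(F.col("order_id").isNull() | F.col("amount").isNull()).count()
if bad > 0:
    raise ValueError(f"{bad} rows failed required-field checks; aborting load")

# Load: append into a partitioned Delta table for analytics consumption.
(orders.write.format("delta")
       .mode("append")
       .partitionBy("order_date")
       .saveAsTable("curated.orders"))
```

In practice, a job like this would typically be orchestrated by Azure Data Factory or Databricks Workflows, with the quality gate expanded into fuller observability and lineage tooling.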