Job Role Summary:
We are looking for a Data Engineer experienced in Azure Databricks, PySpark, and SQL to build and maintain ETL/ELT data pipelines. The role involves working with large datasets, performing data transformations, and supporting the migration of data from Oracle systems to Azure. The candidate will also work with BI and analytics teams to prepare reliable datasets for reporting and insights. Strong skills in SQL, Python, data validation, and performance optimization are required.
Key Responsibilities
1. Cloud & Data Engineering Execution
2. SQL & Database Development
3. Analytics & BI Enablement
4. Cloud Readiness & Performance Awareness
5. Collaboration & Delivery
Required Skills
Preferred (Not Mandatory)
Interview Rounds – 3
Shift Timings – 2 PM to 11 PM
Location – Bangalore
Work Mode – Hybrid (2 or 3 days work from office)
Notice Period – 0 to 30 days