Data Engineer

FNZ Group

Job Description

Job Title: Data Engineer — Analytical Warehouse (FNZ)

About FNZ:

FNZ is a global fintech firm transforming the way financial institutions serve their clients. By combining cutting-edge technology, infrastructure, and investment operations, FNZ enables wealth management firms to deliver personalized investment solutions at scale. Operating across multiple regions and supporting over $1.5 trillion in assets under administration, FNZ partners with leading banks, insurers, and asset managers to create seamless and innovative wealth platforms that empower millions of investors worldwide.

Job Summary:

We are seeking a hands-on Data Engineer to build and maintain the Analytical Warehouse on Microsoft Fabric. This role focuses on developing data pipelines that ingest enriched Gold-layer data from the NRT-ODS streaming platform into OneLake, building transformation layers using SQL-based transformation frameworks or Fabric notebooks, and delivering analytical datasets to wealth management clients. You will work at the intersection of the real-time ODS and the analytical lakehouse, enabling historical analytics, business intelligence, and client-facing reporting.

Key Responsibilities:

  • Kafka-to-Fabric Ingestion: Build and maintain the Kafka Connect sink connectors that write Gold topics from the NRT-ODS into Fabric OneLake in Delta/Parquet format. Ensure near real-time ingestion with automatic schema evolution via Avro-to-Delta mapping.
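The ingestion described above can be sketched as a Kafka Connect sink connector configuration built in Python. The connector class, topic names, and schema-registry URL below are illustrative placeholders (there is no claim these match the actual NRT-ODS setup):

```python
import json

# Illustrative Kafka Connect sink configuration for landing Gold topics as
# Delta/Parquet. The connector class, topics, and registry URL are
# hypothetical placeholders, not the actual FNZ configuration.
connector_config = {
    "name": "gold-to-onelake-sink",
    "config": {
        # Hypothetical Delta-format sink connector class
        "connector.class": "example.connect.DeltaSinkConnector",
        "topics": "gold.positions,gold.transactions",
        # Avro converters so upstream schema changes can map onto Delta columns
        "key.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "http://schema-registry:8081",
        # Small flush window keeps ingestion near real time
        "flush.size": "1000",
        "tasks.max": "4",
    },
}

# This JSON body would be POSTed to the Connect REST API to create the connector
payload = json.dumps(connector_config, indent=2)
```

In practice the payload is submitted to the Kafka Connect REST endpoint; the converter settings are what allow Avro schema evolution to propagate into the sink format.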

  • Data Pipeline Development: Develop data transformation pipelines within Microsoft Fabric using Fabric notebooks (PySpark/Spark SQL), Dataflows, and Data Factory pipelines. Implement Bronze/Silver/Gold layering within the Analytical Warehouse.

  • Data Transformations: Build and maintain SQL-based transformation models that convert raw ingested data into analytical datasets. Implement incremental models, snapshot tables, and materializations optimized for analytical query patterns.
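The incremental-model pattern mentioned above can be sketched in plain Python: only source rows newer than the target's high watermark are merged in, keyed on a business key. The table shape and column names (account_id, updated_at) are illustrative assumptions, not the actual warehouse schema:

```python
# Minimal sketch of an incremental materialization: compute the target's
# high watermark once, then upsert only source rows that are newer.
def incremental_merge(target, source, key="account_id", watermark_col="updated_at"):
    """Upsert source rows whose watermark exceeds the target's current max."""
    high_watermark = max((r[watermark_col] for r in target.values()), default="")
    for row in source:
        if row[watermark_col] > high_watermark:
            target[row[key]] = row  # insert or overwrite (upsert semantics)
    return target

# Target table with one existing row; source batch with a change, an insert,
# and a stale row that should be skipped.
target = {"A1": {"account_id": "A1", "balance": 100, "updated_at": "2024-01-01"}}
source = [
    {"account_id": "A1", "balance": 120, "updated_at": "2024-01-02"},  # changed
    {"account_id": "A2", "balance": 50, "updated_at": "2024-01-02"},   # new
    {"account_id": "A1", "balance": 90, "updated_at": "2023-12-31"},   # stale, skipped
]
incremental_merge(target, source)
```

In a SQL-based framework the same idea is expressed as an incremental model with a watermark filter plus a MERGE into the target table; the point of the sketch is the watermark-then-upsert logic, not the API.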

  • OneLake Storage Management: Design and manage OneLake storage structures: partition strategies (by date, entity type, client), file compaction, retention policies, and storage optimization for cost and query performance.
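A date/entity/client partition strategy like the one described typically surfaces as Hive-style key=value path segments. The base path and segment names below are illustrative, not the actual OneLake layout:

```python
from datetime import date

# Sketch of a Hive-style partition path builder for a lakehouse table,
# partitioned by business date, entity type, and client. All names are
# illustrative placeholders.
def partition_path(base, business_date, entity_type, client_id):
    return (
        f"{base}/business_date={business_date.isoformat()}"
        f"/entity_type={entity_type}/client_id={client_id}"
    )

path = partition_path("Tables/gold_positions", date(2024, 3, 1), "position", "C042")
```

Partitioning by low-cardinality keys that match common query filters lets the engine prune whole directories, which is where most of the cost and performance benefit comes from.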

  • Batch Extract Modernization: Migrate existing batch extract processes from SQL-driven CSV to Kafka-sourced Parquet via Fabric pipelines. Retain metadata-driven configuration from CentralHub while outputting to OneLake in Parquet/Delta format.

  • Semantic Layer Development: Build semantic layer definitions for business-friendly metrics (AUM, NAV, trade volumes, fee breakdowns, client counts), ensuring consistent metric definitions across all consumption channels.
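One way to keep a metric consistent across channels is to define it exactly once and have every consumer call the same definition. The AUM calculation below (sum of quantity times market price per position) is a simplified, assumed definition for illustration only; the real governed definition would live in the semantic layer:

```python
# Sketch of a single governed metric definition reused by every consumer,
# so "AUM" means the same thing in dashboards, extracts, and APIs.
# The position fields are illustrative.
def aum(positions):
    """Assets under administration: sum of quantity x market price per position."""
    return sum(p["quantity"] * p["market_price"] for p in positions)

positions = [
    {"quantity": 10, "market_price": 5.0},
    {"quantity": 2, "market_price": 100.0},
]
total = aum(positions)
```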

  • Data Sharing: Implement Fabric Data Sharing using OneLake shortcuts or Delta Sharing for clients who consume analytics in their own Fabric tenant. Ensure governed access where clients see only their own data.

  • Data Quality: Implement data quality checks within the Analytical Warehouse using Great Expectations or Soda. Validate row counts, null rates, referential integrity, and freshness against defined data contracts.
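The row-count, null-rate, and freshness checks mentioned above can be sketched in plain Python without the framework machinery (a tool like Great Expectations or Soda adds declarative configuration, reporting, and scheduling on top of the same logic). Thresholds and the account_id/loaded_at fields are illustrative, not actual contract values:

```python
from datetime import datetime, timedelta, timezone

# Plain-Python sketch of data-contract checks: minimum row count, maximum
# null rate on a key column, and maximum data age. Returns the names of the
# checks that failed.
def check_dataset(rows, min_rows, max_null_rate, max_age, now=None):
    now = now or datetime.now(timezone.utc)
    failures = []
    if len(rows) < min_rows:
        failures.append("row_count")
    nulls = sum(1 for r in rows if r.get("account_id") is None)
    if rows and nulls / len(rows) > max_null_rate:
        failures.append("null_rate")
    newest = max((r["loaded_at"] for r in rows), default=None)
    if newest is None or now - newest > max_age:
        failures.append("freshness")
    return failures

now = datetime(2024, 3, 1, 12, 0, tzinfo=timezone.utc)
rows = [
    {"account_id": "A1", "loaded_at": now - timedelta(minutes=5)},
    {"account_id": None, "loaded_at": now - timedelta(minutes=7)},
]
# Within thresholds: 2 rows, 50% nulls (limit 60%), newest row 5 minutes old
result = check_dataset(rows, min_rows=2, max_null_rate=0.6,
                       max_age=timedelta(hours=1), now=now)
```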

  • Performance Optimization: Tune query performance across Fabric SQL endpoints, optimize Delta table layouts (Z-ordering, partitioning, file sizing), and manage compute resource allocation.

  • CI/CD & DevOps: Implement CI/CD pipelines for Analytical Warehouse artifacts (transformation models, Fabric notebooks, pipeline definitions) using GitHub Actions. Follow GitOps practices for deployment.

Qualifications:

  • Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related technical field.

  • Experience: 4+ years of hands-on experience in data engineering with a focus on analytical/warehouse workloads.

  • Microsoft Fabric / Azure: Demonstrated experience with Microsoft Fabric, Azure Synapse Analytics, or Azure Data Factory. Familiarity with OneLake, Fabric notebooks, and Fabric SQL endpoints.

  • SQL Expertise: Strong SQL skills including complex analytical queries, window functions, CTEs, and query performance tuning.

  • Spark / PySpark: Proficiency in PySpark or Spark SQL for large-scale data transformations.

  • Data Transformation Frameworks: Experience with SQL-based transformation frameworks for managing transformation layers: models, tests, documentation, and incremental materializations.

  • Delta Lake / Parquet: Understanding of the Delta Lake table format: ACID transactions, time travel, schema evolution, partition management, and file compaction.

  • Kafka Fundamentals: Working knowledge of Apache Kafka (consumer concepts, Kafka Connect, Avro serialization) sufficient to build and troubleshoot the ingestion layer from ODS Gold topics.

  • CI/CD: Experience with CI/CD pipelines (GitHub Actions preferred) for data pipeline deployments.

Preferred Qualifications:

  • Experience working in the Wealth Management or Financial Services industry with an understanding of investment data domains (accounts, portfolios, transactions, positions).

  • Experience with the Apache Iceberg table format for time-travel queries and multi-engine access.
  • Familiarity with data quality frameworks such as Great Expectations or Soda integrated into data pipelines.

  • Experience with semantic layer tools for defining governed business metrics.
  • Exposure to data catalog and lineage tools (Purview, Atlan, or similar).
  • Microsoft Fabric certifications or Azure Data Engineer certifications (DP-203) are a plus.

About FNZ

FNZ is committed to opening up wealth so that everyone, everywhere can invest in their future on their terms. We know the foundation to do that already exists in the wealth management industry, but complexity holds firms back.

We created wealth’s growth platform to help. We provide a global, end-to-end wealth management platform that integrates modern technology with business and investment operations. All in a regulated financial institution.

We partner with the world’s leading financial institutions, with over US$2.4 trillion in assets on platform (AoP).

Together with our clients, we empower nearly 30 million people across all wealth segments to invest in their future.
