Role Overview
The Data Modeler is responsible for the structural design of the platform's canonical and derived data assets. You will own the Common Data Model (CDM) at the schema level — the dimension, fact, bridge, and reference tables that hold normalized clinical, claims, and demographic data — and the source-to-target mappings (STTMs) that define how source-system data lands in CDM and how CDM data is reshaped into FHIR resources.
This role is hands-on. You will produce schema designs in collaboration with the architect, write STTM specifications that engineers implement against, and work with clinical informatics and source-system experts to ensure the model accurately reflects the data's clinical and operational meaning. Your output is the contract between source data and CDM, and between CDM and downstream consumers.
Key Responsibilities
- Own the schema-level design of the Common Data Model — DIM (patient, person, encounter, provider, organization), FACT (observation, diagnosis, procedure, medication administration, claim line, encounter), BRIDGE (relationship-qualifier-aware), and REF (terminology and crosswalk) tables. Design columns, types, NULL semantics, hash key composition, SCD2 patterns, and partitioning strategies in coordination with the architect
- Develop and maintain Source-to-Target Mappings (STTMs) for every source-system feed — Epic HL7v2 ADT/ORU/ORM, Cerner HL7v2, ambulatory EHR feeds, state HIE CCDA, payer claims CSVs, FHIR ingestion. STTMs specify field-by-field source-path → CDM-column mapping with explicit transformation, NULL handling, and validation rules
- Develop and maintain STTMs for FHIR serialization — for each in-scope US Core 6.1 resource (Patient, Encounter, Condition, Observation, Procedure, Practitioner, Organization, AllergyIntolerance), specify how every FHIR element derives from CDM columns, including cardinality, profile constraints, and terminology bindings
- Design data product schemas — the longitudinal patient mart, population analytics aggregations, risk adjustment marts, HEDIS measure pre-aggregations. Each data product has a defined grain, columns, and refresh semantics
- Author and maintain unit specs at the model level under the spec-driven development framework. Each table has a versioned spec; model changes go through the spec review process before implementation
- Collaborate with data engineers to translate model specs into dbt implementations. Review dbt model code for adherence to the spec, including column naming, hash key construction, SCD2 macro usage, and test coverage
- Participate in source-system data profiling — analyze sample data from each source to identify quality issues, edge cases, and modeling implications before specs are finalized. Profile findings drive STTM design
- Define and enforce reference data (REF) management practices — terminology crosswalks (LOINC, SNOMED, ICD-10, RxNorm, CPT), source-code-to-standard-code mappings, and the SCD2 patterns that govern reference data evolution
- Document data dictionaries, lineage, and the canonical model glossary. Engage with Atlan or equivalent governance tooling to publish model documentation for downstream consumers
- Work with clinical informaticists and source-system experts to validate that the model and STTMs accurately represent clinical reality. STTMs are the contract between engineering and clinical informatics; you own that contract
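To make the field-level granularity expected of an STTM concrete, the sketch below encodes a few HL7v2 PID-segment mappings into patient-dimension columns as data, with per-field transformation and NULL-handling rules. The source paths, target column names, and rules are invented for illustration and are not drawn from the actual model.

```python
from datetime import datetime

# Hypothetical STTM rows: source path -> CDM column, each with an explicit
# transform and NULL rule. Paths and column names are illustrative only.
STTM_PATIENT = [
    {"source": "PID-5.1", "target": "last_name",  "transform": str.strip, "nullable": False},
    {"source": "PID-5.2", "target": "first_name", "transform": str.strip, "nullable": True},
    {"source": "PID-7",   "target": "birth_date",
     "transform": lambda v: datetime.strptime(v, "%Y%m%d").date(), "nullable": True},
]

def apply_sttm(segment_fields, sttm):
    """Map a parsed segment (dict of source path -> raw value) to CDM columns."""
    row = {}
    for rule in sttm:
        raw = segment_fields.get(rule["source"])
        if raw is None or raw == "":
            if not rule["nullable"]:
                raise ValueError(f"required field missing: {rule['source']}")
            row[rule["target"]] = None  # explicit NULL semantics, not silent omission
        else:
            row[rule["target"]] = rule["transform"](raw)
    return row

row = apply_sttm({"PID-5.1": " DOE ", "PID-5.2": "JANE", "PID-7": "19800215"}, STTM_PATIENT)
```

Because the mapping is data rather than code, the same spec document can be reviewed by clinical informatics and implemented by engineering, which is the contract role the STTM plays here.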
Required Skills and Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related quantitative field
- 5+ years of data modeling experience for large-scale data warehouses, data lakes, or data platforms
- Demonstrated ability to design conceptual, logical, and physical data models — dimensional modeling (Kimball), data vault, or hybrid patterns. This project uses a Kimball-influenced canonical model with SCD2 dimensions and per-source row preservation
- Strong SQL proficiency. Experience reading and reviewing dbt SQL models is required; ability to author dbt models is a plus
- Experience modeling for analytical and operational data layers simultaneously — understanding how a normalized canonical model serves both downstream analytics and FHIR API consumers
- Hands-on experience with healthcare data standards — HL7v2 segment-level structure, CCDA document structure, and FHIR R4 resource models. Familiarity with US Core profiles and FHIR Bundle composition
- Experience producing source-to-target mappings (STTMs) at field-level granularity for multi-source data integration projects
- Experience modeling SCD2 patterns and the operational implications of late-arriving data, restatement, and version closure
- Experience with terminology systems used in healthcare — LOINC, SNOMED CT, ICD-10, CPT, RxNorm — and their crosswalk patterns
- Familiarity with Google Cloud Platform data services (Cloud Storage, BigLake, Dataproc) and open table formats. Direct Iceberg experience is a plus
- Strong written communication skills. STTMs and model documentation are read across engineering, QA, clinical, and governance audiences
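For a concrete sense of the SCD2 and hash-key patterns the role references, here is a minimal Python sketch of deterministic hash-key construction and version closure on change detection. The SHA-256-over-pipe-delimited-values scheme, the high-date sentinel, and the column handling are assumptions for illustration, not the project's actual dbt macro.

```python
import hashlib
from datetime import date

HIGH_DATE = date(9999, 12, 31)  # sentinel meaning "currently open version"

def hash_key(*cols):
    """Deterministic surrogate: SHA-256 over pipe-delimited, NULL-safe column values."""
    payload = "|".join("" if c is None else str(c) for c in cols)
    return hashlib.sha256(payload.encode()).hexdigest()

def upsert_scd2(history, incoming, business_key, effective):
    """Close the open version if tracked attributes changed, then append the new one."""
    current = next((r for r in history
                    if r["business_key"] == business_key and r["valid_to"] == HIGH_DATE), None)
    new_hash = hash_key(*(incoming[k] for k in sorted(incoming)))
    if current is not None:
        if current["row_hash"] == new_hash:
            return history  # no attribute change: keep the open version as-is
        current["valid_to"] = effective  # version closure on restatement
    history.append({"business_key": business_key, "row_hash": new_hash,
                    "valid_from": effective, "valid_to": HIGH_DATE, **incoming})
    return history

hist = []
hist = upsert_scd2(hist, {"name": "Dr. A", "npi": "123"}, "prov-1", date(2024, 1, 1))
hist = upsert_scd2(hist, {"name": "Dr. A", "npi": "999"}, "prov-1", date(2024, 6, 1))
```

Sorting the attribute keys before hashing keeps the row hash stable regardless of field order; late-arriving or restated rows reduce to the same close-then-append step against the open version.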
Nice-to-Have Skills
- Direct experience with the Optum Health Data Engine canonical model or comparable healthcare canonical models (OMOP, PCORnet CDM, Sentinel)
- Experience with master data management modeling — patient matching attributes, source-system identifier preservation, ECI lifecycle
- Experience with Atlan or comparable governance and lineage tooling
- Hands-on dbt authorship including macros, tests, and project structure
- Familiarity with FHIR Implementation Guides beyond US Core (CARIN BB, Da Vinci, IPA)
- Experience with data modeling tools (ER/Studio, Erwin, dbdiagram, SqlDBM) for diagram production