Staff Data Engineer

Job summary

Location: United States
Work model: Fully remote (United States only)
Job description

About Interwell Health

Interwell Health is a kidney care management company dedicated to partnering with physicians and reimagining healthcare. Our mission is to set the industry standard with expertise, scale, compassion, and vision to help patients live their best lives. If our mission resonates with you, join us in making a difference!

The Role

Reporting to the Director of Data Engineering, the Staff Data Engineer is a senior technical leader responsible for shaping, scaling, and governing our modern data ecosystem. This role combines architecture, hands-on engineering, platform leadership, and cross-functional partnership to deliver high-quality data products that drive clinical, operational, financial, and analytical outcomes. Essential qualifications include deep experience with Databricks, Python, dbt, and Microsoft Fabric, along with strong fluency in healthcare data and compliance standards. Your core responsibility will be to collaborate with teams across the organization to deliver governed, high-quality, analytics-ready data at scale.

Our Tech Stack

  • Databricks
  • Delta Lake
  • Unity Catalog
  • Microsoft Fabric (OneLake, Lakehouse, Data Factory)
  • Azure
  • dbt
  • Python
  • PySpark
  • Spark SQL

What You'll Do

Architecture & Strategy

  • Design and evolve a scalable, secure, cloud-native lakehouse platform leveraging Databricks, Microsoft Fabric (OneLake, Lakehouse, Data Factory), and dbt.
  • Define modeling patterns, governance frameworks, and engineering best practices across the data lifecycle.
  • Lead design reviews and guide teams in adopting scalable architectural patterns.
  • Drive long-term platform strategy and evaluate emerging technologies.

Hands-on Engineering

  • Design and implement batch and streaming data pipelines for healthcare data sources (EHR, claims, HL7/FHIR, APIs, flat files, databases).
  • Develop modular ingestion, quality, lineage, metadata, and observability frameworks that scale across domains.
  • Produce clean, analytics-ready datasets and data models for BI, analytics, and machine learning workloads.
  • Implement HIPAA-aligned access patterns and secure handling of PHI.
  • Architect Databricks workloads (clusters, jobs, Unity Catalog, Delta Lake) for reliability, performance, and cost efficiency.
  • Integrate Databricks and Microsoft Fabric with Azure services and enterprise systems.

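To give a concrete flavor of the pipeline work described above, here is a minimal, illustrative PySpark/Delta Lake sketch of batch ingestion with a simple quality gate. It is not Interwell Health code; every path, table, and column name (claim_id, bronze.claims, and so on) is hypothetical.

    # Illustrative sketch only -- all paths, tables, and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("claims-ingest-example").getOrCreate()

    # Read a hypothetical flat-file claims extract from a landing zone.
    raw = spark.read.option("header", "true").csv("/landing/claims/")

    # Simple quality gate: rows without a claim identifier are quarantined
    # rather than silently dropped, preserving them for later review.
    valid = raw.filter(F.col("claim_id").isNotNull())
    rejected = raw.filter(F.col("claim_id").isNull())

    # Append to governed Delta tables (registered in Unity Catalog
    # in a real Databricks deployment).
    valid.write.format("delta").mode("append").saveAsTable("bronze.claims")
    rejected.write.format("delta").mode("append").saveAsTable("bronze.claims_rejected")

In practice, this pattern would be layered with schema enforcement, PHI access controls, lineage, and observability, which are exactly the frameworks this role is expected to build.
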
Leadership & Collaboration

  • Partner with product managers, data scientists, analysts, clinicians, and business stakeholders to translate healthcare data needs into scalable solutions.
  • Lead cross-functional initiatives to modernize and unify the organization's data ecosystem.
  • Mentor senior and mid-level engineers, elevating team capability through technical coaching and standards.
  • Drive roadmap planning, platform evolution, and long-term data strategy.
  • Champion engineering excellence, reliability practices, documentation quality, and governance.

What You'll Need

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 7+ years of experience in data engineering.
  • 2+ years operating in a senior or staff-level engineering role.
  • Deep hands-on proficiency with Databricks, Spark, Delta Lake, dbt, and Python.
  • Proven ability to design and operate large-scale cloud data platforms (Azure preferred).
  • Hands-on experience with Microsoft Fabric workloads, including Data Engineering, Data Factory, Lakehouse, and OneLake.
  • Advanced data platform architecture and Lakehouse design expertise.
  • Demonstrated ability to design modular, extensible frameworks and guide the long-term evolution of enterprise data platforms.
  • Strong command of distributed data processing and cloud-native engineering.
  • Experience working in HIPAA-regulated environments and handling PHI.
  • Healthcare data fluency, including regulated data handling and compliance.
  • Technical leadership, mentorship, and influence across teams.
  • Strong communication skills with both technical and clinical stakeholders.
  • Experience with platform reliability, CI/CD for data pipelines, and infrastructure as code.
  • 100% remote (ET or CT work hours preferred).

Preferred Qualifications

  • Experience in implementing and supporting Epic integrations, leveraging Cogito Cloud and Caboodle data models, and delivering reliable incremental data pipelines from Caboodle/Clarity.

Our Mission and Values

Our mission is to reinvent healthcare to help patients live their best lives. We live our mission-driven values:

  • We care deeply about the people we serve.
  • We are better when we work together.
  • Humility is a source of our strength.
  • We bring joy to our work.
  • We deliver on our promises.

Commitment to Diversity, Equity, and Inclusion

We are committed to diversity, equity, and inclusion throughout our recruiting practices. Everyone is welcome and included. We value our differences and learn from each other. Our team members come in all shapes, colors, and sizes. No matter how you identify, and whatever your lifestyle, creed, or fandom, we value everyone's unique journey.

We encourage you to apply even if you don't meet every single requirement. We'd love to consider your application!

Important Notice Regarding Recruitment Scams

It has come to our attention that some individuals or organizations are reaching out to job seekers with fraudulent employment offers. These offers are not associated with Interwell Health. Please note that we will never extend a job offer without prior communication with our recruiting team and hiring managers, and without a formal interview process.

Learn more at www.interwellhealth.com.