Data Engineer - SQL
About The Company
McKesson is an impact-driven, Fortune 10 company that touches virtually every aspect of healthcare. We are known for delivering insights, products, and services that make quality care more accessible and affordable. Here, we focus on the health, happiness, and well-being of you and those we serve - we care.
What you do at McKesson matters. We foster a culture where you can grow, make an impact, and are empowered to bring new ideas. Together, we thrive as we shape the future of health for patients, our communities, and our people. If you want to be part of tomorrow's health today, we want to hear from you.
The Data Engineer - Databricks with CoverMyMeds will support and expand the data platforms that power our commercial data products and analytics offerings, with a focus on building and maintaining scalable data pipelines. This role will contribute to the design and delivery of reliable, reusable data assets that enable both internal teams and external partners to derive value from our data.
You will work across proprietary and third-party data sources to build well-structured, high-quality datasets and pipelines that support commercialization efforts. This role partners closely with Data Systems Analysts, Product, and Analytics teams to translate evolving business concepts into scalable, production-ready data solutions.
- Our preferred candidate resides in the Columbus, OH area to support a hybrid work schedule, but we may consider a well-qualified, fully remote candidate.
This position requires current, unrestricted authorization to work in the United States on a permanent basis. We are unable to support or consider candidates on temporary work authorization, including but not limited to F-1 OPT, STEM OPT, CPT, or any status that requires employer sponsorship now or in the future.
About The Role
The Data Engineer - Databricks is a critical role within our data engineering team, responsible for designing, developing, and maintaining scalable data pipelines that support McKesson's commercial and analytical initiatives. You will work closely with cross-functional teams to understand business needs and translate them into efficient data solutions. Your expertise will help ensure the integrity, performance, and scalability of our data assets, enabling advanced analytics, reporting, and data commercialization efforts.
This role involves hands-on development in cloud-based environments, primarily utilizing Databricks, Snowflake, and related tools. The successful candidate will be instrumental in supporting the growth of our data platform, ensuring high data quality, and fostering innovative data solutions that align with business objectives.
You will also contribute to building reusable data models and pipeline patterns that facilitate future data product expansion. Collaboration with application teams, platform engineers, and business stakeholders is essential to understand data flows and implement robust, efficient pipeline solutions that meet evolving business requirements.
Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent practical experience.
- Typically requires 4+ years of experience in data engineering, analytics engineering, or modern data platform environments.
- Hands-on experience building and maintaining data pipelines in cloud environments such as Databricks, Snowflake, or similar platforms.
- Strong SQL skills with experience transforming data for analytical, reporting, or product use cases.
- Proven ability to integrate data from multiple internal and third-party systems.
- Experience working with structured and semi-structured data in batch and streaming environments.
- Working knowledge of data modeling principles and data quality practices.
- Experience supporting analytics, reporting, or externally facing data use cases.
Responsibilities
- Design and develop data pipelines that integrate proprietary and third-party data sources to support commercial data products and proof-of-concept initiatives.
- Build, optimize, and maintain data transformation pipelines with a focus on scalability, reliability, and performance.
- Work with structured and unstructured data to prepare enhanced datasets for internal stakeholders and external use cases.
- Write SQL and/or utilize cloud-based tools such as Databricks or Snowflake to cleanse, standardize, and transform data aligned with business needs.
- Collaborate with Product, Analytics, and external-facing teams to translate commercialization objectives into scalable data solutions.
- Contribute to the development of data models and reusable pipeline patterns that support future data product expansion.
- Partner with application and platform teams to understand upstream data flows and implement efficient pipeline solutions.
- Monitor and support data quality, pipeline performance, and reliability for production data assets.
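To illustrate the kind of SQL-based cleansing and standardization work described above, here is a minimal, hypothetical sketch. It uses Python's built-in sqlite3 module as a stand-in for a Databricks or Snowflake SQL engine; the table and column names (`raw_claims`, `clean_claims`, `patient_id`, etc.) are illustrative, not actual McKesson schemas.

```python
import sqlite3

# Hypothetical raw feed: inconsistent whitespace and casing, numeric
# values arriving as text, and a duplicated record.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_claims (
    claim_id   TEXT,
    patient_id TEXT,
    amount     TEXT,   -- arrives as text from the source system
    state      TEXT
);
INSERT INTO raw_claims VALUES
    ('C-001', ' p123 ', '100.50', 'oh'),
    ('C-002', 'P456',   '99.99',  'OH'),
    ('C-002', 'P456',   '99.99',  'OH');   -- duplicate row
""")

# Cleanse and standardize: trim whitespace, normalize case,
# cast amounts to numeric, and deduplicate with DISTINCT.
conn.execute("""
CREATE TABLE clean_claims AS
SELECT DISTINCT
    claim_id,
    UPPER(TRIM(patient_id)) AS patient_id,
    CAST(amount AS REAL)    AS amount,
    UPPER(state)            AS state
FROM raw_claims;
""")

rows = conn.execute(
    "SELECT claim_id, patient_id, amount, state "
    "FROM clean_claims ORDER BY claim_id"
).fetchall()
for row in rows:
    print(row)
```

In a production pipeline the same pattern would typically run inside a Databricks notebook or job against Delta tables, with data-quality checks added around each transformation step.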
Benefits
McKesson offers a comprehensive benefits package designed to support our employees' health, well-being, and financial security. Our benefits include competitive health insurance options, dental and vision coverage, retirement plans, paid time off, and wellness programs. We also provide opportunities for professional development and continuous learning to help you grow your skills and advance your career. Our commitment to a healthy work-life balance is reflected in flexible work arrangements and employee assistance programs. As part of our Total Rewards package, we aim to create a supportive environment where our employees can thrive personally and professionally.
Equal Opportunity
McKesson is an Equal Opportunity Employer. We provide equal employment opportunities to all applicants and employees without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability, age, or genetic information.