AI Data Engineer

Job summary

United States
Engineering

Work model

Fully remote
Only EU
Job description

AI Data Engineer (Contract, Remote)

We're looking for an experienced AI Data Engineer to build scalable data pipelines and deploy production-ready machine learning solutions within a modern cloud data platform.

The Role:

You will own end-to-end data and ML workflows, from ingestion and transformation through to model deployment and monitoring, with a focus on reliability and scalability in Snowflake and Python.

What You'll Be Doing:

  • Build and maintain data ingestion pipelines into Snowflake (structured and time-series data)
  • Prepare ML-ready datasets (feature engineering, aggregations, train/test splits)
  • Develop, train, and deploy ML models using Python (scikit-learn, XGBoost, LightGBM)
  • Operationalise ML workflows in Snowflake using Snowpark
  • Write model outputs back to Snowflake for downstream use
  • Monitor pipelines and models, including data quality checks and retraining triggers
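The dataset-preparation and modelling steps above could be sketched roughly as follows. This is an illustrative outline only: the in-memory DataFrame stands in for a table ingested into Snowflake (in practice it would come from a Snowpark session or the Snowflake connector), and the column names and lag/rolling features are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a time-series table already ingested into Snowflake
df = pd.DataFrame({
    "ts": pd.date_range("2024-01-01", periods=500, freq="h"),
    "load": rng.normal(100, 10, 500),
})

# Feature engineering: lag and rolling-window aggregates (hypothetical features)
df["lag_1"] = df["load"].shift(1)
df["roll_24"] = df["load"].rolling(24).mean()
df = df.dropna()

X = df[["lag_1", "roll_24"]]
y = df["load"]

# Chronological split (shuffle=False) rather than a random one,
# since the data is time-series
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False
)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
preds = model.predict(X_test)
print(f"MAE on held-out tail: {mean_absolute_error(y_test, preds):.2f}")
```

In a production pipeline the `preds` series would be written back to a Snowflake table for downstream use, and the held-out metric logged as an input to retraining triggers.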

Technical Environment:

  • Snowflake (SQL, Streams, Tasks, Snowpipe)
  • Python for data engineering and ML (including Snowpark)
  • ML frameworks: scikit-learn, XGBoost, LightGBM
  • Time-series data processing (desirable)
  • Azure Data Lake / Microsoft Fabric (nice to have)
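The Streams/Tasks pattern referenced above is typically used to drive incremental feature refreshes. A minimal DDL sketch, with all object, warehouse, and column names hypothetical:

```sql
-- Capture change rows on the ingested table
CREATE OR REPLACE STREAM raw_events_stream ON TABLE raw_events;

-- Periodically fold new rows into the feature table,
-- but only when the stream actually has data
CREATE OR REPLACE TASK refresh_features
  WAREHOUSE = ml_wh
  SCHEDULE = '60 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('raw_events_stream')
AS
  INSERT INTO feature_table (event_id, ts, value)
  SELECT event_id, ts, value
  FROM raw_events_stream;
```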

Contract Details:

  • Initial contract with strong extension potential
  • Fully remote (candidates must be based in Europe)
  • €300–€325 per day