[Remote] Staff Data Engineer

Job summary

  • Location: United States (US-based candidates only)
  • Team: Engineering
  • Work model: Fully remote

Job description

About Velocity Tech

Velocity Tech is a forward-thinking talent acquisition partner specialising in recruiting strategies for AI/Data, Software Engineering, Cloud & Infrastructure, and Security. Founded in 2022, the company is headquartered in London, England, and has a workforce of 2-10 employees. Its website is https://www.velocity-tech.co/.

Company H-1B sponsorship

Velocity Tech has a track record of offering H-1B sponsorship: 6 visas in 2025, 8 in 2024, and 1 in 2023. Please note that this does not guarantee sponsorship for this specific role.

Job Overview

Velocity Tech is an innovative, product-led organisation focused on building data-driven systems that power behavioural insights and machine learning at scale. It is seeking a Staff Data Engineer to shape and deliver a GCP-native data platform that enables rapid experimentation and high-quality data products. The role involves designing scalable infrastructure and delivering impactful data products to support advanced analytics and machine learning models. This is a fully remote role open to candidates based in the USA.

Responsibilities

  • Design and implement scalable, GCP-native data architectures to support machine learning and analytics
  • Build high-quality data products that enable continuous learning from predictions and user behaviour
  • Develop data pipelines and ingestion strategies using tools such as Pub/Sub, Datastream, and Dataflow (see the illustrative sketch after this list)
  • Enable personalisation and recommendation systems through rich, contextual data
  • Drive rapid iteration by delivering modular, reusable data solutions
  • Collaborate with engineering, product, and data science teams to define and evolve data capabilities
  • Establish metrics and frameworks to measure algorithm performance and continuous improvement
  • Champion best practices across data modelling, governance, and architecture within the squad
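
To give a concrete sense of the pipeline work described above, here is a minimal, illustrative sketch of a streaming ingestion job using the Apache Beam Python SDK (runnable on Dataflow). All project, topic, and table names are hypothetical and not part of this posting.

    # Illustrative only: minimal streaming ingestion, Pub/Sub -> BigQuery.
    # "example-project", the topic, and the table are hypothetical names.
    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions


    def run():
        # streaming=True because Pub/Sub is an unbounded source
        options = PipelineOptions(streaming=True)
        with beam.Pipeline(options=options) as pipeline:
            (
                pipeline
                | "ReadEvents" >> beam.io.ReadFromPubSub(
                    topic="projects/example-project/topics/user-events")
                # Messages arrive as bytes; decode and parse each as JSON
                | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
                # Append rows to an existing BigQuery table whose schema
                # matches the parsed event dictionaries (an assumption here)
                | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                    "example-project:analytics.user_events",
                    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                    create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
            )


    if __name__ == "__main__":
        run()

In practice a production pipeline would add windowing, dead-letter handling, and schema management, but the shape above is the core of the Pub/Sub-to-warehouse pattern the role calls for.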

Skills

  • Strong background in data engineering, building and scaling production-grade data systems
  • Expert-level proficiency in Python and SQL for data transformation and analysis (see the short example after this list)
  • Deep experience with the GCP data ecosystem, including BigQuery, Dataform, Cloud Dataflow, and Composer (Airflow)
  • Experience designing data products (not just pipelines) with a focus on long-term value and domain ownership
  • Strong understanding of data architecture, including transactional systems, operational data stores (ODS), and data warehouses
  • Experience with real-time and high-throughput data ingestion (e.g. Pub/Sub, CDC pipelines)
  • Ability to navigate ambiguity, make decisions, and operate autonomously within a squad model
  • Proven experience collaborating across cross-functional teams in fast-paced environments
  • Experience with recommendation systems or personalisation engines
  • Exposure to data governance and tooling (e.g. Dataplex)
  • Experience modernising legacy data systems and optimising complex SQL logic
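
As a rough illustration of the Python-plus-SQL work listed above, the sketch below runs a parameterised BigQuery transformation from Python using the google-cloud-bigquery client. The project, dataset, and column names are hypothetical.

    # Illustrative only: a parameterised BigQuery query driven from Python.
    # Assumes application-default credentials and a hypothetical events table.
    from google.cloud import bigquery

    client = bigquery.Client()

    query = """
        SELECT user_id,
               COUNT(*) AS event_count,
               MAX(event_ts) AS last_seen
        FROM `example-project.analytics.user_events`
        WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL @days DAY)
        GROUP BY user_id
    """
    # Named query parameters avoid string interpolation and SQL injection
    job_config = bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("days", "INT64", 7)]
    )

    for row in client.query(query, job_config=job_config).result():
        print(row.user_id, row.event_count, row.last_seen)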