C# Infrastructure Engineer - Data Pipelines (AI Training)

Job summary

Seattle · Engineering
Fully remote · Worldwide

Job description

About The Role

What if your C# skills could directly shape the infrastructure powering next-generation AI systems used by millions of people? We're looking for a Senior C# Full-Stack Engineer to design and build the high-performance data pipelines, annotation tooling, and evaluation systems that sit at the heart of cutting-edge AI development.

This is a fully remote, flexible contract role working alongside leading AI labs on real production systems and high-impact engineering challenges. If you're a senior engineer who thrives on performance, scalability, and building things that actually matter --- this is the role for you.

  • Organization: Alignerr
  • Type: Hourly Contract
  • Location: Remote
  • Commitment: 20--40 hours/week

What You'll Do

  • Design, build, and optimize high-performance C# systems supporting AI data pipelines and evaluation workflows
  • Develop full-stack tooling and backend services for large-scale data annotation, validation, and quality control
  • Improve reliability, performance, and resilience across existing C# codebases
  • Collaborate with data, research, and engineering teams to support model training and evaluation workflows
  • Identify bottlenecks and edge cases in data and system behavior, and implement scalable, production-ready fixes
  • Participate in synchronous design reviews to iterate on architecture and implementation decisions
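To give a flavor of the pipeline work described above, here is a minimal sketch of a streaming validation stage built on C# async streams. The `Annotation` record, the `FilterValid` method, and the confidence-threshold rule are all hypothetical illustrations, not taken from any actual Alignerr codebase:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical annotation record, for illustration only.
public record Annotation(string Id, string Label, double Confidence);

public static class ValidationStage
{
    // Streams records through a quality gate without buffering the
    // whole dataset in memory -- the core idea behind async-stream
    // pipeline stages.
    public static async IAsyncEnumerable<Annotation> FilterValid(
        IAsyncEnumerable<Annotation> source,
        double minConfidence,
        [EnumeratorCancellation] CancellationToken ct = default)
    {
        await foreach (var record in source.WithCancellation(ct))
        {
            // Example rule: drop unlabeled or low-confidence annotations.
            if (!string.IsNullOrWhiteSpace(record.Label)
                && record.Confidence >= minConfidence)
            {
                yield return record;
            }
        }
    }
}
```

Because each stage both consumes and produces `IAsyncEnumerable<Annotation>`, stages compose: a downstream consumer simply iterates the result with another `await foreach`.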

Who You Are

  • Native or fluent English speaker with strong written and verbal communication skills
  • Full-stack developer with a deep systems programming background
  • 5+ years of professional experience writing production-grade C#
  • Proven experience building streaming data pipelines using asynchronous streams and reactive programming patterns
  • Skilled at optimizing I/O-bound operations and implementing resilient retry and fault-tolerance strategies for distributed data ingestion
  • Able to commit 20--40 hours per week with reliability and consistency
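For context on the retry and fault-tolerance bullet above, a minimal exponential-backoff retry around an I/O-bound fetch might look like the sketch below. This is plain C# for illustration (the method name and the default attempt count and delay are assumptions); in production, libraries such as Polly provide hardened versions of this pattern:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class ResilientFetch
{
    // Retries a fetch that may fail transiently, doubling the delay
    // between attempts (250ms, 500ms, 1s, ...). Defaults are
    // illustrative, not a recommendation from this posting.
    public static async Task<string> GetWithRetryAsync(
        HttpClient client,
        string url,
        int maxAttempts = 4,
        CancellationToken ct = default)
    {
        var baseDelay = TimeSpan.FromMilliseconds(250);
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                using var response = await client.GetAsync(url, ct);
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync(ct);
            }
            catch (HttpRequestException) when (attempt < maxAttempts)
            {
                // Transient failure: back off before the next try.
                var delay = baseDelay * Math.Pow(2, attempt - 1);
                await Task.Delay(delay, ct);
            }
        }
    }
}
```

The `when` filter keeps the final failure observable: once `maxAttempts` is exhausted, the exception propagates to the caller instead of being swallowed.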

Nice to Have

  • Prior experience with data annotation, data quality pipelines, or model evaluation systems
  • Familiarity with AI/ML workflows, model training, or benchmarking infrastructure
  • Experience with distributed systems architecture or developer tooling

Why Join Us

  • Work on cutting-edge AI projects alongside leading research labs
  • Fully remote and flexible --- work when and where it suits you
  • Freelance autonomy with the structure and impact of meaningful, production-level work
  • Make a direct, tangible contribution to the infrastructure powering next-generation AI
  • Potential for ongoing work and contract extension as new projects launch