Systems Programmer - AI Data Pipelines
About The Role
What if your Rust expertise could directly shape the infrastructure powering the next generation of AI models? We're looking for a senior Rust engineer to design and build the high-performance data pipelines, annotation tooling, and evaluation systems that leading AI labs depend on to train and improve their models.
This is a fully remote contract role working on real production systems, not toy projects. You'll collaborate directly with data, research, and engineering teams on infrastructure that matters at scale.
- Organization: Alignerr
- Type: Hourly Contract
- Location: Remote
- Commitment: 20--40 hours/week
What You'll Do
- Design, build, and optimize high-performance systems in Rust supporting AI data pipelines and evaluation workflows
- Develop full-stack tooling and backend services for large-scale data annotation, validation, and quality control
- Improve reliability, performance, and safety across existing Rust codebases
- Collaborate with data, research, and engineering teams to support model training and evaluation workflows
- Identify bottlenecks and edge cases in data and system behavior, and implement scalable fixes
- Participate in synchronous design reviews to iterate on system architecture and implementation decisions
Who You Are
- Native or fluent English speaker with clear written and verbal communication skills
- 3--5+ years of professional experience writing production-grade Rust
- Deep command of Rust lifetimes, ownership mechanics, and idiomatic error handling; you write code that's safe and maintainable by design
- Experienced building I/O-bound data pipelines with robust retry/backoff logic for production environments
- Self-directed and reliable, able to commit 20--40 hours per week and deliver without hand-holding
- You think in systems: performance, fault tolerance, and correctness aren't afterthoughts
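To give a concrete sense of the "robust retry/backoff logic" and idiomatic error handling the role calls for, here is a minimal, illustrative sketch using only the standard library. The function name, parameters, and the flaky-operation example are hypothetical, not part of any actual Alignerr codebase:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry a fallible operation with exponential backoff.
/// `op` is attempted up to `max_attempts` times; the delay between
/// attempts doubles after each failure, starting from `base_delay`.
fn retry_with_backoff<T, E, F>(
    mut op: F,
    max_attempts: u32,
    base_delay: Duration,
) -> Result<T, E>
where
    F: FnMut() -> Result<T, E>,
{
    let mut delay = base_delay;
    let mut attempt = 1;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            // Out of attempts: surface the last error to the caller.
            Err(err) if attempt >= max_attempts => return Err(err),
            Err(_) => {
                sleep(delay);
                delay *= 2; // exponential backoff
                attempt += 1;
            }
        }
    }
}

fn main() {
    // Hypothetical flaky operation: fails twice, then succeeds.
    let mut calls = 0;
    let result = retry_with_backoff(
        || {
            calls += 1;
            if calls < 3 {
                Err("transient failure")
            } else {
                Ok(calls)
            }
        },
        5,
        Duration::from_millis(1),
    );
    assert_eq!(result, Ok(3));
    println!("succeeded after {} attempts", calls);
}
```

In an I/O-bound production pipeline the same shape would typically distinguish retryable errors (timeouts, transient network failures) from permanent ones and add jitter to the delay; those refinements are left out here for brevity.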
Nice to Have
- Prior experience with data annotation, data quality, or evaluation systems
- Familiarity with AI/ML workflows, model training, or benchmarking pipelines
- Experience with distributed systems or developer tooling
- Background working alongside research or ML engineering teams
Why Join Us
- Work on cutting-edge AI infrastructure alongside top research labs and engineering teams
- Fully remote: work from wherever you do your best work
- Meaningful, high-impact work on systems that directly influence model quality at scale
- Freelance autonomy with consistent, structured engagement
- Potential for ongoing collaboration and contract extension as new projects launch