Software Engineer (C#) --- Internal Tooling (AI Infrastructure)
About the Role
What if your C# skills could directly shape the infrastructure powering next-generation AI models? We're looking for a Senior C# Full-Stack Engineer to build and optimize the data pipelines, annotation tooling, and evaluation systems that leading AI labs depend on every day.
This is a fully remote contract role working on real production systems --- not toy projects or demos. You'll collaborate with research and engineering teams at the frontier of AI development, solving hard infrastructure problems that have genuine impact on how the world's most advanced models are built and evaluated.
- Organization: Alignerr
- Type: Hourly Contract
- Location: Remote
- Commitment: 20--40 hours/week
What You'll Do
- Design, build, and optimize high-performance C# systems supporting AI data pipelines and model evaluation workflows
- Develop full-stack tooling and backend services for large-scale data annotation, validation, and quality control
- Improve reliability, performance, and safety across existing C# codebases used in production AI research environments
- Collaborate with data scientists, researchers, and engineers to support model training and evaluation infrastructure
- Identify bottlenecks and edge cases in data and system behavior --- then implement scalable, robust fixes
- Participate in synchronous design reviews to iterate on architecture and implementation decisions
Who You Are
- Native or fluent English speaker with clear written and verbal communication skills
- Full-stack developer with a strong systems programming background
- 3--5+ years of professional experience writing production-grade C#
- Experienced in interoperability scenarios --- such as invoking Python ML models from .NET or wrapping native libraries
- Skilled at designing robust harnesses for benchmarking and evaluating system performance
- Able to commit 20--40 hours per week with reliability and consistency
Nice to Have
- Prior experience with data annotation, data quality, or evaluation systems
- Familiarity with AI/ML workflows, model training pipelines, or benchmarking frameworks
- Experience with distributed systems or developer tooling at scale
Why Join Us
- Work directly with leading AI labs on cutting-edge production systems --- not side projects
- Fully remote and flexible --- structure your work around your life
- Freelance autonomy combined with substantive, high-impact engineering work
- Gain deep exposure to AI infrastructure and evaluation workflows at the frontier of the field
- Potential for ongoing contract work and expanded scope as projects evolve