About the Role
Join remote-first teams at AI labs, tech startups, BPOs, and annotation vendors to write production code, evaluation scripts, and data pipelines that power LLM training and computer vision systems on Rex.zone.
Online coding jobs are remote roles for software engineers, ML coders, and data labeling developers who build, test, and evaluate AI systems. On Rex.zone, applicants contribute code for RLHF pipelines, prompt evaluation, QA evaluation, named entity recognition, computer vision annotation, content safety labeling, and LLM training pipelines. Employers hire this talent to improve training data quality, enforce annotation-guideline compliance, implement evaluation harnesses, and drive model performance, including large language model evaluation. Explore roles, compare employers, and apply directly on Rex.zone for freelance, contract, or full-time opportunities across NLP, computer vision, and safety engineering.

Implement APIs and microservices; author Python/TypeScript SDKs; create dataset validators; build labeling tools; run QA evaluation suites; automate prompt evaluation; design benchmarking for large language model evaluation; maintain CI/CD; document workflows.
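To illustrate the dataset-validator work described above, here is a minimal sketch in Python. The JSONL schema (fields "text", "label", "annotator") and the label set are hypothetical, not a Rex.zone specification:

```python
# Minimal sketch of a dataset validator for annotation records.
# The JSONL schema and label set below are hypothetical examples.
import json

ALLOWED_LABELS = {"safe", "unsafe", "needs_review"}  # example label set

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors for one annotation record."""
    errors = []
    if not record.get("text", "").strip():
        errors.append("empty or missing 'text'")
    if record.get("label") not in ALLOWED_LABELS:
        errors.append(f"unknown label: {record.get('label')!r}")
    if "annotator" not in record:
        errors.append("missing 'annotator'")
    return errors

def validate_jsonl(lines: list[str]) -> dict[int, list[str]]:
    """Validate each JSONL line; return {line_number: errors} for bad rows."""
    failures = {}
    for i, line in enumerate(lines, start=1):
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            failures[i] = ["invalid JSON"]
            continue
        errs = validate_record(record)
        if errs:
            failures[i] = errs
    return failures
```

In practice a validator like this would run in CI against every dataset export, failing the pipeline before bad rows reach training.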
Strong algorithms and data structures; Python, JavaScript/TypeScript, or Go; ML fundamentals (PyTorch/TensorFlow, Hugging Face); data engineering; prompt engineering; familiarity with RLHF, dataset curation, annotation guidelines compliance, content safety policies; Git, Docker, and cloud.
Contribute to RLHF loops, training data quality checks, data labeling pipelines, evaluation harnesses, and model performance improvement using Airflow or Prefect, LangChain, OpenAI or Anthropic APIs, Kubernetes, PostgreSQL, S3, and experiment tracking.
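As a sketch of the evaluation-harness side of this work, the snippet below scores a model on prompt/expected-answer pairs. `model_fn` is a stand-in for a real model API call (e.g. an OpenAI or Anthropic client), and the exact-match metric is just one illustrative scoring choice:

```python
# Minimal sketch of a prompt-evaluation harness.
# `model_fn` stands in for a real model API call; the exact-match
# scoring rule is an illustrative choice, not a fixed standard.
from typing import Callable

def run_eval(model_fn: Callable[[str], str],
             cases: list[tuple[str, str]]) -> dict:
    """Score a model on (prompt, expected) pairs with exact-match accuracy."""
    results = []
    for prompt, expected in cases:
        output = model_fn(prompt)
        results.append({
            "prompt": prompt,
            "output": output,
            "passed": output.strip().lower() == expected.strip().lower(),
        })
    accuracy = sum(r["passed"] for r in results) / len(results)
    return {"accuracy": accuracy, "results": results}
```

Real harnesses typically swap exact match for rubric-based or model-graded scoring and log each run to experiment tracking.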
Roles include remote, contract, freelance, full-time, entry-level, and senior positions across NLP, computer vision, content safety, and LLM training. Flexible schedules, competitive compensation, and global teams on Rex.zone.
Build features for AI labs, fast-moving tech startups, BPOs scaling annotation operations, and specialized annotation vendors; contribute to model evaluation, dataset tooling, red-teaming, and safety policy enforcement.
Typical ranges: contract $25–$80 per hour, freelance task-based payouts, and full-time $60k–$180k depending on seniority, domain (NLP, CV, safety), and location; bonuses for high-impact evaluation and data quality improvements.
Create a profile on Rex.zone, upload your GitHub and portfolio, pass coding challenges, complete sample prompt evaluation or labeling tasks, and match with remote teams seeking online coding professionals.
An online coding job here is a remote-first role where you write software, build data and evaluation pipelines, or develop labeling tools that support AI training, RLHF workflows, and large language model evaluation for employers on Rex.zone.
Openings span NLP, computer vision, content safety, data labeling, prompt evaluation, QA evaluation, named entity recognition, and LLM training pipelines.
Most roles are remote, with some hybrid options. You can filter for remote, contract, freelance, full-time, entry-level, and senior positions on Rex.zone.
Entry-level candidates should know Python or JavaScript and Git; senior roles expect strong system design, ML fundamentals, data engineering, and experience with evaluation harnesses and RLHF.
Expect coding assessments, take-home projects, data validator tasks, prompt evaluation exercises, and QA suites that measure reliability, documentation quality, and model performance improvement impact.
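One common reliability check in these QA suites is inter-annotator agreement. Below is a minimal Cohen's kappa sketch; the label values in the usage example are hypothetical:

```python
# Minimal sketch of an inter-annotator agreement check (Cohen's kappa),
# the kind of reliability metric a QA suite might compute over labels.
from collections import Counter

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Cohen's kappa between two annotators' labels on the same items."""
    assert len(a) == len(b) and a, "need equal-length, non-empty label lists"
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    freq_a, freq_b = Counter(a), Counter(b)
    # expected agreement if the two annotators labeled independently
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n)
              for l in set(freq_a) | set(freq_b))
    if p_e == 1.0:
        return 1.0  # both annotators used a single identical label
    return (p_o - p_e) / (1 - p_e)
```

Kappa near 1.0 indicates strong agreement; values near 0 mean the annotators agree no more than chance, a signal that annotation guidelines need tightening.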
Machine learning experience is not always required. Many product engineering roles focus on APIs and tooling. For ML-heavy roles, familiarity with PyTorch, TensorFlow, Hugging Face, and RLHF is beneficial.
Git, Docker, Python or TypeScript, SQL, cloud services, ML frameworks, labeling platforms, and evaluation tools such as LangChain and experiment tracking.
Contract roles typically pay $25–$80 per hour; freelance tasks are per-deliverable; full-time salaries range from $60k to $180k based on seniority, domain, and location.
Sign up at Rex.zone, complete your profile, link GitHub, share portfolio projects, and submit to openings. You may receive coding or evaluation tasks before interviews.
AI labs, tech startups, BPOs, and annotation vendors use Rex.zone to find developers and evaluators for training data quality, annotation guidelines compliance, and model evaluation.

We believe exceptional intelligence deserves exceptional pay. Our platform consistently offers rates above the industry average, rewarding experts for their true value and real impact on frontier AI. Here, your expertise isn't just appreciated—it's properly compensated.

No office. No commute. No constraints. Our fully remote workflow gives experts complete flexibility to work at their own pace, from any country, any time zone. You focus on meaningful tasks—we handle the rest.

AI trainers are the heart of our company. We treat every expert with trust, humanity, and genuine appreciation. From personalized support to transparent communication, we build long-term relationships rooted in respect and care.
Ready to Shape the Future of Software Development & AI Training?
Apply Now.