About the Role
This role is for remote workers who specialize in creating, labeling, and evaluating high-quality datasets for modern AI systems. You will collaborate with cross-functional teams to ensure compliance with annotation guidelines, run structured evaluations of large language models, and deliver measurable improvements in model performance. Projects include:

- RLHF preference ranking
- Instruction and response evaluation
- Prompt evaluation and red-teaming
- Named entity recognition (NER) for NLP corpora
- Computer vision annotation for detection and segmentation
- Content safety labeling for trust and safety workflows

Remote workers on Rex.zone deliver consistent throughput, maintain data integrity, and use annotation tools to support LLM training pipelines and enterprise AI deployments.



