About the Role
These positions apply mathematical rigor to building and evaluating AI systems. You will design sampling plans, verify training data quality, calculate inter-annotator agreement, and optimize labeling throughput. Roles span NLP, computer vision, and content safety, connecting statistical validation directly to model performance improvement and large language model evaluation. You will work with AI labs, tech startups, BPOs, and annotation vendors through Rex.zone.
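To give a flavor of the statistical checks involved, here is a minimal sketch of one common inter-annotator agreement metric, Cohen's kappa, for two annotators labeling the same items. The function name and example labels are hypothetical and not tied to any specific project at Rex.zone.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same set of items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n) for label in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Example: two annotators labeling five items for content safety.
print(cohens_kappa(["toxic", "safe", "safe", "toxic", "safe"],
                   ["toxic", "safe", "toxic", "toxic", "safe"]))  # ~0.615
```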
