About These Roles
Our fully remote roles span the data and evaluation stack: RLHF raters who compare and critique model outputs; data annotators who tag text, images, audio, and video; QA analysts who validate edge cases and adversarial prompts; and project leads who ensure compliance with annotation guidelines. You’ll work asynchronously across time zones, using secure tooling to maintain training-data quality and drive performance improvements across LLMs, speech recognition, and computer vision systems.



