Key Responsibilities
- Execute consistent data labeling across NLP, computer vision, and multimodal tasks.
- Apply RLHF criteria to score and rank model outputs.
- Perform QA evaluation, spot checks, and inter-annotator agreement reviews (see the illustrative sketch after this list).
- Follow annotation guidelines and taxonomies with high precision.
- Conduct prompt evaluation for LLMs and document qualitative feedback.
- Identify edge cases, bias, and ambiguity.
- Flag content safety risks and enforce policy.
- Improve training data quality through audits, error analysis, and feedback.
- Collaborate with leads to refine annotation guidelines.
- Meet throughput and quality SLAs while working remotely.
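For context on the inter-annotator agreement reviews mentioned above: agreement between two annotators is commonly quantified with Cohen's kappa. The sketch below is purely illustrative and not part of the role's tooling; the function name and the example "safe"/"unsafe" labels are hypothetical, and it assumes two annotators labeling the same items with nominal (categorical) labels.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items (nominal labels)."""
    assert labels_a and len(labels_a) == len(labels_b), "need paired, non-empty label lists"
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators chose the same label.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
    if expected == 1.0:
        return 1.0  # both annotators used a single identical label throughout
    return (observed - expected) / (1.0 - expected)

# Hypothetical example: two annotators labeling the same ten model outputs.
ann_a = ["safe", "safe", "unsafe", "safe", "unsafe", "safe", "safe", "unsafe", "safe", "safe"]
ann_b = ["safe", "unsafe", "unsafe", "safe", "unsafe", "safe", "safe", "safe", "safe", "safe"]
print(f"kappa = {cohens_kappa(ann_a, ann_b):.2f}")  # ~0.47 here; values near 1 indicate strong agreement
```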



