Online STEM Jobs in India (Remote, Full Time)

Online STEM jobs in India on Rex.zone focus on remote engineering work that supports modern AI/ML products and large language model training pipelines. In this role, you will contribute to applied STEM workflows such as RLHF evaluation, prompt evaluation, training data quality checks, annotation guidelines compliance, and model performance improvement. You will collaborate with distributed teams, follow QA evaluation processes, and help ensure reliable outcomes across NLP, computer vision, and content safety labeling tasks. Explore full-time remote opportunities aligned to STEM skills and measurable delivery standards, and apply through Rex.zone.

LinkedIn Job Metadata: Title: Online STEM Jobs in India (Remote, Full Time) | Date: 25-02-2026 | Company: Rexzone | Country: US | Remote Type: Remote | Employment Type: FULL_TIME | Experience Level: Mid-Senior | Industry: Technology | Job Function: Engineering | Skills: STEM engineering, Python, SQL, data analysis, machine learning, NLP, computer vision, RLHF evaluation, prompt evaluation, QA evaluation, data labeling, named entity recognition, content safety labeling, annotation guidelines compliance, training data quality, LLM training pipelines | Salary Currency: USD | Salary Min: 63360 | Salary Max: 126720 | Pay Period: YEAR

About the Role

You will work on remote STEM-aligned engineering tasks that connect real-world product requirements to AI/ML training workflows. Typical work includes evaluating model outputs, improving training data quality, performing QA evaluation against annotation guidelines, and contributing to RLHF and prompt evaluation initiatives. You may support NLP, computer vision annotation, named entity recognition, and content safety labeling projects depending on client needs on Rex.zone.

What You Will Do

Responsibilities include:

  • Execute structured evaluations of LLM responses for accuracy, safety, and helpfulness.
  • Perform RLHF-style ranking and preference judgments to improve model performance.
  • Validate data labeling outputs and enforce annotation guidelines compliance.
  • Perform QA evaluation and error analysis to identify systematic issues.
  • Support NLP tasks such as named entity recognition and taxonomy alignment.
  • Support CV tasks such as bounding boxes, segmentation, and attribute labeling when required.
  • Document decisions, edge cases, and feedback loops that improve large language model evaluation consistency.
  • Collaborate asynchronously with remote stakeholders and meet delivery SLAs.
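
The RLHF-style ranking work above can be sketched as a simple win-rate aggregation over pairwise preference judgments. This is a minimal illustration, not a Rex.zone tool: the model names and the tuple format are hypothetical.

```python
from collections import defaultdict

def preference_win_rates(judgments):
    """Compute each model's win rate from pairwise preference judgments.

    judgments: list of (model_a, model_b, winner) tuples, where winner
    is one of the two compared models (ties omitted for simplicity).
    """
    wins = defaultdict(int)
    comparisons = defaultdict(int)
    for model_a, model_b, winner in judgments:
        comparisons[model_a] += 1
        comparisons[model_b] += 1
        wins[winner] += 1
    # Win rate = wins divided by total comparisons the model appeared in.
    return {m: wins[m] / comparisons[m] for m in comparisons}

judgments = [
    ("model_v1", "model_v2", "model_v2"),
    ("model_v1", "model_v2", "model_v2"),
    ("model_v1", "model_v2", "model_v1"),
]
print(preference_win_rates(judgments))  # model_v2 win rate: 2/3
```

In practice such signals feed reward-model training or release decisions, but the aggregation step itself is this simple counting exercise.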

Required Qualifications

Requirements include:

  • Mid-senior experience in engineering, data, analytics, or applied ML workflows.
  • Strong reasoning and written communication for prompt evaluation and rubric-based scoring.
  • Practical ability with Python and SQL for analysis, sampling, and QA checks.
  • Experience with QA evaluation, test case design, or data quality review.
  • Familiarity with ML concepts, model performance improvement practices, and dataset iteration cycles.
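
As a sense of the Python skills involved, audit sampling for QA review can be as small as the sketch below. The record format and sampling rate are illustrative assumptions, not a prescribed workflow.

```python
import random

def audit_sample(records, rate=0.1, seed=42):
    """Draw a reproducible random audit sample of labeled records.

    rate: fraction of records to pull for manual QA review (at least 1).
    seed: fixed so that auditors re-running the script get the same sample.
    """
    rng = random.Random(seed)
    k = max(1, round(len(records) * rate))
    return rng.sample(records, k)

# Hypothetical NER labels queued for spot-checking.
labels = [{"id": i, "label": "PERSON" if i % 2 else "ORG"} for i in range(100)]
sample = audit_sample(labels, rate=0.05)
print(len(sample))  # 5 records selected for review
```

The same selection could equally be done in SQL (e.g. ordering by a hashed id and taking the top N); the point is reproducible, unbiased sampling.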

Preferred Qualifications

Preferred:

  • Experience with RLHF evaluation, human preference data, or LLM evaluation harnesses.
  • Hands-on exposure to data labeling platforms and reviewer workflows.
  • NLP experience (NER, classification, summarization evaluation) and/or CV annotation experience.
  • Knowledge of content safety labeling and policy-based evaluation.
  • Experience producing clear annotation guidelines and resolving edge cases.

Tools and Workflows

You will use common remote engineering tooling and structured review processes. Work may include rubric-based scoring, golden set validation, inter-annotator agreement checks, audit sampling, and escalation workflows. Deliverables typically connect training data quality to measurable model performance improvement within LLM training pipelines.
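
One of the checks named above, inter-annotator agreement, is commonly measured with Cohen's kappa. A minimal sketch for two annotators over the same items (label values are hypothetical):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each annotator's
    marginal label frequencies. 1.0 = perfect agreement, 0.0 = chance.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

print(cohens_kappa(["safe", "unsafe", "safe"], ["safe", "unsafe", "safe"]))  # 1.0
```

Production labeling platforms typically compute this (or Krippendorff's alpha for more than two annotators) automatically; the formula is shown only to make the quality bar concrete.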

Why Rex.zone

Rex.zone connects remote STEM professionals to full-time roles across AI labs, tech startups, and annotation vendors. You will work in a remote-first environment with standardized evaluation frameworks, clear quality bars, and project-based opportunities spanning NLP, computer vision, and content safety labeling.

Compensation and Employment Details

This is a full-time remote role. Compensation is offered in USD with an annual range of $63,360 to $126,720, depending on scope, skills alignment, and evaluation performance.

How to Apply

Apply via Rex.zone with a resume highlighting STEM projects, Python/SQL experience, QA evaluation or review workflows, and any exposure to RLHF, data labeling, prompt evaluation, NLP, computer vision, or content safety labeling.

Frequently Asked Questions

  • Q: Are these online STEM jobs in India remote?

    Yes. The role is explicitly Remote and is designed for distributed collaboration and asynchronous delivery.

  • Q: What kind of STEM work is included?

    Work commonly includes engineering analysis, QA evaluation, prompt evaluation, RLHF evaluation, training data quality reviews, and support for NLP, computer vision, named entity recognition, and content safety labeling tasks within LLM training pipelines.

  • Q: Is this role full-time or contract/freelance?

    This posting is for FULL_TIME. Rex.zone may also host contract or freelance roles, but this job’s employment type remains full-time.

  • Q: What experience level is expected?

    Mid-Senior experience is expected, including comfort with structured evaluation, quality assurance processes, and technical collaboration.

  • Q: Which skills matter most for ranking and evaluation work?

    Key skills include Python, SQL, data analysis, QA evaluation, annotation guidelines compliance, error analysis, and familiarity with RLHF, prompt evaluation, and large language model evaluation methods.

  • Q: Does the role involve data labeling?

    It can. Depending on the project, you may validate or review data labeling outputs and perform QA evaluation to ensure training data quality.

  • Q: What domains might I work in?

    Projects may span NLP, computer vision, and content safety labeling, including named entity recognition and rubric-based large language model evaluation.

  • Q: How does this work improve AI systems?

    By improving training data quality, enforcing annotation guidelines compliance, and providing RLHF and prompt evaluation signals, your work helps drive model performance improvement in LLM training pipelines.

230+ Domains Covered
120K+ PhDs, Specialists, and Experts Onboarded
50+ Countries Represented

Industry-Leading Compensation

We believe exceptional intelligence deserves exceptional pay. Our platform consistently offers rates above the industry average, rewarding experts for their true value and real impact on frontier AI. Here, your expertise isn't just appreciated—it's properly compensated.

Work Remotely, Work Freely

No office. No commute. No constraints. Our fully remote workflow gives experts complete flexibility to work at their own pace, from any country, any time zone. You focus on meaningful tasks—we handle the rest.

Respect at the Core of Everything

AI trainers are the heart of our company. We treat every expert with trust, humanity, and genuine appreciation. From personalized support to transparent communication, we build long-term relationships rooted in respect and care.

Ready to Shape the Future of Engineering?

Apply Now.