
Research Engineer – Benchmarking, Evals & Failure Analysis

Mercor
San Francisco, CA · Full Time
POSTED ON 4/11/2026
AVAILABLE BEFORE 5/16/2026
About Mercor

Mercor is defining the future of work. We partner with leading AI labs and enterprises to provide the human intelligence essential to AI development.

Our vast talent network trains frontier AI models in the same way teachers teach students: by sharing knowledge, experience, and context that can't be captured in code alone. Today, more than 30,000 experts in our network collectively earn over $2 million a day.

Mercor is creating a new category of work where expertise powers AI advancement. Achieving this requires an ambitious, fast-moving, and deeply committed team. You’ll work alongside researchers, operators, and AI companies shaping the systems that are redefining society.

Mercor is a profitable Series C company valued at $10 billion. We work in-person five days a week in our new San Francisco headquarters.

About The Role

As a Research Engineer at Mercor, you’ll work at the intersection of engineering and applied AI research. You’ll own benchmarking pipelines, evaluation systems, and failure analysis workflows that directly inform how we train and improve frontier language models.

Your work will define how we measure tool use, agentic behavior, and real-world reasoning. You’ll design and run evals, build rubrics and scorers, and turn failure analysis into actionable improvements for post-training, RLVR, and data pipelines.
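To make the shape of this work concrete, here is a minimal sketch of an end-to-end eval run with a model-as-judge scorer. Everything in it is hypothetical illustration rather than Mercor's actual tooling: `EvalCase`, `model_under_test`, and `judge` are made-up stand-ins, and the judge is a toy string check where a real system would prompt a grader model against a rubric.

```python
import json
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    reference: str  # expected answer consulted by the judge

def model_under_test(prompt: str) -> str:
    # Hypothetical stand-in for a call to the model being evaluated.
    return "Paris is the capital of France."

def judge(response: str, reference: str) -> float:
    # Toy model-as-judge: a real scorer would prompt a grader model
    # with a rubric; a substring check keeps the sketch self-contained.
    return 1.0 if reference.lower() in response.lower() else 0.0

def run_eval(cases: list[EvalCase]) -> dict:
    # One end-to-end run: generate, score, and aggregate for reporting.
    scores = [judge(model_under_test(c.prompt), c.reference) for c in cases]
    return {"n": len(scores), "mean_score": sum(scores) / len(scores)}

if __name__ == "__main__":
    suite = [
        EvalCase("What is the capital of France?", "Paris"),
        EvalCase("What is the capital of Japan?", "Tokyo"),
    ]
    print(json.dumps(run_eval(suite), indent=2))
```

In practice the aggregate would feed dashboards and per-run comparisons; the point here is only the run, score, and report loop.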

What You’ll Do

  • Benchmarking: Design, implement, and maintain benchmarks and metrics for tool use, agentic behavior, and real-world reasoning; ensure benchmarks scale with training and stay aligned with product and research goals.
  • Evaluation systems: Build and operate LLM evaluation systems end-to-end (runs, scoring, dashboards, and reporting) so researchers and applied AI teams can track model performance and compare runs at scale.
  • Failure analysis: Run systematic failure analysis on model outputs (e.g., wrong tool use, reasoning errors, safety/alignment issues); categorize failure modes, quantify prevalence, and feed findings into reward design, data curation, and benchmark design (a toy sketch of prevalence and judge agreement follows this list).
  • Rubrics and evaluators: Create and refine rubrics, automated evaluators, and scoring frameworks that drive training and evaluation decisions; balance rigor with scalability (human vs. model-as-judge, calibration, agreement).
  • Data quality and usability: Quantify data usability, quality, and impact on key benchmarks; use evals and failure analysis to guide data generation, augmentation, and curation.
  • Cross-team collaboration: Work with AI researchers, applied AI teams, and data producers to align evals with training objectives and to prioritize benchmarks and failure analyses that matter most.
  • Ownership in a fast-paced environment: Operate in a high-iteration research setting with strong ownership of benchmarks, evals, and failure-analysis workflows.
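For the failure-analysis and agreement points above, here is a toy illustration, with invented categories and labels rather than real data, of quantifying failure-mode prevalence and checking human vs. model-as-judge agreement with Cohen's kappa:

```python
from collections import Counter

# Paired labels for the same model outputs: one set from human
# annotators, one from a model-as-judge. All data here is invented.
human = ["wrong_tool", "ok", "reasoning", "ok", "reasoning", "ok"]
model = ["wrong_tool", "ok", "ok", "ok", "reasoning", "ok"]

# Prevalence of each category according to the human labels.
n = len(human)
for mode, count in Counter(human).most_common():
    print(f"{mode}: {count}/{n} = {count / n:.0%}")

# Cohen's kappa: observed agreement corrected for chance agreement.
observed = sum(h == m for h, m in zip(human, model)) / n
categories = set(human) | set(model)
expected = sum((human.count(c) / n) * (model.count(c) / n) for c in categories)
kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, kappa = {kappa:.2f}")
```

A low kappa would argue for recalibrating the judge or tightening the rubric before trusting model-as-judge scores at scale.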

What We’re Looking For

  • Strong applied research background, with a focus on model evaluation, benchmarking, and/or failure analysis.
  • Strong coding skills and hands-on experience with ML models and evaluation code.
  • Solid grasp of data structures, algorithms, and backend systems.
  • Comfort with APIs, SQL/NoSQL, and cloud platforms for running and storing eval results.
  • Ability to reason about model behavior, experimental results, and data quality from evals and failure analyses.
  • Excitement to work in person five days a week at our San Francisco office, in a high-intensity, high-ownership environment.

Nice To Have

  • Industry experience on a post-training or evaluation/benchmarking team (highest priority).
  • Publications at top-tier venues (NeurIPS, ICML, ACL), especially in evaluation or benchmarking.
  • Experience building or running LLM evaluations, benchmarks, or failure-analysis pipelines.
  • Experience with synthetic data generation, rubric design, or RL-style workflows that use evals for reward shaping.
  • Work samples or code (e.g., eval frameworks, benchmark suites, failure-analysis reports or tooling) that demonstrate relevant skills.

Benefits

  • Generous equity grant vested over 4 years
  • A $10K housing bonus (if you live within 0.5 miles of our office)
  • A $1.5K monthly stipend for meals
  • Free Equinox membership
  • Health insurance

Compensation Range: $130K - $500K
