Research Scientist, Vision-Language-Action Models

Matter
Sunnyvale, CA | Full Time
POSTED ON 4/13/2026
AVAILABLE BEFORE 10/5/2026

ABOUT MATTER

Matter is building the AI-native autonomy stack for physical manufacturing in the United States. We operate as a contract manufacturer, deploying software and autonomy in our own factories, which gives us something most AI companies don’t have: a live production environment as a training ground. Our long-term vision is to become the infrastructure layer for American manufacturing, the way AWS became infrastructure for software.

THE ROLE

We are hiring a Research Scientist to lead the development and deployment of Vision-Language-Action (VLA) models for robotic manipulation in live manufacturing work cells. This is not a lab role. You will train models, close the Sim2Real loop, and deploy them on physical robots running production programs.

 

Matter’s Sim2Real pipeline spans NVIDIA Isaac Sim, physics-accurate virtual builds of our modular assembly equipment, and first-party data collection from real factory operations. You will operate at the center of this flywheel, improving models with every production run.

 

WHAT YOU’LL DO

•   Develop and fine-tune VLA models for precision assembly tasks, including dexterous manipulation, part handling, and test operations

•   Design and manage the Sim2Real training pipeline: domain randomization, synthetic data generation, physics simulation (NVIDIA Isaac Sim, MuJoCo), and sim-to-physical transfer

•   Build evaluation frameworks to benchmark real-world manipulation performance against manufacturing tolerances and repeatability requirements

•   Collaborate with controls and automation engineers to fuse learned policies with traditional control architectures for production safety

•   Contribute to the Physical AI architecture decisions: model selection, data strategy, training infrastructure, and deployment protocols

•   Publish novel research at top-tier conferences, though shipping production systems is the primary measure of success

 

WHAT WE’RE LOOKING FOR

•   PhD or equivalent graduate-level research depth in robotics, machine learning, or a related field

•   Hands-on experience training and deploying VLA, VLM, or generalist robot policies on physical hardware (not just simulation)

•   Strong foundation in imitation learning, reinforcement learning, and general machine learning methods

•   Proficiency in PyTorch; experience with NVIDIA Isaac Sim, MuJoCo, or similar physics engines

•   Ability to debug the full stack: model architecture, training data quality, sim calibration, sensor noise, and hardware edge cases

•   Comfort operating in a high-velocity, ambiguous environment where you own systems end-to-end

 

NICE TO HAVE

•   Experience with multi-agent reinforcement learning (MARL) or multi-robot coordination

•   Background in manufacturing, industrial automation, or robotic assembly

 

WHY MATTER

Most VLA research is validated in a lab or on a tabletop. At Matter, your models run on a production factory floor, handling real parts for real customers. The feedback loop is immediate and grounded. The training data is yours because the factory is yours. No one else in this space has that combination at the stage we’re at.
