
Research Engineer, Infrastructure, Inference

Thinking Machines Lab
San Francisco, CA · Full Time
Posted on 11/28/2025 · Available before 1/27/2026

Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals. 

We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

We’re looking for an infrastructure research engineer to design, optimize, and scale the systems that power large AI models. Your work will make inference faster, more cost-effective, more reliable, and more reproducible, enabling our teams to focus on advancing model capabilities rather than managing bottlenecks.

Our focus is on performant and efficient model inference both to power real-world applications and to accelerate research. This role is responsible for the infrastructure that ensures every experiment, evaluation, and deployment runs smoothly at scale.

Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We also put up postings for individual roles tied to specific project or team needs; in those cases, you're welcome to apply directly in addition to an evergreen role.

What You’ll Do

  • Work alongside researchers and engineers to bring cutting-edge AI models into production.
  • Collaborate with research teams to enable high-performance inference for novel architectures.
  • Design and implement new techniques, tools, and architectures that improve performance, latency, throughput, and efficiency.
  • Optimize our codebase and compute fleet (e.g., GPUs) to fully utilize hardware FLOPs, bandwidth, and memory.
  • Extend orchestration frameworks (e.g., Kubernetes, Ray, SLURM) for distributed inference, evaluation, and large-batch serving.
  • Establish standards for reliability, observability, and reproducibility across the inference stack.
  • Publish and share learnings through internal documentation, open-source libraries, or technical reports that advance the field of scalable AI infrastructure.

Skills and Qualifications

Minimum qualifications:

  • Bachelor’s degree or equivalent experience in computer science, engineering, or similar.
  • Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.
  • Experience with inference serving systems optimized for throughput and latency (e.g., SGLang, vLLM).
  • Thrive in a highly collaborative environment involving many different cross-functional partners and subject matter experts.
  • A bias for action: you take initiative across different stacks and teams wherever you spot an opportunity to make sure something ships.
  • Strong engineering skills: the ability to contribute performant, maintainable code and to debug in complex codebases.

Preferred qualifications (we encourage you to apply if you meet some but not all of these):

  • Experience training or supporting large-scale language models with hundreds of billions of parameters or more.
  • Understanding of distributed compute systems, GPU parallelism, and hardware-aware optimizations.
  • Contributions to open-source ML or systems infrastructure projects (e.g., SGLang, vLLM, PyTorch, Triton, DeepSpeed, XLA).
  • Track record of improving research productivity through infrastructure design or process improvements.

Logistics

  • Location: This role is based in San Francisco, California. 
  • Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.
  • Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
  • Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.

As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
