Member of Technical Staff - Inference Systems

Liquid AI
Boston, MA · Full Time
POSTED ON 4/14/2026
AVAILABLE BEFORE 5/10/2026
Liquid AI Job Description

Role: Member of Technical Staff, Infrastructure

Department: Research & Engineering

Location: Boston

Location Type: Hybrid

Employment Type: Full-time

About Liquid AI

Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.

The Opportunity

Our inference stack is central to everything we ship. You'll be a core part of the team responsible for the engine layer that runs our models in production and in partner environments, and for the benchmarking infrastructure we use to evaluate our own work and verify what partners bring to us. Day to day, that means working closely with research and product, but also directly with external engineering teams.

What We're Looking For

We need someone who:

  • Can pick up unfamiliar tools quickly and knows how to assess whether they're worth using.
  • Designs AI benchmarks and holds methodology to a high standard.
  • Cares about inference details, understands the tradeoffs, and checks what changed across the board before calling something done.
  • Doesn’t consider a model port finished until they can prove the outputs are correct.

The Work

  • Design and build benchmark suites that cover inference performance, model quality, and knowledge evaluation across different hardware targets.
  • Run external partner verifications: evaluate their solutions against our benchmarks, identify gaps, and deliver findings clearly.
  • Port models like LFM2 onto different runtimes and frameworks, and verify correctness end-to-end.
  • Maintain and extend the inference engine layer built on llama.cpp, ONNX, and MLX as new model architectures emerge from research.
  • Make benchmark results explainable and verifiable, so internal teams and partners can trust and reproduce them independently.
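To give a concrete flavor of the reproducibility bar described above, here is a minimal sketch of a latency-benchmark loop with warmup and percentile reporting. This is illustrative only, not our actual harness; the function and parameter names are made up for the example.

```python
import time
import statistics

def benchmark(fn, *, warmup=5, iters=50):
    """Time `fn` after a warmup phase; report p50/p95 latency in milliseconds."""
    for _ in range(warmup):
        fn()  # discard cold-start effects (caches, JIT, allocator)
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
    }
```

Even a loop this small encodes methodology choices (warmup count, sample size, which percentiles to report) that have to be fixed and documented before results are comparable across runs or hardware targets.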

Desired Experience

Must-have

  • Hands-on experience with at least one inference framework like llama.cpp, ONNX Runtime, or MLX, going beyond basic usage into internals and modification.
  • Experience designing and building benchmarking pipelines, including methodology, validation, and reproducibility.
  • Strong C and Python in performance-sensitive contexts.
  • Solid understanding of inference fundamentals: quantization, decoding strategies, memory layout, and how they interact.

Nice-to-have

  • Experience porting models across runtimes and verifying numerical correctness.
  • Prior work with external partners or clients in a technical validation or evaluation capacity.
  • Familiarity with edge inference targets and the constraints that come with them.
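The numerical-correctness work mentioned above often reduces to parity checks between a reference runtime and a port. A minimal sketch, with hypothetical inputs standing in for real engine outputs:

```python
def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two equal-length vectors."""
    assert len(a) == len(b)
    return max(abs(x - y) for x, y in zip(a, b))

def check_parity(ref_logits, port_logits, atol=1e-3):
    """True if every token's logits from the ported runtime agree with the
    reference within an absolute tolerance `atol`."""
    return all(max_abs_diff(r, p) <= atol
               for r, p in zip(ref_logits, port_logits))
```

Choosing `atol` is itself a methodology question: it depends on precision (fp16 vs fp32), quantization, and whether small logit differences change greedy decoding.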

What Success Looks Like (Year One)

  • You've ported LFM2 onto multiple runtimes and platforms, you know the model inside out, and new ports take you a fraction of the time they did at the start.
  • You've run multiple partner verifications end-to-end and built enough context to spot weak evaluations quickly and push back with evidence.
  • The benchmark suite covers inference performance and model quality across the platforms we care about, and both internal teams and partners are using it as a reference.

What We Offer

  • Compensation: Competitive base salary with equity in a unicorn-stage company
  • Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
  • Financial: 401(k) matching up to 4% of base pay
  • Time Off: Unlimited PTO plus company-wide Refill Days throughout the year
