
Member of Technical Staff - Edge Inference Engineer

Liquid AI
San Francisco, CA · Full Time
Posted on 4/16/2026 · Available before 5/23/2026
About Liquid AI

Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.

The Opportunity

Our Edge Inference team compiles Liquid Foundation Models into optimized machine code that runs on resource-constrained devices: phones, laptops, Raspberry Pis, and watches. We are core contributors to llama.cpp and build the infrastructure that makes efficient on-device AI possible. You will work directly with the technical lead on problems that require deep understanding of both ML architectures and hardware constraints. This is high-ownership work where your code ships to production and directly impacts model performance on real devices.

While San Francisco and Boston are preferred, we are open to other locations.

What We're Looking For

We need someone who:

  • Works autonomously: Given a target device and performance goal, you figure out how to get there without hand-holding. You diagnose bottlenecks, prototype solutions, and iterate until you hit the target.
  • Thinks at the hardware level: You understand cache hierarchies, memory access patterns, and instruction-level optimization. You can reason about why code is slow before reaching for a profiler.
  • Bridges ML and systems: You understand how neural networks work mathematically (matrix operations, attention mechanisms, quantization effects) and can translate that understanding into optimized implementations.
  • Ships production code: Our work goes upstream to open-source projects and deploys to customer devices. You write code that others can maintain and extend.
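
To make the "thinks at the hardware level" point concrete, here is a minimal C sketch (illustrative only, not project code) of why memory traversal order matters before you ever reach for a profiler: both functions compute the same sum, but only one walks memory at unit stride.

```c
#include <assert.h>
#include <stddef.h>

#define N 256

/* Row-major C arrays store a[i][j] contiguously in j, so the inner
 * loop below walks memory sequentially and gets full cache-line reuse. */
static float sum_row_major(float a[N][N]) {
    float s = 0.0f;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)   /* unit-stride inner loop */
            s += a[i][j];
    return s;
}

/* Same arithmetic, but the inner loop strides N floats per step,
 * touching a new cache line on nearly every access. */
static float sum_col_major(float a[N][N]) {
    float s = 0.0f;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)   /* N-float stride: poor locality */
            s += a[i][j];
    return s;
}
```

The results are identical; the memory-access pattern, and therefore the runtime on a cache-based CPU, is not.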

The Work

  • Implement and optimize inference kernels for CPU, NPU, and GPU architectures across diverse edge hardware
  • Develop quantization strategies (INT4, INT8, FP8) that maximize compression while preserving model quality under strict memory budgets
  • Contribute to llama.cpp and other open-source inference frameworks, including new model architectures (audio, vision)
  • Profile and optimize end-to-end inference pipelines to achieve sub-100ms time-to-first-token on target devices
  • Collaborate with ML researchers to understand model architectures and identify optimization opportunities specific to Liquid Foundation Models

Must-have

  • 5 years of experience in systems programming with strong C proficiency
  • Embedded software engineering experience or work on resource-constrained systems
  • Understanding of ML fundamentals at the linear algebra level (how matrix operations, attention, and quantization work)
  • Experience with hardware architecture concepts: cache hierarchies, memory bandwidth, SIMD/vectorization

Nice-to-have

  • Contributions to llama.cpp, ExecuTorch, or similar inference frameworks
  • Experience with Rust for systems programming
  • Background in custom accelerator development (TPU, NPU) or work at companies like SambaNova, Cerebras, Groq, or Google/Amazon accelerator teams
  • Quantitative degree (mathematics, physics, or similar) combined with engineering experience

What Success Looks Like (Year One)

  • Ship optimizations that achieve measurable latency or memory improvements on at least one target edge device class
  • Successfully upstream at least one significant contribution to llama.cpp (new architecture support, kernel optimization, or quantization improvement)
  • Own a major workstream end-to-end, such as support for a new model architecture, a quantization pipeline for a specific device constraint, or enablement of a new target platform

What We Offer

  • Rare technical challenges: Work on novel model architectures that require custom optimization strategies. Your code ships to production and runs on real devices.
  • Compensation: Competitive base salary with equity in a unicorn-stage company
  • Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
  • Financial: 401(k) matching up to 4% of base pay
  • Time Off: Unlimited PTO plus company-wide Refill Days throughout the year

