LLM Inference Engineer

Hippocratic AI
Palo Alto, CA · Full Time
POSTED ON 1/13/2026
AVAILABLE BEFORE 3/15/2026
About Us

Hippocratic AI has developed a safety-focused Large Language Model (LLM) for healthcare. The company believes that a safe LLM can dramatically improve healthcare accessibility and health outcomes worldwide by bringing deep healthcare expertise to every human. No other technology has the potential for this level of global impact on health.

Why Join Our Team

  • Innovative Mission: We are developing a safe, healthcare-focused large language model (LLM) designed to revolutionize health outcomes on a global scale.
  • Visionary Leadership: Hippocratic AI was co-founded by CEO Munjal Shah, alongside a group of physicians, hospital administrators, healthcare professionals, and artificial intelligence researchers from leading institutions, including El Camino Health, Johns Hopkins, Stanford, Microsoft, Google, and NVIDIA.
  • Strategic Investors: We have raised a total of $400 million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA’s NVentures, Premji Invest, SV Angel, and six health systems.
  • World-Class Team: Our team is composed of leading experts in healthcare and artificial intelligence, ensuring our technology is safe, effective, and capable of delivering meaningful improvements to healthcare delivery and outcomes.

For more information, visit www.HippocraticAI.com.

We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA unless explicitly noted otherwise in the job description.

About The Role

We're seeking an experienced LLM Inference Engineer to optimize our large language model (LLM) serving infrastructure. The ideal candidate has:

  • Extensive hands-on experience with state-of-the-art inference optimization techniques
  • A track record of deploying efficient, scalable LLM systems in production environments

Key Responsibilities

  • Design and implement multi-node serving architectures for distributed LLM inference
  • Optimize multi-LoRA serving systems
  • Apply advanced quantization techniques (FP4/FP6) to reduce model footprint while preserving quality
  • Implement speculative decoding and other latency optimization strategies
  • Develop disaggregated serving solutions with optimized caching strategies for prefill and decoding phases
  • Continuously benchmark and improve system performance across various deployment scenarios and GPU types
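
To give a feel for one of the techniques above, here is a toy Python sketch of draft-model speculative decoding with greedy verification. It is an illustration only, not Hippocratic AI's implementation: `target` and `draft` are stand-in next-token functions rather than real models, and a production engine would verify the whole proposed block in a single batched forward pass on GPU.

```python
def speculative_decode(target, draft, prompt, num_tokens, k=4):
    """Generate num_tokens tokens. The cheap `draft` model proposes k-token
    blocks that the expensive `target` model verifies. With greedy decoding
    the output is identical to running `target` alone; the speedup comes from
    fewer sequential target calls whenever the two models agree."""
    seq = list(prompt)
    generated = 0
    while generated < num_tokens:
        # 1. Draft proposes k tokens autoregressively (cheap model).
        proposal = []
        ctx = list(seq)
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target checks each proposed position (one batched pass in a
        #    real engine; a plain loop here).
        for i in range(k):
            expected = target(seq + proposal[:i])
            if expected != proposal[i]:
                # First mismatch: take the target's token instead and stop.
                seq.append(expected)
                generated += 1
                break
            seq.append(proposal[i])          # accepted draft token
            generated += 1
            if generated >= num_tokens:
                break
        else:
            # All k draft tokens accepted; the target's verification pass
            # also yields one "bonus" token for free.
            if generated < num_tokens:
                seq.append(target(seq))
                generated += 1
    return seq[len(prompt):][:num_tokens]
```

The key invariant worth noting: because every emitted token is either verified by or produced by the target model, the output distribution (here, the greedy output) matches the target model exactly, regardless of how often the draft is wrong.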

Required Qualifications

  • Experience optimizing LLM inference systems at scale
  • Proven expertise with distributed serving architectures for large language models
  • Hands-on experience implementing quantization techniques for transformer models
  • Strong understanding of modern inference optimization methods, including:
    • Speculative decoding with draft models
    • EAGLE-style speculative decoding
  • Proficiency in Python and C
  • Experience with CUDA programming and GPU optimization
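
For context on the quantization qualification, here is a minimal, hypothetical Python sketch of group-wise "fake" quantization to an FP4-like (E2M1) value grid. Real FP4 deployments pack two 4-bit codes per byte and use fused GPU kernels; this pure-Python version only simulates the rounding error, which is the part that determines quality impact.

```python
# The 8 non-negative magnitudes representable in E2M1 FP4.
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_group(weights, group_size=8):
    """Quantize a flat list of weights in groups: each group gets a scale so
    its absmax maps to 6.0 (the largest FP4 magnitude), values are rounded to
    the nearest grid entry, then scaled back. Returns dequantized weights."""
    out = []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        absmax = max(abs(w) for w in group) or 1.0  # avoid /0 on all-zero groups
        scale = absmax / 6.0
        for w in group:
            # Round |w|/scale to the nearest representable magnitude.
            mag = min(FP4_GRID, key=lambda g: abs(abs(w) / scale - g))
            out.append(mag * scale if w >= 0 else -mag * scale)
    return out
```

Smaller groups give each scale less dynamic range to cover, which is why group size is one of the main quality/footprint knobs in low-bit weight quantization.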

Preferred Qualifications

  • Contributions to open-source inference frameworks such as vLLM, SGLang, or TensorRT-LLM
  • Experience with custom CUDA kernels
  • Track record of deploying inference systems in production environments
  • Deep understanding of systems-level performance optimization

Show us what you've built: Tell us about an LLM inference or training project that makes you proud! Whether you've optimized inference pipelines to achieve breakthrough performance, designed innovative training techniques, or built systems that scale to billions of parameters, we want to hear your story.

Open source contributor? Even better! If you've contributed to projects like vLLM, SGLang, LMDeploy, or similar LLM optimization frameworks, we'd love to see your PRs. Your contributions to these communities demonstrate exactly the kind of collaborative innovation we value.

Join a team where your expertise won't just be appreciated—it will be celebrated and amplified. Help us shape the future of AI deployment at scale!

References

  1. Polaris: A Safety-focused LLM Constellation Architecture for Healthcare, https://arxiv.org/abs/2403.13313
  2. Polaris 2: https://www.hippocraticai.com/polaris2
  3. Personalized Interactions: https://www.hippocraticai.com/personalized-interactions
  4. Human Touch in AI: https://www.hippocraticai.com/the-human-touch-in-ai
  5. Empathetic Intelligence: https://www.hippocraticai.com/empathetic-intelligence
  6. Polaris 1: https://www.hippocraticai.com/research/polaris
  7. Research and clinical blogs: https://www.hippocraticai.com/research

Be aware of recruitment scams impersonating Hippocratic AI. All recruiting communication will come from @hippocraticai.com email addresses. We will never request payment or sensitive personal information during the hiring process. If anything appears suspicious, stop engaging immediately and report the incident.




