Lead Engineer, Inference Platform

MongoDB
Palo Alto, CA
Full Time
POSTED ON 12/25/2025
AVAILABLE BEFORE 2/25/2026

We're looking for a Lead Engineer, Inference Platform to join our team building the inference platform for embedding models that power semantic search, retrieval, and AI-native features across MongoDB Atlas.

This role is part of the broader Search and AI Platform team and involves close collaboration with AI engineers and researchers from our Voyage.ai acquisition, who are developing industry-leading embedding models. Together, we're building the infrastructure that enables real-time, high-scale, and low-latency inference — all deeply integrated into Atlas and optimized for developer experience.

As a Lead Engineer, Inference Platform, you'll be hands-on with design and implementation, while working with engineers across experience levels to build a robust, scalable system. The focus is on latency, availability, observability, and scalability in a multi-tenant, cloud-native environment. You will also be responsible for guiding the technical direction of the team, mentoring junior engineers, and ensuring the delivery of high-quality, impactful features.

We are looking to speak with candidates based in Palo Alto, in line with our hybrid working model.

What You'll Do

  • Partner with Search Platform and Voyage.ai AI engineers and researchers to productionize state-of-the-art embedding models and rerankers, supporting both batch and real-time inference
  • Lead key projects around performance optimization, GPU utilization, autoscaling, and observability for the inference platform
  • Design and build components of a multi-tenant inference service that integrates with Atlas Vector Search, driving capabilities for semantic search and hybrid retrieval
  • Contribute to platform features like model versioning, safe deployment pipelines, latency-aware routing, and model health monitoring
  • Collaborate with peers across ML, infra, and product teams to define architectural patterns and operational practices that support high availability and low latency at scale
  • Guide decisions on model serving architecture using tools like vLLM, ONNX Runtime, and container orchestration in Kubernetes
  • Provide technical leadership and mentorship to junior engineers, fostering a culture of technical excellence and continuous improvement within the team

Who You Are

  • 8+ years of engineering experience in backend systems, ML infrastructure, or scalable platform development, with the ability to provide technical leadership and guidance to a team of engineers
  • Expertise in serving embedding models in production environments
  • Strong systems skills in languages like Go, Rust, C++, or Python, and experience profiling and optimizing performance
  • Comfortable working on cloud-native distributed systems, with a focus on latency, availability, and observability
  • Familiarity with inference runtimes and vector search systems (e.g., Faiss, HNSW, ScaNN)
  • Proven ability to collaborate across disciplines and experience levels, from ML researchers to junior engineers
  • Experience with high-scale SaaS infrastructure, particularly in multi-tenant environments
  • 1+ years of experience serving as tech lead for a large-scale ML inference or training platform software project

Nice to Have

  • Prior experience working with model teams on inference-optimized architectures
  • Background in hybrid retrieval, prompt-based pipelines, or retrieval-augmented generation (RAG)
  • Contributions to relevant open-source ML serving infrastructure
  • 1+ years of experience managing a technical team focused on ML inference or training infrastructure

Why Join Us

  • Be part of shaping the future of AI-native developer experiences on the world's most popular developer data platform
  • Collaborate with ML experts from Voyage.ai to bring cutting-edge research into production at scale
  • Solve hard problems in real-time inference, model serving, and semantic retrieval — in a system used by thousands of customers worldwide
  • Work in a culture that values mentorship, autonomy, and strong technical craft
  • Competitive compensation, equity, and career growth in a hands-on technical leadership role

About MongoDB

MongoDB is built for change, empowering our customers and our people to innovate at the speed of the market. We have redefined the database for the AI era, enabling innovators to create, transform, and disrupt industries with software. MongoDB's unified database platform—the most widely available, globally distributed database on the market—helps organizations modernize legacy workloads, embrace innovation, and unleash AI. Our cloud-native platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available across AWS, Google Cloud, and Microsoft Azure.

With offices worldwide and nearly 60,000 customers—including 75% of the Fortune 100 and AI-native startups—relying on MongoDB for their most important applications, we're powering the next era of software.

Our compass at MongoDB is our Leadership Commitment, guiding how and why we make decisions, show up for each other, and win. It's what makes us MongoDB.

To drive the personal growth and business impact of our employees, we're committed to developing a supportive and enriching culture for everyone. From employee affinity groups, to fertility assistance and a generous parental leave policy, we value our employees' wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it's like to work at MongoDB, and help us make an impact on the world!

MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter.

MongoDB, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type and makes all hiring decisions without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.

REQ ID: 3263228668

Salary: $137,000 - $270,000
