What are the responsibilities and job description for the Senior Applied Research Engineer position at Zep AI (YC W24)?
Zep is the memory and context layer for AI agents. As a Senior Applied Research Engineer, you'll explore novel approaches to memory, context, and context generation, then own those ideas all the way to production.
This is a research role with a hard applied bent. We're not hiring ML researchers chasing publications. We're hiring engineers who can run rigorous experiments, train and evaluate models, and ship the result as production code our customers depend on.
What You'll Do
- Explore novel approaches to memory, context, and context generation. Define the problem, run the experiments, ship the result.
- Own research to production end-to-end: dataset creation and curation, experiment design, evaluation, training and finetuning, and production deployment.
- Train, finetune, and evaluate models on Zep's domain. Build the eval harnesses that catch regressions before they ship.
- Work with our model serving stack to operate inference at low latency and reasonable cost on AWS.
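To make the eval-harness responsibility concrete, here is a minimal sketch of a regression gate of the kind this role would own. All names (`EvalCase`, `run_eval`, `gate_release`, the exact-match metric, the tolerance) are illustrative assumptions, not Zep's actual stack:

```python
# Minimal sketch of an eval harness that gates releases on regressions.
# Every name here is hypothetical; a real harness would use task-specific
# metrics and a versioned eval set rather than exact match on toy cases.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    expected: str

def exact_match(output: str, expected: str) -> float:
    """Toy metric: 1.0 on an exact (whitespace-insensitive) match, else 0.0."""
    return 1.0 if output.strip() == expected.strip() else 0.0

def run_eval(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Score a model over a fixed eval set; returns mean accuracy."""
    scores = [exact_match(model(c.prompt), c.expected) for c in cases]
    return sum(scores) / len(scores)

def gate_release(candidate: float, baseline: float, tolerance: float = 0.01) -> bool:
    """Block deployment if the candidate regresses beyond tolerance."""
    return candidate >= baseline - tolerance

cases = [EvalCase("2+2=", "4"), EvalCase("capital of France?", "Paris")]
baseline = run_eval(lambda p: {"2+2=": "4", "capital of France?": "Paris"}[p], cases)
candidate = run_eval(lambda p: {"2+2=": "4", "capital of France?": "Lyon"}[p], cases)
print(gate_release(candidate, baseline))  # candidate regressed: prints False
```

The design point is that the gate compares against a pinned baseline score rather than an absolute threshold, so the harness catches regressions even as the eval set evolves.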
What We're Looking For
- 6 years of production engineering with a strong backend systems background. You've shipped services with real throughput and latency requirements.
- Master's in Computer Science or equivalent.
- Strong research skills: methodology, dataset creation and curation, experiment design, and evaluation. You can frame an open problem and design experiments that actually answer the question.
- Hands-on experience with model finetuning. Working familiarity with transformer architectures, training and finetuning workflows, and evaluation. You experiment in PyTorch and OpenAI Triton.
- Working experience with model serving technologies: vLLM, SGLang, or Triton Inference Server. You've operated inference in production.
- Python, plus high proficiency in one of Rust, C, or Go. You can work in critical-path code and on performance. Python-only is not enough.
- Hands-on AWS experience in production: deployments, monitoring, scaling, cost and reliability tradeoffs.
- Published or open-source work in retrieval, memory systems, or LLM evaluation.
This Role Is Probably NOT a Fit If
- You're an ML researcher or model trainer who hasn't shipped research to production.
- Your background is primarily Python application work without lower-level systems experience.
- You haven't operated production backend systems with real latency or throughput requirements.
Salary: $180,000 - $250,000