
Senior AI Engineer — Inference & Agent Systems

Arcana
York, NY · Full Time
Posted on 4/7/2026
Available before 5/8/2026


Location:

- SF/NY/Remote


What We're Building

Arcana is building AI agents that synthesize information across heterogeneous sources and deliver structured, reasoned answers in real time. The product only works if the agents are fast, reliable, and correct, not approximately correct.

Our stack: Go with Temporal for orchestration, a Plan-Execute-Synthesize agent architecture, and an evaluation harness we use to measure every regression. The problems are hard. The latency bar is aggressive. The accuracy requirements are unforgiving.

The Work

Inference Optimization

- Drive TTFT below 400ms for multi-step agent pipelines

- Streaming optimization: first token to user while sub-agents are still running

- KV cache strategy, prompt compression, dynamic context window management

- Multi-provider routing: model selection by latency, cost, and task type across OpenAI, Anthropic, Gemini, and open-weight models
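The multi-provider routing bullet above comes down to a routing table plus a selection policy. A minimal sketch in Python; the task types, model names, and latency/cost numbers are illustrative placeholders, not real benchmarks or provider APIs:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelRoute:
    provider: str
    model: str
    est_ttft_ms: int    # rough time-to-first-token estimate
    cost_per_1k: float  # USD per 1K output tokens

# Hypothetical routing table keyed by task type.
ROUTES = {
    "plan": [
        ModelRoute("anthropic", "planner-large", 900, 0.015),
        ModelRoute("openai", "planner-fast", 450, 0.006),
    ],
    "extract": [
        ModelRoute("open-weights", "extractor-small", 150, 0.0004),
        ModelRoute("gemini", "extractor-flash", 300, 0.001),
    ],
}

def pick_route(task_type, latency_budget_ms, prefer="latency"):
    """Pick the best route that fits the latency budget;
    fall back to the fastest route overall if none fits."""
    candidates = [r for r in ROUTES[task_type]
                  if r.est_ttft_ms <= latency_budget_ms]
    if not candidates:
        return min(ROUTES[task_type], key=lambda r: r.est_ttft_ms)
    if prefer == "latency":
        return min(candidates, key=lambda r: r.est_ttft_ms)
    return min(candidates, key=lambda r: r.cost_per_1k)
```

The key design choice is that the budget acts as a hard filter and cost only breaks ties among routes that already fit it.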

Agent Architecture

- Design and implement Plan-Execute-Synthesize pipelines that run sub-agents in parallel DAGs, not sequential chains

- Build reliable orchestration on top of Temporal: retries, timeouts, partial failure recovery, idempotency

- Structured output enforcement: JSON schema validation, retry loops on malformed LLM output, graceful degradation

- Tool call design: schemas that LLMs actually follow reliably across providers
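The structured-output bullet above (schema validation, retry loops, graceful degradation) can be sketched as a small wrapper around any LLM call. A minimal stdlib-only sketch; `llm` is a hypothetical callable standing in for a real provider client, and the field check is a deliberate simplification of full JSON Schema validation:

```python
import json

def check_fields(payload, required):
    """Return names of required fields that are missing or mistyped."""
    if not isinstance(payload, dict):
        return list(required)
    return [k for k, t in required.items()
            if k not in payload or not isinstance(payload[k], t)]

def structured_call(llm, prompt, required, max_retries=2, fallback=None):
    """Ask an LLM for JSON matching `required` ({field: type}).
    Retry with error feedback on malformed output; return
    `fallback` if every attempt fails (graceful degradation)."""
    feedback = ""
    for _ in range(max_retries + 1):
        raw = llm(prompt + feedback)
        try:
            payload = json.loads(raw)
        except json.JSONDecodeError as err:
            feedback = f"\nLast output was not valid JSON ({err}). Return only JSON."
            continue
        bad = check_fields(payload, required)
        if not bad:
            return payload
        feedback = f"\nLast output had missing or mistyped fields: {bad}."
    return fallback
```

Feeding the validation error back into the retry prompt, rather than retrying blind, is what makes the loop converge on providers that would otherwise repeat the same malformed shape.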

Evaluation & Harness

- Own the eval framework end to end: ground truth datasets, automated scoring pipelines, regression detection on every PR

- LLM-as-judge pipelines for qualitative output assessment

- Latency regression testing: p50/p95/p99 tracked across every deployment

- Adversarial test case design: ambiguous queries, missing data, conflicting sources, malformed tool responses
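The latency-regression bullet above implies two pieces: percentile computation and a gate that compares a candidate deployment against a baseline. A minimal stdlib sketch; the 10% tolerance is an illustrative threshold, not a stated policy:

```python
from statistics import quantiles

def latency_percentiles(samples_ms):
    """p50/p95/p99 from raw latency samples in milliseconds."""
    qs = quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}

def regressions(baseline, candidate, tolerance=0.10):
    """Return the percentiles where the candidate is more than
    `tolerance` slower than baseline, mapped to (old, new) pairs."""
    return {k: (baseline[k], candidate[k])
            for k in baseline
            if candidate[k] > baseline[k] * (1 + tolerance)}
```

A CI gate would compute `latency_percentiles` on the candidate's samples and fail the PR whenever `regressions` is non-empty.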


Infrastructure

- Model serving and cold start optimization

- Async worker architecture for parallel sub-agent execution

- Observability: trace every token, every tool call, every synthesis step
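The parallel sub-agent bullets above (async fan-out, timeouts, partial failure recovery) can be sketched with `asyncio`. `run_sub_agent` is a hypothetical stand-in for a real sub-agent call; the point is that synthesis proceeds on whatever completed rather than failing the whole request:

```python
import asyncio

async def run_sub_agent(name, delay, fail=False):
    """Hypothetical stand-in for a real sub-agent invocation."""
    await asyncio.sleep(delay)
    if fail:
        raise RuntimeError(f"{name} failed")
    return {"agent": name, "result": f"{name}-output"}

async def fan_out(specs, timeout=1.0):
    """Run sub-agents concurrently with a per-agent timeout.
    Tolerate partial failure: return completed results and
    errors separately so synthesis can use what it has."""
    tasks = [asyncio.wait_for(run_sub_agent(*spec), timeout)
             for spec in specs]
    results = await asyncio.gather(*tasks, return_exceptions=True)
    ok = [r for r in results if not isinstance(r, Exception)]
    errors = [r for r in results if isinstance(r, Exception)]
    return ok, errors
```

`return_exceptions=True` is the load-bearing choice: one slow or failing sub-agent surfaces as an error object instead of cancelling its siblings.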

What We're Looking For

You've built something that runs in production at a meaningful scale and you understand why it's fast (or why it isn't).

Strong signal:

- You've worked on inference pipelines where TTFT was the primary metric and you moved it meaningfully

- You've built multi-step agent systems and you know where they break, not from reading papers but from watching them fail in production

- You've written eval harnesses from scratch and you have opinions about what makes a ground truth dataset actually useful

- You've debugged LLM non-determinism in production and built systems resilient to it

- You've worked with streaming LLM responses and built infrastructure around partial output handling

Weaker signal (but not disqualifying):

- You've fine-tuned models but haven't shipped inference systems

- You've used LangChain/LlamaIndex but haven't built the layer underneath

- Strong ML research background without systems exposure

Stack familiarity (we care more about depth than match): Go, Python, Temporal, Kafka, PostgreSQL, Docker

Why This Role

The problems here don't have blog posts about them yet. Parallel agent DAG execution under hard latency budgets, streaming synthesis across partial sub-agent results, eval harnesses for non-deterministic multi-step systems: these are genuinely unsolved at production quality. Small team. High ownership. Every engineer's decisions ship to production.

Who We Want to Hear From

You've shipped inference systems at:

- A real-time AI product (search, coding assistant, chat at scale)

- A model serving infrastructure company

- An agent platform (any domain)

Or you've built eval/harness infrastructure that a team of 10 engineers actually trusted to catch regressions.


Apply

Send to: careers@arcana.io

Include:

1. One system you built where latency was the primary constraint: what you measured, what you changed, what moved

2. Link to anything public (code, writing, talks)

3. No cover letter required

We respond to every application.
