What are the responsibilities and job description for the Founding Full-Stack Engineer position at AfterQuery?
About The Company
AfterQuery is helping push the frontier of LLMs and AI Agents through novel datasets and experimentation. We build some of the most complex infrastructure powering frontier data creation for agentic and hard-reasoning workflows. We work with all five of the leading AI labs and are becoming the go-to data-infrastructure partner for YC companies. We are growing at a sharp hockey-stick rate and are extremely talent-dense, with most of our founding team coming from top IB and quant firms.
About The Role
End-to-end ownership over AI evaluation projects, designing RL environments, building platforms, and solving scalability challenges in a fast-paced, high-growth team.
As part of AfterQuery’s engineering team, you will have end-to-end ownership over projects that push the frontier of AI evaluation. You will work on a mix of research engineering (designing novel reinforcement learning (RL) environments, agentic systems, and evaluation harnesses) and platform engineering (building human-in-the-loop platforms, scaling data infrastructure, and designing annotator workflows).
This role is not narrow. One week you might prototype a new RL environment from a research paper; the next, deploy distributed experiments on Kubernetes; the week after, improve the reliability of our Next.js dashboards or build a Kafka pipeline for annotator analytics.
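For a flavor of the research-engineering side, here is a minimal sketch of the kind of RL environment prototyping described above. It assumes the open-source Gymnasium interface; the environment, task, and names are hypothetical illustrations, not AfterQuery internals.

```python
# Illustrative only: a minimal Gymnasium-style environment skeleton, the kind of
# thing you might prototype from a paper. ToolUseEnv and its task are made up.
import gymnasium as gym
import numpy as np
from gymnasium import spaces


class ToolUseEnv(gym.Env):
    """Toy task: the agent must pick the one correct tool out of N."""

    def __init__(self, num_tools: int = 4):
        super().__init__()
        self.num_tools = num_tools
        self.action_space = spaces.Discrete(num_tools)  # which tool to call
        self.observation_space = spaces.Box(0.0, 1.0, (num_tools,), dtype=np.float32)
        self._target = 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._target = int(self.np_random.integers(self.num_tools))
        obs = np.zeros(self.num_tools, dtype=np.float32)
        obs[self._target] = 1.0  # the correct tool is hinted in the observation
        return obs, {}

    def step(self, action):
        reward = 1.0 if action == self._target else 0.0  # binary task reward
        terminated = True  # single-step episodes keep the toy example simple
        return np.zeros(self.num_tools, dtype=np.float32), reward, terminated, False, {}


if __name__ == "__main__":
    env = ToolUseEnv()
    obs, _ = env.reset(seed=0)
    _, reward, *_ = env.step(env.action_space.sample())
    print("reward:", reward)
```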
What You’ll Do
- Design and build scalable systems, including RL environments, APIs, and human-in-the-loop platforms (see the sketch after this list).
- Collaborate across research, product, and design to ship features quickly.
- Write clean, maintainable code and contribute to documentation.
- Participate in code reviews and design discussions.
- Solve real-world scalability and reliability challenges.
- Contribute to the core infrastructure powering data and evaluation for leading AI labs.
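To make the platform side concrete, here is a minimal sketch of a human-in-the-loop annotation API under assumed tooling (FastAPI with an in-memory store); the endpoints and names are illustrative, not the actual AfterQuery platform.

```python
# Illustrative only: a tiny annotation-task API. A real platform would add a
# database, auth, and queueing; everything here is a hypothetical sketch.
from uuid import uuid4

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# In-memory task store, keyed by task id.
tasks: dict[str, dict] = {}


class Annotation(BaseModel):
    label: str
    annotator_id: str


@app.post("/tasks")
def create_task(prompt: str) -> dict:
    """Register a new prompt that needs a human label."""
    task_id = str(uuid4())
    tasks[task_id] = {"prompt": prompt, "annotation": None}
    return {"task_id": task_id}


@app.get("/tasks/next")
def next_task() -> dict:
    """Hand an annotator the next unlabeled task."""
    for task_id, task in tasks.items():
        if task["annotation"] is None:
            return {"task_id": task_id, "prompt": task["prompt"]}
    raise HTTPException(status_code=404, detail="no unannotated tasks")


@app.post("/tasks/{task_id}/annotation")
def submit_annotation(task_id: str, annotation: Annotation) -> dict:
    """Record a human label for a task."""
    if task_id not in tasks:
        raise HTTPException(status_code=404, detail="unknown task")
    tasks[task_id]["annotation"] = {"label": annotation.label, "annotator_id": annotation.annotator_id}
    return {"status": "recorded"}
```

Run locally with, for example, `uvicorn annotation_api:app --reload` (the module name is hypothetical).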
What We’re Looking For
- New graduate or up to 3 years of experience
- US undergrad (visa sponsorship available)
- Canadian undergrad from UBC/Waterloo/McGill/UofT
- Experience as a SWE/quant at a tier 1 trading firm (Citadel/JS/HRT/Old Mission/IMC/Optiver)
- Experience at a high-growth startup (Series A-B, tier 1 investors)
- Experience at a late-stage successful startup (e.g., Figma, Scale AI)
- Ex-YC CTO from a failed startup
- Ivy League or T10 school in the US
- Preferably attended high school in the US/Canada/UK/Australia
Salary: $160,000 - $220,000