What are the responsibilities and job description for the Member of Technical Staff - Research Intern position at Architect?
What You'll Do
As a Research Intern at Architect, you will spend 3 months working alongside the founding team to push the boundaries of how AI models explore and optimize hardware designs. This is a high-impact role where your experiments will directly influence our core modeling roadmap.
- Co-design and implement reinforcement learning experiments (GRPO/PPO/DPO), training data mixes, and reward-signal explorations.
- Contribute to research on post-training techniques, running ablation studies to improve model reasoning and alignment capabilities.
- Implement and test new algorithms for model fine-tuning and evaluation, helping to translate research papers into working prototypes.
- Analyze experimental results and debug model behavior to help establish best practices for our training recipes.
Qualifications & Skills
- Education: Currently pursuing a PhD or Master’s degree in Computer Science, Machine Learning, Mathematics, or a related field. Exceptional undergraduates with strong research experience are also encouraged to apply.
- RL Knowledge: Strong academic understanding or project experience with Reinforcement Learning (e.g., PPO, DPO, GRPO). You should be comfortable reading and implementing concepts from recent research papers.
- Coding Proficiency: Strong proficiency in Python and deep learning frameworks (PyTorch). You should be able to write clean, efficient research code.
- Research Mindset: A fast learner who is comfortable navigating ambiguity. You enjoy analyzing complex problems and iterating quickly on experiments.
- LLM Familiarity: Experience with training or fine-tuning Large Language Models (LLMs) or familiarity with the modern NLP stack (Transformers, HuggingFace, etc.).
Nice to Have
- Previous internship experience at frontier AI labs or research organizations.
- Publications (or submissions) in top ML venues (NeurIPS, ICLR, ICML) or EDA venues (DAC, ICCAD).
- Familiarity with hardware design concepts (Verilog, RTL, EDA tools), though not required.
What We Offer
- Competitive internship stipend
- Mentorship from a team of researchers and engineers from Anthropic, DeepMind, Meta, and Stanford
- Opportunity to work on 0→1 problems in AI-driven chip design