Red Teaming Lead, Responsibility

Google DeepMind
Mountain View, CA
Full Time
POSTED ON 12/21/2025
AVAILABLE BEFORE 2/13/2026

Snapshot

This role works with sensitive content or situations and may be exposed to graphic, controversial, and/or upsetting topics or content.

As Red Teaming Lead in Responsibility at Google DeepMind, you will be working with a diverse team to drive and grow red teaming of Google DeepMind's most groundbreaking models. You will be responsible for our frontier risk red teaming program, which probes for and identifies emerging model risks and vulnerabilities. You will pioneer the latest red teaming methods with teams across Google DeepMind and external partners to ensure that our work is conducted in line with responsibility and safety best practices, helping Google DeepMind to progress towards its mission.

About Us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The role

As a Red Teaming Lead working in Responsibility, you'll be responsible for managing and growing our frontier risk red teaming program. You will be conducting hands-on red teaming of advanced AI models, partnering with external organizations on red teaming exercises, and working closely with product and engineering teams to develop the next generation of red teaming tooling. You'll be supporting the team across the full range of development, from running early tests to developing higher-level frameworks and reports to identify and mitigate risks.

Key Responsibilities

  • Leading and managing the end-to-end responsibility & safety red teaming program for Google DeepMind.
  • Designing and implementing expert red teaming of advanced AI models to identify risks, vulnerabilities, and failure modes across emerging risk areas such as CBRNe, cyber, and socioaffective behaviors.
  • Partnering with external red teamers and specialist groups to design and execute novel red teaming exercises.
  • Collaborating closely with product and engineering teams to design and develop innovative red teaming tooling and infrastructure.
  • Converting high-level risk questions into detailed testing plans and implementing those plans, influencing others to support as necessary.
  • Working collaboratively alongside a team of multidisciplinary specialists to deliver on priority projects and incorporate diverse considerations into that work.
  • Communicating findings and recommendations to wider stakeholders across Google DeepMind and beyond.
  • Providing an expert perspective on AI risks, testing methodologies, and vulnerability analysis in diverse projects and contexts.

About You

In order to set you up for success in this role, we are looking for the following skills and experience:

  • Demonstrated experience running or managing red teaming or novel testing programs, particularly for AI systems.
  • A strong, comprehensive understanding of sociotechnical AI risks from recognized systemic risks to emergent risk areas.
  • A solid technical understanding of how modern AI models, particularly large language models, are built and operate.
  • Strong program management skills with a track record of successfully delivering complex, cross-functional projects.
  • Demonstrated ability to work within cross-functional teams, fostering collaboration and influencing outcomes.
  • Ability to present complex technical findings to both technical and non-technical teams, including senior stakeholders.
  • Ability to thrive in a fast-paced environment and to pivot to support emerging needs.
  • Demonstrated ability to identify and clearly communicate challenges and limitations in testing approaches and analyses.

In addition, the following would be an advantage:

  • Direct, hands-on experience in safety evaluations and developing mitigations for advanced AI systems.
  • Experience with a range of experimentation and evaluation techniques, such as human study research, AI or product red teaming, and content rating processes.
  • Experience working with product development or in similar agile settings.
  • Familiarity with sociotechnical and safety considerations of generative AI, including systemic risk domains identified in the EU AI Act (chemical, biological, radiological, and nuclear; cyber offense; loss of control; harmful manipulation).

The US base salary range for this full-time position is $174,000 - $258,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
