What are the responsibilities and job description for the Offensive Security Analyst (Structured / Non-Exploit) position at Alignerr?
About The Role
What if your red team instincts and adversarial mindset could directly shape how the world's most advanced AI systems understand cyber threats? We're looking for Offensive Security Analysts to analyze real-world attack paths, model adversary behavior, and help build AI that genuinely understands how breaches happen — and how to reason about them.
This role is built around structured adversarial reasoning, not exploit development. You'll bring your knowledge of how attacks unfold in production environments and translate that expertise into data that trains and evaluates frontier AI systems.
Position Details
- Organization: Alignerr
- Type: Hourly Contract
- Location: Remote
- Commitment: 10–40 hours/week
Responsibilities
- Analyze attack paths, kill chains, and adversary strategies across realistic, real-world system scenarios
- Identify weaknesses, misconfigurations, and defensive gaps, and explain how they compound into risk
- Review and evaluate red team-style scenarios and intrusion narratives for accuracy and depth
- Generate, label, and validate adversarial reasoning data used to train and evaluate AI systems
- Clearly articulate attack chains, business impact, and security tradeoffs in structured formats
- Work independently and asynchronously on task-based assignments
Requirements
- 2 years of hands-on experience in penetration testing, red team operations, or a blue team role with deep offensive knowledge
- Strong understanding of how real attacks unfold across modern production environments — from initial access to lateral movement to impact
- Able to think like an adversary and communicate that thinking clearly and precisely in writing
- Comfortable breaking down complex attack narratives into structured, well-reasoned analysis
- Detail-oriented and methodical — you notice the gap that others miss
Nice to Have
- Familiarity with frameworks like MITRE ATT&CK, kill chain modeling, or threat intelligence workflows
- Experience writing red team reports, threat models, or adversary emulation plans
- Background in cloud security, Active Directory attacks, or network intrusion analysis
- Prior exposure to AI tools, security research, or data labeling projects
Why Join
- Work directly on frontier AI systems alongside leading AI research labs
- Fully remote and flexible — work when and where it suits you
- Freelance autonomy with the structure of meaningful, impactful work
- Apply your offensive security expertise to a genuinely novel and high-impact problem
- Potential for ongoing work and contract extension as new projects launch
Pay: $40–$60 per hour