What are the responsibilities and job description for the AI Security Engineer position at capital.com?
We are looking for an AI Security Engineer to secure our AI-driven systems, including LLM-based applications, machine learning models, and AI-enabled automation tools.
This role will focus on identifying, assessing, and mitigating security risks across the AI lifecycle — from model development and training to deployment and runtime monitoring.
The ideal candidate combines strong security engineering experience with a deep understanding of machine learning systems and emerging AI-specific threats (e.g., prompt injection, model poisoning, data leakage, adversarial attacks).
AI Security Architecture & Engineering
- Design and implement security controls for AI/ML systems across development, training, and production.
- Secure LLM integrations, RAG pipelines, and AI APIs.
- Conduct threat modeling for AI systems and data pipelines.
- Define secure-by-design patterns for AI-powered features.
AI Threat Detection & Mitigation
- Identify and mitigate AI-specific threats: prompt injection and jailbreak techniques, model poisoning and data contamination, adversarial attacks, training data leakage, insecure model serialization, excessive permissions in AI agents.
- Develop guardrails, content filters, and output validation mechanisms.
- Implement monitoring for anomalous AI behavior.
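Guardrails, content filters, and output validation of the kind described above can be sketched in miniature. The following is an illustrative example only, assuming a Python stack; the deny-list patterns and redaction rule are placeholders, not production-grade controls, and a real guardrail would layer classifiers, allow-lists, and context-aware checks on top:

```python
import re

# Illustrative deny-list of common prompt-injection phrasings.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal .*system prompt", re.IGNORECASE),
    re.compile(r"disregard .*rules", re.IGNORECASE),
]

# Illustrative pattern for secret-like strings that must not leave the boundary.
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)

def validate_input(user_prompt: str) -> bool:
    """Return True if the prompt passes the injection deny-list."""
    return not any(p.search(user_prompt) for p in INJECTION_PATTERNS)

def validate_output(model_response: str) -> str:
    """Redact secret-like strings before the response reaches the user."""
    return SECRET_PATTERN.sub("[REDACTED]", model_response)
```

In practice such filters sit on both sides of the LLM call: inputs failing `validate_input` are rejected or routed for review, and every response passes through `validate_output` before display.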
Secure Development & DevSecOps
- Integrate AI security checks into CI/CD pipelines.
- Perform security reviews of ML code and AI-related infrastructure.
- Secure model registries and artifact storage.
- Collaborate with other engineers and platform teams to enforce security standards.
Data Protection & Compliance
- Ensure AI systems comply with GDPR, data privacy regulations, and financial industry regulatory requirements.
- Implement controls for sensitive data used in training and inference.
- Perform AI risk assessments aligned with internal risk methodology.
Governance & Policy
- Contribute to AI security standards and internal policies.
- Define AI risk classification and control frameworks.
- Support security reviews for new AI initiatives and tools.
Requirements
- 3–5 years in cybersecurity engineering or application security.
- Hands-on experience with ML/AI systems (LLMs, NLP models, or similar).
- Strong understanding of the OWASP Top 10, secure SDLC practices, and cloud security (AWS/Azure/GCP).
- Experience with Python, API security, and containerization (Docker, Kubernetes).
- Knowledge of AI-specific security risks and mitigations.
- Experience conducting threat modeling and risk assessments.
Nice to Have
- Experience securing LLM-based applications (OpenAI, Anthropic, Azure OpenAI, etc.).
- Familiarity with RAG architectures, vector databases, and ML pipelines (MLflow, Kubeflow, SageMaker).
- Experience in fintech or regulated environments.
- Knowledge of AI governance frameworks (e.g., EU AI Act, NIST AI RMF, ISO/IEC 42001).
- Experience with AI red teaming.
Soft Skills
- Strong analytical and problem-solving skills.
- Ability to translate technical risk into business impact.
- Able to explain AI security risks and mitigations to non-security teams.
- Cross-functional collaboration with ML, data, and product teams.
- Clear documentation and communication skills.
Be a key player at the forefront of the digital assets movement, propelling your career to new heights! Join a dynamic and rapidly expanding company that values and rewards talent, initiative, and creativity. Work alongside one of the most brilliant teams in the industry.