What are the responsibilities and job description for the Principal Information Security Engineer – AI Security position at AppsTek Corp?
Principal Information Security Engineer – AI Security
Only W2
Location: Remote (U.S. preferred) | O’Fallon, MO or New York acceptable
Client: Mastercard
Engagement: Contract (W2 only)
Duration: 12 Months
Openings: 2 Positions
Target Start: Immediate
Minimum Experience by Area:
- Overall cybersecurity: 8–12 years
- AI security / ML security: 2–4 years
- Cloud/app security: 3–6 years
- Governance & frameworks (NIST/ISO/GDPR): 2–4 years
- Hands-on tooling / red-team integration: recent, active experience required
We are seeking an experienced Principal Information Security Engineer – AI Security to join Mastercard’s newly formed Data Protection & AI Security team within the Emerging Corporate Security Solutions Program. This role will be instrumental in shaping and securing AI systems across the enterprise by developing standards, evaluating AI implementations, and deploying security guardrails and tooling.
This is a highly hands-on technical role focused on architecting secure AI frameworks, validating AI model security, integrating red-team testing, and ensuring governance and compliance with emerging regulatory and industry standards.
Key Responsibilities:
- AI Security Architecture:
- Design and implement security frameworks for AI and ML systems including secure design principles and secure coding practices.
- AI Risk & Vulnerability Assessment:
- Evaluate AI implementations for security weaknesses, conduct model risk assessments, and integrate red-team testing methodologies.
- Tooling & Vendor Evaluation:
- Assess, implement, and operate third-party AI security tools including guardrails, model evaluation platforms, threat detection, and audit tooling.
- Secure AI Development Lifecycle (SAI-SDLC):
- Partner with data scientists and engineers to embed security into the AI lifecycle including model training, validation, deployment, and monitoring.
- Compliance & Governance:
- Map AI security practices to industry frameworks and regulations including NIST, ISO, GDPR, and emerging government standards.
- Standards Development:
- Author security standards, governance processes, and control frameworks for enterprise AI systems.
- Research & Threat Intelligence:
- Stay ahead of emerging threats and vulnerabilities across AI technologies (GenAI, agentic AI, LLM risks, data poisoning, model leakage, etc.).
- Documentation & Reporting:
- Prepare SOPs, security reports, assessment findings, and remediation recommendations.
- Advisory & Enablement:
- Provide subject-matter expertise and technical mentoring to security teams and application stakeholders.
- POCs & R&D:
- Design proof-of-concept initiatives to validate evolving threats and next-generation AI security solutions.
Required Qualifications:
- 8+ years of hands-on experience in Information Security, with at least 2–4 years focused on AI/ML security
- Proven experience securing AI systems or ML platforms including:
- Model security reviews & evaluations
- AI penetration / red-team testing
- Guardrails & governance tooling
- Strong working experience with industry frameworks and compliance standards such as:
- NIST (AI RMF / 800-series)
- ISO 27001 / ISO AI Standards
- GDPR and data privacy controls
- Solid technical background in:
- Cloud security (AWS, Azure, or GCP)
- IAM, encryption, access controls
- DevSecOps or secure software development practices
- Hands-on experience with third-party security tools and platforms
- Ability to operate independently and deliver end-to-end program initiatives
- Excellent written and verbal communication skills for documentation and cross-team collaboration
Preferred Qualifications:
- Security certifications such as CISSP, CEH, OSCP, or CCSP
- Experience assessing GenAI, LLMs, and Agentic AI systems
- Strong interest in emerging AI regulatory standards
- Mentoring experience or technical leadership background
Thanks,
Siva Kumar
spampana@appstekcorp.com