What are the responsibilities and job description for the Applied Scientist position at AppGate?
About AppGate
AppGate secures and protects an organization's most valuable assets with its high-performance Zero Trust Network Access (ZTNA) solution and Cyber Advisory Services. AppGate is the only direct-routed ZTNA solution built for peak performance, superior protection, and seamless interoperability. AppGate Cyber Advisory services harden your security posture and ensure business continuity. AppGate safeguards enterprises and government agencies worldwide.
As we expand our platform, we are building a Data AI team to unlock the next generation of security solutions, spanning network observability, AI-driven threat detection, insider risk monitoring, and the security of emerging Agentic AI systems.
This is a rare opportunity to join a small, private, high-impact company where your work directly shapes the strategy, architecture, and core products that will define the future of security.
The Role
We’re looking for an Applied AI Scientist who will design, prototype, and operationalize AI systems that make security autonomous, explainable, and resilient.
You’ll bridge research and production, working on the hardest problems in AI for Security and Security for AI, from model robustness to AI agent safety, while shaping foundational capabilities across our product line.
What You’ll Work On
Your research and engineering work will directly enable next-generation capabilities, including:
- Network Policy Analyzer: LLMs and graph reasoning to understand, simulate, and optimize security policies.
- Threat Anomaly Detection & Breach Prediction: Deep models for early detection using behavioral, contextual, and temporal signals.
- AI Agents for Root Cause Analysis: Self-learning systems that explain and act on complex security incidents.
- Insider Risk Detection: Multi-modal AI for behavioral patterning and intent inference.
- Securing Agentic AI Systems: Pioneering research into trust, alignment, and policy enforcement for autonomous AI.
What You’ll Do
- Design AI Security Systems: Develop algorithms and architectures for detection, reasoning, and defense — spanning supervised, unsupervised, and generative AI paradigms.
- Prototype & Validate Models: Build and evaluate experimental AI models using real-world security data (network telemetry, identity logs, threat indicators).
- Operationalize Research: Collaborate with engineers to take proof-of-concept AI systems from notebooks to production environments.
- Advance AI Security Science: Publish and present cutting-edge findings in leading conferences and journals.
- Build Foundations for Safe AI: Research and develop methods for model robustness, privacy-preserving learning, and secure deployment of LLMs and agents.
- Collaborate Cross-Functionally: Partner with engineers, product teams, and leadership to align AI advancements with AppGate’s strategic vision.
What We’re Looking For
- Ph.D. in Computer Science, Electrical Engineering, or a related field, with a specialization in AI or Machine Learning applied to the security domain.
- Research Excellence: Publications in top-tier venues (e.g., NeurIPS, ICML, ICLR, USENIX, CCS, IEEE).
- Technical Expertise:
  - Deep understanding of modern ML (LLMs, GNNs, RL, anomaly detection).
  - Experience applying AI to security, threat intelligence, or autonomous systems.
  - Strong background in AI safety, robustness, or trustworthy ML.
  - Skilled in Python, PyTorch/TensorFlow, and data frameworks for experimentation and deployment.
- Mindset: Curious, rigorous, and impact-driven. You enjoy moving from theory to production and thrive at the intersection of AI and real-world security challenges.
Why AppGate
AppGate is a dynamic, innovative, and friendly place to work. Whether it’s taking ideas from our varied past experiences and applying them in different ways, or creating something completely new, we are all innovative team players who think big and want to make an impact. We strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to collaborate and contribute to the team.
- Impact without bureaucracy: Your decisions will directly influence product direction and company success.
- Small team, big mission: Work with world-class engineers and security experts in an entrepreneurial environment.
- Cutting-edge domain: Be at the forefront of securing the AI era, from Zero Trust to autonomous agents.
- Growth opportunity: Define best practices, shape culture, and grow into broader leadership roles.
- Location: NYC-based, with flexibility for hybrid work.
- We offer a competitive compensation and benefits package:
  - Competitive salary, bonus, and equity
  - 401(k) including company match
  - Full benefits including medical, dental, vision, short- and long-term disability, and life insurance
  - Flexible time off policy
  - Remote work / home office setup stipend
  - Mobile phone stipend
  - Certification assistance program
This is your chance to design AI systems that secure networks, users, and even autonomous AI agents.
If you’re a Ph.D.-level applied scientist passionate about AI Security, we want to hear from you.