What are the responsibilities and job description for the AI QE Architect and Governance Leader (NO C2C) position at Atinfo Technology Inc?
Job Details
No sponsorship, no C2C; onsite from day one.
Job description:
We are seeking an experienced AI Quality Engineering (QE) Architect & Governance Leader to drive end-to-end quality, automation, and governance for AI, ML, and GenAI solutions across mission-critical and highly regulated environments. This role combines deep expertise in automation, AI/ML technologies, and agentic workflows with strong leadership in governance, risk, and compliance, especially for the Energy, Utilities, and Nuclear domains.
You will architect future-state AI QE frameworks, establish testing standards for LLMs and agents, automate validation pipelines, enforce model governance, and lead enterprise-wide assurance initiatives.
Core Experience & Qualifications
- 10-14 years as a Technology Architect, SDET, or QE Leader, with strong coding skills in Python, TypeScript, or Java.
- Proven experience designing automation frameworks or developer tools for large engineering organizations.
- Hands-on expertise with Large Language Models (LLMs), prompt engineering, and safety evaluation techniques.
- Exposure to Agentic AI systems and orchestration tools such as LangGraph, AutoGen, CrewAI, or similar agent frameworks.
- Experience implementing Model Context Protocol (MCP) for real-time automation, autonomous workflows, or CI/CD integrations.
- Experience working in highly regulated industries such as Energy, Utilities, Nuclear, Healthcare, or BFSI.
AI / ML Technologies
- Practical experience with frameworks and ecosystems: LangChain, Hugging Face, GPT models, and vector databases.
- Working knowledge of ML/DL libraries: Scikit-learn, TensorFlow, Keras, PyTorch, Hugging Face Transformers, OpenCV, NLTK, and BART.
- Understanding of RAG architectures, embeddings, and semantic search (bonus).
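To give a concrete sense of the RAG/semantic-search item above, here is a minimal sketch of embedding-based retrieval. It assumes the sentence-transformers package and an illustrative in-memory document list rather than a production vector database; the model name and documents are placeholders, not part of the role's stack.

```python
# Minimal semantic-search sketch: embed documents, then rank them against a query
# by cosine similarity. Assumes sentence-transformers is installed; a production
# RAG system would use a vector database instead of an in-memory list.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

documents = [
    "Nuclear plant outage schedules are reviewed quarterly.",
    "Utility customers can report outages through the mobile app.",
    "Model drift reviews are part of the AI governance checklist.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

def semantic_search(query: str, top_k: int = 2) -> list[tuple[float, str]]:
    """Return the top_k documents ranked by cosine similarity to the query."""
    query_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vec  # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), documents[i]) for i in best]

print(semantic_search("How do customers report a power outage?"))
```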
GenAI & AI Agent Development
- Expertise in designing, developing, validating, and deploying Generative AI solutions.
- Experience building AI agents, multi-agent workflows, or autonomous decision systems.
- Ability to define governance for:
- LLM drift detection
- Prompt quality standards
- Agent monitoring & observability
- Data lineage & model versioning
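As one illustration of the governance items above, here is a hedged sketch of LLM drift detection: comparing the distribution of a per-response quality score between a baseline window and the current window with a two-sample Kolmogorov-Smirnov test from SciPy. The scores, the significance level, and where the scores come from are assumptions for illustration only.

```python
# Drift-detection sketch: flag drift when the distribution of per-response
# quality scores shifts significantly away from a baseline window.
# The 0.05 significance level and the example scores are illustrative assumptions.
from scipy.stats import ks_2samp

def detect_drift(baseline_scores: list[float],
                 current_scores: list[float],
                 alpha: float = 0.05) -> bool:
    """Return True if the current score distribution differs from baseline."""
    statistic, p_value = ks_2samp(baseline_scores, current_scores)
    return p_value < alpha

baseline = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.94]
current = [0.71, 0.65, 0.70, 0.68, 0.74, 0.69, 0.72, 0.66]
print("Drift detected:", detect_drift(baseline, current))
```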
Automation & Quality Engineering
- Strong experience building test automation frameworks using Python, PyTest, Selenium, Playwright, and Requests.
- Ability to create automated tests covering:
- Functional
- API
- Integration
- Performance
- Security
- AI/ML validation (LLM testing, model accuracy, hallucination detection; see the PyTest sketch after this section)
- Proficiency in API testing and validation of RESTful services.
- Plus: experience with performance/load testing tools such as K6 or JMeter.
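A minimal PyTest sketch of the kind of AI/ML validation test listed above. The `generate_answer` client and the grounding heuristic (every numeric claim in the answer must appear in the supplied context) are illustrative assumptions, not a prescribed framework.

```python
# test_llm_grounding.py - illustrative LLM validation test.
# `generate_answer(question, context)` is a hypothetical client wrapping the
# model under test; the numeric-grounding check is a simple hallucination heuristic.
import re
import pytest

from my_llm_client import generate_answer  # hypothetical wrapper around the LLM

CASES = [
    ("What is the plant's licensed capacity?",
     "Unit 2 has a licensed capacity of 1,150 MWe and entered service in 1987."),
]

@pytest.mark.parametrize("question,context", CASES)
def test_answer_numbers_are_grounded_in_context(question, context):
    answer = generate_answer(question, context)
    answer_numbers = set(re.findall(r"\d[\d,\.]*", answer))
    context_numbers = set(re.findall(r"\d[\d,\.]*", context))
    ungrounded = answer_numbers - context_numbers
    assert not ungrounded, f"Possible hallucinated figures: {ungrounded}"
```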
AI Governance & Compliance
- Establish AI/ML quality standards, testing guidelines, and risk controls.
- Define governance around:
- Data security & privacy
- Model evaluation KPIs (accuracy, bias, toxicity, hallucination rates)
- Regulatory alignment for Energy/Utility operations
- Continuous monitoring & drift alerts
- Experience working with IRB, compliance, or audit teams.
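To make the model evaluation KPIs above concrete, a hedged sketch of rolling per-response evaluation flags up into model-level rates and checking them against governance thresholds. The record fields and the threshold values are assumptions for illustration, not mandated limits.

```python
# KPI roll-up sketch: aggregate per-response evaluation flags into model-level
# rates and compare them to governance thresholds. Records and limits are illustrative.
from dataclasses import dataclass

@dataclass
class EvalRecord:
    correct: bool        # answer judged factually correct
    hallucinated: bool   # answer contained unsupported claims
    toxic: bool          # answer flagged by a toxicity classifier

THRESHOLDS = {"accuracy": 0.90, "hallucination_rate": 0.02, "toxicity_rate": 0.001}

def kpi_report(records: list[EvalRecord]) -> dict[str, float]:
    """Compute model-level KPI rates from individual evaluation records."""
    n = len(records)
    return {
        "accuracy": sum(r.correct for r in records) / n,
        "hallucination_rate": sum(r.hallucinated for r in records) / n,
        "toxicity_rate": sum(r.toxic for r in records) / n,
    }

def violations(report: dict[str, float]) -> list[str]:
    """KPIs where the model falls outside the governance thresholds."""
    bad = [k for k in ("hallucination_rate", "toxicity_rate") if report[k] > THRESHOLDS[k]]
    if report["accuracy"] < THRESHOLDS["accuracy"]:
        bad.append("accuracy")
    return bad
```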
Cloud, DevOps & CI/CD
- Knowledge of deploying AI and automation solutions on AWS.
- Experience implementing CI/CD pipelines for ML and LLM models:
- Model versioning & lifecycle
- Retraining workflows
- Automated evaluation gates (see the sketch after this section)
- Infrastructure-as-code (IaC) familiarity
- Experience implementing observability frameworks for AI/ML systems.
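A hedged sketch of the automated evaluation gate mentioned above: a script a CI/CD pipeline could run after candidate-model evaluation, failing the build when results miss assumed thresholds. The results-file layout and the limits are illustrative, not part of any specific pipeline here.

```python
# evaluation_gate.py - illustrative CI/CD evaluation gate.
# Reads an evaluation results file produced earlier in the pipeline and exits
# nonzero when any metric misses its threshold, blocking promotion of the model.
import json
import sys

THRESHOLDS = {"accuracy": 0.90, "hallucination_rate": 0.02}  # assumed limits

def main(results_path: str) -> int:
    with open(results_path) as f:
        metrics = json.load(f)  # e.g. {"accuracy": 0.93, "hallucination_rate": 0.01}
    failures = []
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        failures.append("accuracy below threshold")
    if metrics["hallucination_rate"] > THRESHOLDS["hallucination_rate"]:
        failures.append("hallucination rate above threshold")
    for failure in failures:
        print(f"GATE FAILED: {failure}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```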
SDLC & Collaboration
- Strong understanding of end-to-end SDLC and QE methodologies.
- Work closely with developers, product managers, data scientists, and business stakeholders.
- Ability to provide clear communication around test strategy, risks, coverage, and governance readiness.
- Skilled in defect triage, risk-based testing, and quality strategy leadership.