What are the responsibilities and job description for the AI QA/Testing Engineer position at Pozent Corporation?
Skill Set - Model Validation, Bias/Fairness Testing, Automation
Role Overview
Ensure quality, fairness, and reliability of AI/ML models through comprehensive testing, validation, and automation. Focus on model performance, bias detection, and ethical AI practices.
Responsibilities
- Design and execute test strategies for AI/ML models and systems
- Perform model validation including accuracy, robustness, and performance testing
- Conduct bias and fairness testing to identify and mitigate discriminatory outcomes
- Develop automated testing frameworks for continuous model evaluation (a minimal pytest sketch follows this list)
- Test data pipelines, feature engineering, and model inference systems
- Create test cases for edge cases, adversarial inputs, and model behavior analysis
- Monitor model drift and performance degradation in production
- Collaborate with ML engineers and data scientists to ensure model quality standards
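To make the test-automation expectation above concrete, the following is a minimal pytest sketch of an automated validation suite. The LogisticRegression model and synthetic dataset stand in for a real candidate model and held-out evaluation set, and the accuracy floor and flip-rate threshold are illustrative assumptions rather than company standards.

```python
# Minimal sketch of automated model validation with pytest.
# The model, data, and thresholds below are illustrative stand-ins.
import numpy as np
import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_FLOOR = 0.85   # assumed release gate for this example
NOISE_SCALE = 0.01      # small Gaussian perturbation for a robustness probe


@pytest.fixture(scope="module")
def model_and_data():
    # Train a simple stand-in model on synthetic data; a real suite would
    # load the candidate model and a versioned evaluation set instead.
    X, y = make_classification(
        n_samples=2000, n_features=20, n_informative=5,
        class_sep=2.0, random_state=0,
    )
    X_train, X_eval, y_train, y_eval = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model, X_eval, y_eval


def test_accuracy_meets_floor(model_and_data):
    model, X_eval, y_eval = model_and_data
    acc = accuracy_score(y_eval, model.predict(X_eval))
    assert acc >= ACCURACY_FLOOR, f"accuracy {acc:.3f} below floor {ACCURACY_FLOOR}"


def test_predictions_stable_under_small_noise(model_and_data):
    # Robustness probe: predictions should rarely flip under tiny perturbations.
    model, X_eval, _ = model_and_data
    rng = np.random.default_rng(0)
    X_noisy = X_eval + rng.normal(0.0, NOISE_SCALE, size=X_eval.shape)
    flip_rate = np.mean(model.predict(X_eval) != model.predict(X_noisy))
    assert flip_rate <= 0.05, f"{flip_rate:.1%} of predictions flipped under noise"
```

In practice the fixture would load the candidate model and evaluation data from a registry rather than training inline, and the thresholds would be agreed with the ML engineers who own the model.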
Required Qualifications
- Bachelor's degree in Computer Science, Engineering, Data Science, or a related field
- Strong experience in model validation and ML testing methodologies
- Proven expertise in bias and fairness testing for AI systems
- Hands-on experience with test automation frameworks and tools
- Understanding of ML model evaluation metrics and statistical testing (illustrated by the drift-check sketch after this list)
- Proficiency in Python and testing libraries (pytest, unittest)
- Knowledge of AI ethics, responsible AI principles, and regulatory requirements
- Experience with CI/CD pipelines and version control (Git)
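As an illustration of the statistical-testing and drift-monitoring expectations, here is a minimal sketch of a per-feature drift check based on a two-sample Kolmogorov-Smirnov test. The reference and production windows are synthetic and the 0.05 significance level is an assumption; monitoring tools such as Evidently AI bundle comparable checks with reporting.

```python
# Minimal sketch of per-feature drift detection with a two-sample
# Kolmogorov-Smirnov test. The reference/production windows are synthetic
# and the 0.05 significance level is an assumed threshold.
import numpy as np
from scipy.stats import ks_2samp

ALPHA = 0.05  # assumed per-feature significance threshold


def drifted_features(reference: np.ndarray, production: np.ndarray) -> list:
    """Return indices of features whose distribution shifted significantly."""
    flagged = []
    for j in range(reference.shape[1]):
        result = ks_2samp(reference[:, j], production[:, j])
        if result.pvalue < ALPHA:
            flagged.append(j)
    return flagged


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, size=(5000, 4))
    production = reference.copy()
    production[:, 2] += 0.5                         # simulate a mean shift in one feature
    print(drifted_features(reference, production))  # expected output: [2]
```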
Preferred Qualifications
- Experience testing a range of ML model types (LLMs, computer vision, NLP, recommender systems)
- Familiarity with fairness metrics such as demographic parity, equalized odds, and disparate impact (see the sketch after this list)
- Knowledge of adversarial testing and model robustness evaluation
- Experience with A/B testing and experimental design
- Understanding of data quality testing and validation
- Familiarity with ML frameworks (PyTorch, TensorFlow, scikit-learn)
- Experience with monitoring tools (MLflow, Weights & Biases, Evidently AI)
- Knowledge of GDPR, AI Act, or other AI governance frameworks
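For reference, below is a minimal sketch of the fairness metrics named in the list above (demographic parity difference, disparate impact ratio, and an equalized-odds gap), computed directly from predictions and a binary group attribute. The example arrays are hypothetical, and a real audit would use far larger samples and report uncertainty alongside the point estimates.

```python
# Minimal sketch of common group-fairness metrics computed from raw
# predictions; the example arrays and binary group encoding are hypothetical.
import numpy as np


def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)


def disparate_impact_ratio(y_pred, group):
    """Ratio of the lower to the higher positive-prediction rate
    (the 'four-fifths rule' flags values below 0.8)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)


def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate across groups."""
    gaps = []
    for label in (1, 0):  # TPR gap when label == 1, FPR gap when label == 0
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)


if __name__ == "__main__":
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print(demographic_parity_difference(y_pred, group))
    print(disparate_impact_ratio(y_pred, group))
    print(equalized_odds_gap(y_true, y_pred, group))
```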