What are the responsibilities and job description for the Machine Learning Engineer with GEN AI experience position at Compunnel Inc.?
Please find the position details below:
Job Title: Machine Learning Engineer with GEN AI experience
Location: Durham, NC; Boston, MA; Merrimack, NH; or Smithfield, RI (2 weeks remote, 2 weeks onsite)
Duration: Long-term contract with possibility of conversion
Interview: 2 rounds; 1st: 60-minute technical panel, 2nd: 30-minute manager call
What the Client is Looking For:
Core Technical Fit
- Software Engineering Strength (10 yrs) — APIs, microservices, cloud deployments.
- Machine Learning Engineering (3–5 yrs) — experience building, deploying, and maintaining ML or GenAI solutions.
- RAG Expertise — must have built and deployed Retrieval-Augmented Generation pipelines.
- Vector Database Experience — FAISS, Pinecone, Weaviate, Milvus.
- Agent Frameworks — LangChain, CrewAI, LangGraph, AutoGen.
- Cloud Native Skills (AWS) — S3, Lambda, ECS, SageMaker.
- DevOps — Docker, Kubernetes, GitHub Actions, CI/CD.
- Observability — Prometheus, Grafana, OpenTelemetry (nice-to-have).
- Data & Knowledge Graphs — Snowflake, Oracle, Neo4j, RDF/SPARQL.
- AI Ethics & Governance — understanding of Responsible AI.
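To give a sense of the vector-database skill listed above, here is a toy cosine-similarity search in plain Python. It is a minimal sketch of the core operation that libraries such as FAISS, Pinecone, Weaviate, or Milvus perform at scale; the function names and data are illustrative, not from the posting.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query, index, k=2):
    # index: list of (doc_id, vector) pairs; returns the k closest doc_ids.
    scored = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```

A real vector database replaces this linear scan with approximate nearest-neighbor indexes (e.g. HNSW or IVF) so lookups stay fast over millions of embeddings.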
What the Project Is About:
This is a Machine Learning Engineering project within the AI/ML division, focused on integrating and productionizing AI/GenAI solutions.
Team Context:
- 15-member team: 6 are AI/ML specialists; 9 are on the BI (Business Intelligence) side.
- This is the first AI delivery team in the organization — meaning they are defining AI/ML standards, pipelines, and best practices for future teams.
Project Focus:
The goal is to:
- Deploy and scale AI/ML models (especially Generative AI and RAG-based solutions) into production.
- Integrate agentic or multi-agent systems using frameworks like LangChain, CrewAI, LangGraph, or AutoGen.
- Build cloud-native ML/GenAI pipelines on AWS (Lambda, ECS, S3, SageMaker).
- Establish data retrieval and augmentation systems (RAG) using vector databases (FAISS, Pinecone, Weaviate, Milvus).
- Develop monitoring, observability, and CI/CD practices for deployed ML models.
- Promote Responsible AI practices across the AI ecosystem.
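The RAG work described above follows a simple pattern: retrieve relevant documents, then augment the prompt with them before calling an LLM. A minimal sketch of that flow is below; for simplicity it ranks documents by keyword overlap rather than the embedding search a production pipeline would use, and every function name and document here is illustrative.

```python
import re

def tokens(text):
    # Lowercased word set, punctuation stripped.
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, corpus, k=1):
    # Toy retrieval: rank documents by shared-word count with the question.
    q = tokens(question)
    scored = sorted(corpus, key=lambda doc: len(q & tokens(doc)), reverse=True)
    return scored[:k]

def build_prompt(question, corpus):
    # Augment the question with retrieved context before sending it to an LLM.
    context = "\n".join(retrieve(question, corpus))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

In the pipelines this role describes, `retrieve` would be backed by a vector database and `build_prompt`'s output would go to a hosted model, with an agent framework such as LangChain orchestrating the steps.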
Essentially, this is a hands-on engineering role (not research-oriented) focused on:
“Turning research and experimental models into scalable, production-grade AI systems.”