What are the responsibilities and job description for the AI Operations Platform Consultant position at ClearBridge Technology Group?
Job Details
Our client, with offices in Charlotte, NC and Jersey City, NJ, is seeking an AI Operations Platform Consultant for a 6-month contract. The consultant will work a hybrid schedule (3 days onsite, 2 remote) out of either location.
Responsibilities:
- Deploying, managing, operating, and troubleshooting containerized services at scale on Kubernetes (OpenShift) for mission-critical applications
- Deploying, configuring, and tuning LLMs using TensorRT-LLM and Triton Inference Server
- Managing, operating, and supporting MLOps/LLMOps pipelines that use TensorRT-LLM and Triton Inference Server to deploy inference services in production
- Setting up and operating monitoring of AI inference services for performance and availability (see the illustrative health-check sketch after this list)
- Deploying and troubleshooting LLM models on a containerized platform, including monitoring and load balancing
- Following standard processes for operating a mission-critical system: incident management, change management, event management, etc.
- Managing scalable infrastructure for LLM deployment and operation
- Deploying models in production environments, including containerization, microservices, and API design
- Administering Triton Inference Server, including its architecture, configuration, and deployment
- Optimizing models using Triton with TensorRT-LLM
- Applying model optimization techniques, including pruning, quantization, and knowledge distillation
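For illustration only, a minimal health-check and smoke-test sketch for a Triton-served TensorRT-LLM endpoint might look like the Python snippet below. The URL, model name, and tensor names (text_input, max_tokens, text_output) are assumptions and will differ for the actual deployment.

```python
import numpy as np
import tritonclient.http as httpclient

TRITON_URL = "localhost:8000"  # assumed HTTP endpoint; the real service/route will differ
MODEL_NAME = "ensemble"        # assumed name of the TensorRT-LLM ensemble model

client = httpclient.InferenceServerClient(url=TRITON_URL)

# Basic liveness/readiness checks exposed by Triton's HTTP API
print("server live: ", client.is_server_live())
print("server ready:", client.is_server_ready())
print("model ready: ", client.is_model_ready(MODEL_NAME))

# Minimal smoke-test inference; tensor names and shapes depend on the deployed model config
text = httpclient.InferInput("text_input", [1, 1], "BYTES")
text.set_data_from_numpy(np.array([["health check prompt"]], dtype=object))
max_tokens = httpclient.InferInput("max_tokens", [1, 1], "INT32")
max_tokens.set_data_from_numpy(np.array([[16]], dtype=np.int32))

result = client.infer(
    model_name=MODEL_NAME,
    inputs=[text, max_tokens],
    outputs=[httpclient.InferRequestedOutput("text_output")],
)
print(result.as_numpy("text_output"))
```

In practice, checks like these would typically run as Kubernetes readiness/liveness probes or as a synthetic monitor feeding the availability dashboards, rather than ad hoc.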
Required Skills:
- Ability to pass an in-depth background check
- Ability to work onsite out of either Jersey City, NJ or Charlotte, NC 3 days per week
- Experience deploying, managing, operating, and troubleshooting containerized services at scale on Kubernetes (OpenShift) for mission-critical applications (see the pod-status sketch after this list)
- Experience deploying, configuring, and tuning LLMs using TensorRT-LLM and Triton Inference Server
- Experience deploying and troubleshooting LLM models on a containerized platform, including monitoring and load balancing
- Experience with standard processes for operating a mission-critical system: incident management, change management, event management, etc.
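As a rough illustration of the Kubernetes troubleshooting skills above, a sketch like the following could summarize pod health for an inference deployment. The namespace (llm-serving) and label selector (app=triton) are hypothetical.

```python
from kubernetes import client, config

# Assumes kubeconfig access to the cluster; inside a pod, use config.load_incluster_config()
config.load_kube_config()
core = client.CoreV1Api()

# Hypothetical namespace and label selector for the Triton inference pods
pods = core.list_namespaced_pod(namespace="llm-serving", label_selector="app=triton")

for pod in pods.items:
    statuses = pod.status.container_statuses or []
    restarts = sum(cs.restart_count for cs in statuses)
    ready = all(cs.ready for cs in statuses) if statuses else False
    print(f"{pod.metadata.name}: phase={pod.status.phase} ready={ready} restarts={restarts}")
```

This mirrors what `oc get pods` or `kubectl get pods` would show, but in a form that can feed automated alerting or incident tooling.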
ClearBridge Technology Group is an Equal Opportunity Employer.
We offer excellent benefits and compensation packages.
The expected hourly rate range for this role is $75 - $110 per hour.
The posted range is an estimate; the actual compensation offer will be based on the candidate's experience, skills, and qualifications, and will be in line with internal equity.