What are the responsibilities and job description for the Generative AI Engineer position at Turing?
Please note that this position requires employees to work onsite (WFO) from Alpharetta, GA. Candidates must be located in or willing to relocate to the Alpharetta area, as remote work is not available for this role.
About Turing
Based in Palo Alto, California, Turing is the world’s first AI-powered tech services company. It has reimagined tech services from the ground up with AI by offering AI-vetted and matched talent, AI-accelerated development, and access to AI transformation experts who have built many of the most iconic Silicon Valley companies.
Founded in 2018, the company has experienced tremendous growth, with over two million global developers on its Talent Cloud and 900 clients. Turing has received numerous awards, including being named one of America’s Best Startup Employers by Forbes in 2022, ranking #1 on The Information’s 2021 annual list of the most promising B2B companies, and appearing on Fast Company’s annual list of the World’s Most Innovative Companies.
The company’s leadership team comprises AI technologists from leading organizations including Meta, Google, Microsoft, Apple, Amazon, Twitter, Stanford, Caltech, and MIT, as well as tech consulting veterans from Accenture, Cognizant, Capgemini, McKinsey, Bain, and more.
About the role:
Turing is looking for people with LLM experience to join us in solving business problems for our Fortune 500 customers. You will be a key member of the Turing GenAI delivery organization and part of a GenAI project, working with a team of other Turing engineers across different skill sets. In the past, the Turing GenAI delivery organization has implemented industry-leading multi-agent LLM systems, RAG systems, and open-source LLM deployments for major enterprises.
Required skills
● 7 years of professional experience in building machine learning models and systems.
● 1 year of hands-on experience with how LLMs work and with Generative AI (LLM) techniques, particularly prompt engineering, RAG, and agents.
● Expert proficiency in Python (including LangChain/LangGraph) and SQL is a must.
● Understanding of cloud services from Azure, GCP, or AWS for building GenAI applications.
● Excellent communication skills to effectively collaborate with business SMEs.
Roles & Responsibilities
● Develop and optimize LLM-based solutions: Lead the design, training, fine-tuning, and deployment of large language models, leveraging techniques like prompt engineering, retrieval-augmented generation (RAG), and agent-based architectures.
● Codebase ownership: Maintain high-quality, efficient code in Python (using frameworks like LangChain/LangGraph) and SQL, focusing on reusable components, scalability, and performance best practices.
● Cloud integration: Aid in the deployment of GenAI applications on cloud platforms (Azure, GCP, or AWS), optimizing resource usage and ensuring robust CI/CD processes.
● Cross-functional collaboration: Work closely with product owners, data scientists, and business SMEs to define project requirements, translate technical details, and deliver impactful AI products.
● Mentoring and guidance: Provide technical leadership and knowledge-sharing to the engineering team, fostering best practices in machine learning and large language model development.
● Continuous innovation: Stay abreast of the latest advancements in LLM research and generative AI, proposing and experimenting with emerging techniques to drive ongoing improvements in model performance.