What are the responsibilities and job description for the Sr. GenAI Engineer- Dallas, TX- Only Locals position at Jobs via Dice?
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Keylent, is seeking the following. Apply via Dice today!
Job Title: Sr. GenAI Engineer
Location: Dallas, TX
Required Skills: Strong expertise in knowledge graph, Graph DB, Vector DB, Neo4j, RAG, and similar technologies.
About the Role:
A finance client is seeking a Sr. GenAI Engineer (11 years of experience) with strong expertise in knowledge graphs, Graph DBs, Vector DBs, Neo4j, RAG, and similar technologies. This engineer will design and implement data infrastructure that enables efficient fine-tuning and deployment of large language models (LLMs) on client servers for low-latency inference. The role demands a hands-on technologist who can architect, build, and optimize data systems serving enterprise-grade AI use cases.
Key Responsibilities:
- Design and implement GraphDB and VectorDB solutions to store, query, and retrieve structured and unstructured financial data.
- Build knowledge graph pipelines integrating multiple data sources to support LLM fine-tuning and retrieval-augmented generation workflows.
- Set up scalable data pipelines for model training, embedding generation, and data preprocessing.
- Collaborate with AI researchers and ML engineers to prepare data and infrastructure for fine-tuning open-source or proprietary LLMs.
- Deploy and optimize model hosting for fast inference on on-prem or cloud GPU servers.
- Ensure data governance, lineage, and compliance with internal and regulatory standards.