What are the responsibilities and job description for the Gen AI Architect position at Recfront?
About Our Client
Our client is a global enterprise operating at the intersection of data, analytics, and AI, supporting large-scale digital transformation initiatives for leading organizations across multiple industries.
With a strong international presence and delivery footprint spanning India, the Middle East, and North America, the organization combines deep domain expertise with advanced technology capabilities to drive measurable business outcomes. Their operating model leverages a blend of onshore, nearshore, and offshore teams, enabling scalable and efficient execution across geographies.
Job Overview
As a Data Architect, you will play a critical role in designing and building scalable, high-performance data ecosystems that enable advanced analytics, AI, and business intelligence.
This is a strategic role that bridges business requirements and technical execution, ensuring data platforms are optimized for performance, governance, and AI-readiness across distributed environments.
Key Responsibilities
- Define and lead the end-to-end data architecture strategy aligned with business and analytics goals
- Design conceptual, logical, and physical data models for high-performance analytics and reporting
- Architect modern data platforms (Lakehouse / Medallion architecture), ensuring data quality, lineage, and scalability (see the pipeline sketch after this list)
- Establish data governance frameworks, including metadata management, security standards, and master data management (MDM) practices
- Enable AI and advanced analytics use cases, including support for LLMs, vector databases, and RAG pipelines
- Collaborate across data engineering, business, and leadership teams to translate requirements into scalable architectures
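To make the Lakehouse / Medallion responsibility concrete, here is a minimal sketch of a bronze/silver/gold flow, assuming PySpark with the delta-spark package configured; all paths, keys, and column names are hypothetical placeholders, not part of the posting.

```python
# Minimal Medallion (bronze/silver/gold) sketch on Delta Lake.
# Assumes a Spark session configured with the delta-spark package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw events as-is, preserving source fidelity for lineage.
raw = spark.read.json("/lake/landing/events/")  # hypothetical landing path
raw.write.format("delta").mode("append").save("/lake/bronze/events")

# Silver: deduplicate and conform types so downstream consumers share one schema.
bronze = spark.read.format("delta").load("/lake/bronze/events")
silver = (
    bronze
    .dropDuplicates(["event_id"])                      # hypothetical business key
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/events")

# Gold: aggregate into analytics-ready tables for BI and AI feature use.
gold = silver.groupBy("customer_id").agg(F.count("*").alias("event_count"))
gold.write.format("delta").mode("overwrite").save("/lake/gold/customer_activity")
```

Each layer here writes a Delta table, which is what gives the platform the quality, lineage, and scalability properties the responsibility describes.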
Technical Expertise
- 8 years of experience in Data Architecture / Senior Data Engineering roles
- Strong expertise in data modeling techniques (3NF, dimensional modeling, Data Vault 2.0); see the star-schema sketch after this list
- Experience with modern data platforms in the Azure, AWS, or Snowflake ecosystems
- Hands-on exposure to Spark-based processing and Delta Lake architectures
- Strong knowledge of SQL (PostgreSQL, SQL Server) and NoSQL systems
- Familiarity with vector databases (e.g., Pinecone, Milvus) and AI-ready data systems (see the retrieval sketch after this list)
- Experience with data modeling tools such as ER/Studio, Erwin, or Lucidchart
- Ability to modernize legacy systems through reverse engineering and architecture redesign
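As an illustration of the dimensional modeling expertise called for above, here is a minimal star-schema sketch (one fact table, one dimension), assuming Spark SQL over Delta tables and an existing gold schema; all table and column names are hypothetical.

```python
# Minimal star-schema sketch: a dimension plus a fact table that references it.
# Assumes Spark with Delta support and a pre-created `gold` schema.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

# Dimension: descriptive attributes keyed by a surrogate key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS gold.dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        segment      STRING
    ) USING DELTA
""")

# Fact: additive measures plus foreign keys into the dimensions.
spark.sql("""
    CREATE TABLE IF NOT EXISTS gold.fact_sales (
        sale_id      BIGINT,
        customer_key BIGINT,        -- joins to dim_customer
        date_key     INT,           -- joins to a dim_date table
        amount       DECIMAL(18, 2)
    ) USING DELTA
""")
```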
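And for the vector-database and RAG requirements, a minimal sketch of the similarity lookup at the core of a RAG pipeline, using plain NumPy in place of a managed vector database such as Pinecone or Milvus; the documents and embeddings are hypothetical stand-ins.

```python
# Minimal vector-similarity retrieval sketch (the "R" in RAG).
# Random vectors stand in for real embeddings from an embedding model.
import numpy as np

documents = ["refund policy", "shipping times", "warranty terms"]
doc_vectors = np.random.rand(len(documents), 384)  # stand-in embeddings
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)

def retrieve(query_vector: np.ndarray, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are most cosine-similar."""
    q = query_vector / np.linalg.norm(query_vector)
    scores = doc_vectors @ q           # cosine similarity on unit vectors
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# The retrieved text would then be injected into an LLM prompt as context.
print(retrieve(np.random.rand(384)))
```

A production system would swap the in-memory array for an indexed vector store, but the retrieval contract (embed, score, return top-k context) is the same.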
Work Model
- Global delivery environment with collaboration across India, the Middle East, and North America
- Opportunity to work on cross-border data platforms and transformation programs
- Exposure to enterprise-scale data and AI initiatives