What are the responsibilities and job description for the Data Engineer (Snowflake + AI) position at Jobs via Dice?
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Tanisha Systems, Inc., is seeking the following. Apply via Dice today!
Position: Data Engineer (Snowflake AI)
Location: Sunnyvale, CA (onsite); candidates willing to relocate are also welcome.
Salary: Market rate per year, plus benefits
Rate: Market rate (DOE), W2 contract
JC 156232
Job Description:
We are looking for a skilled Data Engineer with strong expertise in Snowflake and AI/ML technologies. The ideal candidate will be responsible for building scalable data pipelines, enabling advanced analytics, and supporting AI-driven solutions.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Snowflake
- Implement data ingestion, transformation, and integration processes
- Work with structured and unstructured data for AI/ML use cases
- Collaborate with data scientists to deploy and optimize AI/ML models
- Optimize Snowflake performance, cost, and data architecture
- Ensure data quality, governance, and security standards
- Support real-time and batch data processing frameworks
Required Skills:
- Strong experience with Snowflake (data modeling, performance tuning, security)
- Proficiency in SQL and Python
- Experience with ETL/ELT tools (e.g., Airflow, Informatica, dbt)
- Hands-on experience with cloud platforms (AWS/Azure/Google Cloud Platform)
- Understanding of AI/ML concepts and model deployment
- Experience with big data technologies (e.g., Spark; Kafka is a plus)
- Experience with MLOps and model lifecycle management
- Knowledge of data lake and lakehouse architectures
- Familiarity with GenAI/LLM integration is an added advantage
- Strong problem-solving and analytical skills
- Strong communication and stakeholder-management skills
- Ability to work in a fast-paced, collaborative environment