What are the responsibilities and job description for the Senior Data Ops Engineer position at Wells Fargo?
Description
Title: Senior Data Ops Engineer
Location: Irving, TX
Alternative Locations: Charlotte, NC; Chandler, AZ; Des Moines, IA; Minneapolis, MN
Duration: 12 months
Work Engagement: W2
Work Schedule: Hybrid 3 days in office/2 days remote
Benefits on offer for this contract position: health insurance, life insurance, 401(k), and voluntary benefits
Summary:
In this contingent resource assignment, you may:
Consult on complex initiatives with broad impact and large-scale planning for Specialty Software Engineering.
Review and analyze complex, multi-faceted, larger-scale, or longer-term Specialty Software Engineering challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented factors.
Contribute to the resolution of complex, multi-faceted situations requiring a solid understanding of the function, policies, procedures, and compliance requirements that meet deliverables.
Strategically collaborate and consult with client personnel.
Required Qualifications:
Specialty Software Engineering experience, or equivalent, demonstrated through one or a combination of the following: work or consulting experience, training, military experience, education.
Key Responsibilities:
Implement and operationalize modern AI-enabled data capabilities on Google Cloud to ingest, transform, and distribute data for a variety of big data applications
Leverage AI/agentic frameworks to automate data management, governance, and data consumption capabilities: data pipelines, data quality, metadata, data compliance, etc.
Work within a matrixed organization with principal engineers, product managers, and data engineers to roadmap, plan, and deliver key data capabilities based on priority
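As a simplified illustration of the automated data-quality capability named above, the sketch below runs rule-based validations over records. All rule names, fields, and thresholds here are hypothetical; a production version would be generated and scheduled by the pipeline tooling the posting describes (e.g., Airflow on Cloud Composer), not run standalone.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical data-quality rules of the kind an agentic pipeline
# might generate and enforce automatically over incoming records.
@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # returns True when the record passes

RULES = [
    Rule("id_present", lambda r: r.get("id") is not None),
    Rule("amount_non_negative", lambda r: r.get("amount", 0) >= 0),
    Rule("currency_known", lambda r: r.get("currency") in {"USD", "EUR"}),
]

def validate(records: list[dict]) -> dict[str, list]:
    """Split records into passing and failing sets, recording each failure's cause."""
    passed, failed = [], []
    for rec in records:
        violations = [rule.name for rule in RULES if not rule.check(rec)]
        if violations:
            failed.append({"record": rec, "violations": violations})
        else:
            passed.append(rec)
    return {"passed": passed, "failed": failed}
```

A caller would feed `validate` a micro-batch of records and route the `failed` set to a quarantine table for review.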
Key Requirements:
Applicants must be authorized to work for ANY employer in the U.S. This position is not eligible for visa sponsorship.
Demonstrable recent skills with AI tooling such as LangChain, LangGraph/ADK, agentic frameworks, RAG, GraphRAG, and MCP for building agent-based data capabilities
Data engineering experience, including hands-on work with cloud data solutions: creating and supporting Spark-based ingestion and processing
Data lakehouse architecture and design, including hands-on experience with Python, PySpark, Kafka, Airflow, Google Cloud Storage, BigQuery, Dataproc, and Cloud Composer
Hands-on experience developing data flows using Kafka, Flink, and Spark Streaming
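The streaming data flows named in the last requirement typically come down to windowed aggregations over keyed event streams. The stdlib-only sketch below illustrates the tumbling-window count pattern that a Kafka/Flink/Spark Streaming job would apply at scale; the event shape and window size are assumptions for illustration only.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Count events per (key, window-start) bucket.

    `events` is an iterable of (epoch_seconds, key) pairs. In a real
    pipeline these would arrive from a Kafka topic, and the aggregation
    would be distributed across workers by Flink or Spark Streaming.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event to the start of its tumbling window.
        window_start = (ts // window_secs) * window_secs
        counts[(key, window_start)] += 1
    return dict(counts)
```

For example, with 60-second windows, events at t=0 and t=59 for the same key land in one bucket, while an event at t=60 opens the next.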
Desired Qualifications:
Proven experience using AI to auto-generate data-engineering code, plus context engineering and prompt engineering
Deep background in cloud-based data lakes, data warehouses, and automated data pipelines
Public cloud certifications such as GCP Professional Data Engineer, Azure Data Engineer, or AWS Certified Data Analytics Specialty
Web-based UI development using React and Node.js is a plus