What are the responsibilities and job description for the Snowflake Architect position at NuStar Technologies?
We are seeking an experienced Snowflake Data Architect to design, build, and optimize scalable cloud-based data platforms. The ideal candidate will have deep expertise in Snowflake, DBT, Snowpark, ETL/ELT pipelines, Python, and cloud data services (AWS, Azure, or GCP). This role will lead architecture decisions, ensure best practices, and enable analytics and data science teams with high-quality, reliable data solutions.
________________________________________
Key Responsibilities
Architecture & Design
Design and implement end-to-end Snowflake-based data architectures for analytics, reporting, and advanced data use cases
Define data modeling strategies (dimensional, data vault, and analytical models) optimized for Snowflake
Establish standards for data ingestion, transformation, storage, and consumption.
Snowflake Platform Management
Architect and manage Snowflake features including Warehouses, Databases, Schemas, Cloning, Time Travel, Secure Data Sharing, Data Clean Rooms, and Resource Monitoring
Optimize performance and cost using warehouse sizing, clustering, caching, and query optimization
Implement security best practices including RBAC, masking policies, row access policies, and data governance (a brief sketch follows this list).
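
As an illustration of this responsibility, below is a minimal Snowpark Python sketch of defining and applying a masking policy. The connection parameters, database, table, column, and role names are placeholders for illustration, not details from this posting.

    from snowflake.snowpark import Session

    # Placeholder connection parameters; real values come from your account setup.
    session = Session.builder.configs({
        "account": "<account_identifier>",
        "user": "<user>",
        "password": "<password>",
        "role": "SECURITYADMIN",
        "warehouse": "ADMIN_WH",
    }).create()

    # Define a masking policy that reveals emails only to a privileged role.
    session.sql("""
        CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING)
        RETURNS STRING ->
        CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val
             ELSE '***MASKED***' END
    """).collect()

    # Attach the policy to a column (database, table, and column are illustrative).
    session.sql(
        "ALTER TABLE analytics.public.customers MODIFY COLUMN email "
        "SET MASKING POLICY email_mask"
    ).collect()
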
Data Transformation & ETL/ELT
Lead ELT pipeline development using DBT (models, macros, tests, documentation, and deployments)
Design and implement ETL/ELT pipelines using cloud-native Snowpark and third-party tools; implement real-time streaming and batch data processing (see the Snowpark sketch after this list).
Ensure data quality, lineage, and observability across pipelines.
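
As one example of Snowpark-based ELT, the sketch below aggregates a hypothetical RAW.PUBLIC.ORDERS table into a daily revenue table inside Snowflake; all table and column names are assumptions made for illustration.

    from snowflake.snowpark import Session
    from snowflake.snowpark.functions import col, sum as sum_

    def build_daily_revenue(session: Session) -> None:
        # Read the (hypothetical) raw orders table registered in Snowflake.
        orders = session.table("RAW.PUBLIC.ORDERS")

        # Transform in-database: keep completed orders, aggregate revenue by day.
        daily = (
            orders
            .filter(col("STATUS") == "COMPLETED")
            .group_by(col("ORDER_DATE"))
            .agg(sum_(col("AMOUNT")).alias("DAILY_REVENUE"))
        )

        # Materialize the result in the analytics layer; the ELT work runs in Snowflake.
        daily.write.save_as_table("ANALYTICS.PUBLIC.DAILY_REVENUE", mode="overwrite")
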
Cloud & Big Data Integration
Architect solutions leveraging cloud data services (AWS, Azure, or GCP) such as object storage, messaging, and orchestration services
Integrate Apache Spark (Databricks or equivalent) for large-scale data processing and advanced transformations (a short PySpark sketch follows this list)
Support hybrid and multi-cloud data architectures.
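
A minimal PySpark sketch of the Spark integration mentioned above, assuming raw event data lands in object storage; the bucket paths and column names are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("event_cleansing").getOrCreate()

    # Cleanse raw events at scale in Spark before staging them for Snowflake.
    events = spark.read.parquet("s3://example-bucket/raw/events/")
    cleaned = (
        events
        .dropDuplicates(["event_id"])           # remove replayed events
        .filter(col("event_ts").isNotNull())    # drop records missing a timestamp
    )

    # Write curated output back to object storage for ingestion via an external stage.
    cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/events/")
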
Development & Automation
Develop data processing and automation solutions using Python
Build reusable frameworks for ingestion, transformation, validation, and monitoring (a minimal sketch follows this list)
Implement CI/CD pipelines for data workloads, including DBT and Snowpark deployments.
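
The sketch below shows one way such a reusable validation framework might look in plain Python; the Check class and run_checks helper are illustrative names, not an existing library.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Check:
        """A named data-quality rule applied to a single record."""
        name: str
        predicate: Callable[[dict], bool]

    def run_checks(row: dict, checks: list[Check]) -> list[str]:
        """Return the names of all checks the row fails."""
        return [c.name for c in checks if not c.predicate(row)]

    checks = [
        Check("amount_non_negative", lambda r: r.get("amount", 0) >= 0),
        Check("has_order_id", lambda r: bool(r.get("order_id"))),
    ]

    failures = run_checks({"order_id": "A-1", "amount": -5}, checks)
    print(failures)  # ['amount_non_negative']
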
Leadership & Collaboration
Partner with business stakeholders, analytics, and data science teams to translate requirements into scalable solutions
Mentor data engineers and analysts on Snowflake, DBT, Snowpark and data engineering best practices
Provide architectural guidance, documentation, and design reviews
Must Have Technical/Functional Skills
- Strong hands-on experience with Snowflake architecture and performance tuning
- Expertise in DBT (models, testing, macros, documentation, environments)
- Solid experience with ETL/ELT frameworks and data integration patterns
- Proficiency in Python for data engineering and automation
- Experience with Snowpark implementation
- Strong knowledge of cloud data services (AWS, Azure, or GCP)
- Advanced SQL and data modeling skills
Salary Range: $120,000 - $130,000 a year