What are the responsibilities and job description for the Sr. Data Engineer position at Peterson Technology Partners?
Job Description:
Only local candidates able to work on-site five days a week will be considered; no relocation assistance is provided.
The Senior Data Engineer will be part of a team designing and implementing our modern data platform to support enterprise-wide analytics, integration, and AI/ML initiatives. Together, the team will build a scalable enterprise data warehouse, enable modern visualization tools, and advance self-service analytics capabilities for business users.
Major Duties and Responsibilities:
- Data Engineering & Pipeline Development – Design, build, and maintain scalable data pipelines and workflows in Snowflake, leveraging dbt for modular transformations and Fivetran for automated ingestion (a minimal sketch follows this list).
- Enterprise Data Warehouse & Modeling – Architect and optimize the enterprise data warehouse, applying best practices in data modeling, performance tuning, and scalability to support enterprise-wide analytics.
- Data Governance & Quality – Partner with governance teams on data cataloging, lineage documentation, and stewardship, while ensuring data quality, security, and compliance.
- Collaboration & Business Enablement – Work closely with business and analytics teams to translate requirements into scalable data solutions.
- Cloud Integration & Infrastructure – Utilize AWS services (primarily S3) for data storage and integration, ensuring reliability, performance, and cost-effectiveness.
- Additional responsibilities as assigned.
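To make the first bullet concrete, here is a minimal sketch (not taken from the posting itself) of one such pipeline step: raw Parquet files landed in S3 are copied into Snowflake, then the version-controlled dbt models are rebuilt. It assumes the snowflake-connector-python package and dbt CLI; the account, stage, table, and project names are hypothetical placeholders.

```python
# A minimal pipeline-step sketch; all names below are hypothetical.
import subprocess

import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # hypothetical account identifier
    user="etl_user",         # hypothetical service user
    password="...",          # in practice, load from a secrets manager
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # Copy newly landed Parquet files from an external S3 stage into a raw table.
    cur.execute("""
        COPY INTO raw.orders
        FROM @s3_landing_stage/orders/
        FILE_FORMAT = (TYPE = 'PARQUET')
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    conn.close()

# Rebuild the version-controlled dbt models that transform the raw layer.
subprocess.run(["dbt", "build", "--project-dir", "analytics_dbt"], check=True)
```

In practice a pipeline like this would be orchestrated (and credentials injected) by a scheduler rather than run ad hoc, but the shape of the work is the same.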
Required:
- Bachelor’s degree in Computer Science, Data Engineering, Information Systems, or a related technical field.
- 5 years of professional experience in data engineering or related roles.
- Proven expertise with Snowflake for data pipelines, development, and optimization.
- Strong experience with dbt for modular transformations and Fivetran for automated ingestion.
- Solid background in data warehousing, dimensional modeling, and performance tuning.
- Hands-on experience with AWS cloud services (especially S3) for data integration and storage (see the S3 sketch after this list).
- Familiarity with data governance tools (Alation or Atlan) for cataloging, lineage, and stewardship.
- Proficiency in SQL and comfort with scripting languages such as Python.
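As a small, hedged illustration of the routine S3 work this requirement implies: listing newly arrived files under a prefix so they can be staged for ingestion. It assumes the boto3 SDK and default AWS credentials; the bucket and prefix names are hypothetical.

```python
# List object keys under an S3 prefix; bucket/prefix names are hypothetical.
import boto3

s3 = boto3.client("s3")

def list_keys(bucket: str, prefix: str) -> list[str]:
    """Return every object key under a prefix, paginating past 1,000 results."""
    keys: list[str] = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys

# E.g., find the files landed for a given day before staging them into Snowflake.
print(list_keys("enterprise-data-landing", "orders/2024/06/01/"))
```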
Preferred:
- Master’s degree in a related field.
- Experience with real-time/streaming technologies (e.g., Kafka, Kinesis, Flink); a consumer sketch follows this list.
- Knowledge of data security, compliance, and access control best practices.
- Exposure to AI/ML workflows and integrating data engineering solutions with ML pipelines.
- Background in modern visualization platforms (Power BI, Tableau, Looker) to support self-service analytics.
- Demonstrated ability to mentor junior engineers and contribute to engineering best practices.
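For the streaming item above, a hedged sketch of what that exposure might look like, using the kafka-python client; the topic name and broker address are hypothetical.

```python
# Consume JSON events from a Kafka topic; topic/broker names are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "order-events",                          # hypothetical topic
    bootstrap_servers=["broker1:9092"],      # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # In a real pipeline, events would be batched to S3 or a Snowflake
    # staging table rather than printed.
    print(event.get("order_id"), event.get("status"))
```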
Skills:
- Snowflake (Expert) – deep expertise in data pipelines, query optimization, and enterprise-scale implementation.
- dbt (Advanced) – strong ability to design modular, reusable transformations and maintain version-controlled models.
- Fivetran (Advanced) – hands-on skill in managing ingestion pipelines and troubleshooting data connectors.
- Data Warehousing & Data Modeling (Advanced) – proficiency in dimensional modeling, star/snowflake schemas, and performance tuning.
- AWS Cloud (Intermediate) – working knowledge of S3 for storage and integration; familiarity with broader AWS data services a plus.
- Data Governance (Intermediate) – understanding of metadata management, lineage, and cataloging.
- SQL (Expert) – strong command of writing complex queries and optimizing performance.
- Python (Intermediate) – ability to script automation, testing, and lightweight data transformations (illustrated after this list).
- Collaboration & Communication – skilled in working with cross-functional teams and translating technical concepts to business stakeholders.
- Emerging Tech & AI – awareness of how AI/ML can accelerate data engineering practices; willingness to experiment and learn.
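As one example of the lightweight automation and testing the Python skill implies: a row-count reconciliation check between a raw table and the model built from it. Connection parameters and table names are hypothetical placeholders.

```python
# Reconcile row counts between a raw table and a modeled table.
import snowflake.connector

def row_count(cur, table: str) -> int:
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]

conn = snowflake.connector.connect(
    account="my_account", user="qa_user", password="...",  # hypothetical
    warehouse="TRANSFORM_WH", database="ANALYTICS",
)
try:
    cur = conn.cursor()
    raw, modeled = row_count(cur, "RAW.ORDERS"), row_count(cur, "MARTS.FCT_ORDERS")
    assert raw == modeled, f"Row-count mismatch: raw={raw}, modeled={modeled}"
finally:
    conn.close()
```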
Benefits: Medical Insurance, PTO, 401(k), and more
Salary: $150,000 - $165,000