What are the responsibilities and job description for the Data Engineer position at High Trail?
Data Engineer
Position Summary
The Data Engineer plays a key role in designing, building, and maintaining scalable data pipelines and architectures using Microsoft Fabric. This position supports the organization’s data strategy by ensuring reliable data availability, optimizing workflows for performance, and enabling effective data visualization through tools like Power BI.
This role involves developing ETL processes, monitoring and troubleshooting data pipelines, and maintaining clear documentation of data systems. The Data Engineer will collaborate cross-functionally to deliver efficient data solutions that support strategic decision-making and operational performance.
The ideal candidate has strong programming experience (Python, Java, or Scala), a solid understanding of data governance and security, and hands-on experience working within modern cloud-based data environments.
Key Responsibilities
Data Pipeline & Architecture
- Design, develop, and maintain scalable data pipelines and architectures using Microsoft Fabric
- Build and manage ETL processes to ensure data accuracy and accessibility
- Optimize data workflows for performance and scalability
Visualization & Analytics Support
- Provide guidance on data visualization tools, particularly Power BI
- Support development of standards and best practices for dashboards and reporting
Monitoring & Troubleshooting
- Monitor data pipelines and resolve issues in a timely manner to ensure reliability
Documentation & Best Practices
- Create and maintain documentation for data processes, systems, and workflows
- Stay current with industry trends and advancements in data engineering and Microsoft Fabric
Qualifications
Education & Experience
- Bachelor’s degree in Computer Science, Data Engineering, Statistics, Mathematics, or a related field
- Proven experience working with Microsoft Fabric
- Strong SQL skills, including writing complex queries
- Experience with Git version control within Azure DevOps
- Experience building data pipelines within Fabric (Lakehouse → Warehouse workflows)
- Understanding of data governance and security best practices
Technical Skills
- Proficiency in Python, Java, or Scala
- Experience with Azure or similar cloud platforms
- Familiarity with CI/CD pipelines and automation tools
Salary: $110,000–$120,000