What are the responsibilities and job description for the Senior Azure Data Engineer position at Interactive Resources - iR?
Senior Azure Data Engineer (Financial Services | Data as a Product)
Direct Hire | Full-Time
Build data products that drive real business impact.
We’re seeking a Senior Azure Data Engineer to help design and scale a modern cloud data ecosystem within a financial services environment. This role is ideal for someone passionate about data as a product, delivering high-quality, governed, and reusable datasets that power critical business decisions.
You’ll play a key role in building scalable pipelines, enhancing data architecture, and enabling secure, compliant data solutions across the enterprise.
Responsibilities:
- Design, build, and optimize scalable data pipelines within the Azure cloud ecosystem
- Contribute to a data-as-a-product mindset, enabling trusted, discoverable, and reusable datasets
- Support modernization of data architecture with a focus on performance, scalability, and governance
- Develop and optimize ETL/ELT pipelines using Python, PySpark, and SQL
- Design and maintain high-performance data models for analytics and reporting
- Implement and support data governance frameworks, including RBAC, lineage, and secure data access
- Build and maintain data ingestion pipelines across APIs, databases, files, and streaming sources
- Utilize orchestration tools such as Apache Airflow for workflow automation
- Enable downstream consumption for BI, analytics, data science, and application use cases
- Write optimized SQL/T-SQL queries, stored procedures, and curated datasets
- Support CI/CD automation, testing, and deployment best practices for data pipelines
- Collaborate with cross-functional teams to align data solutions with business needs
Requirements:
- 5–8 years of experience in data engineering and modern data platforms
- Strong experience within the Azure data ecosystem (e.g., Data Factory, Synapse, ADLS)
- Experience working in financial services or other regulated environments
- Strong expertise in:
  - Python, PySpark, SQL
  - ETL/ELT development and optimization
  - Data modeling and distributed data systems
- Hands-on experience with:
  - Data governance, lineage, and metadata management concepts/tools
  - Workflow orchestration (Airflow or similar)
  - API-based ingestion and automation
- Experience building scalable, reusable data products
- Ability to work independently while collaborating across teams
Why Join:
- Opportunity to contribute to a growing data-as-a-product culture
- Work on modern cloud-based data platforms and architecture
- Collaborative, engineering-focused environment
- Competitive compensation, benefits, and long-term growth opportunities
Salary: $130,000–$140,000