What are the responsibilities and job description for the Data Architect position at Intellias?
Let’s breathe life into great tech ideas! With 3,000 people globally, Intellias is a company where benchmark technological solutions are born. Join us and play your part in digitalizing the world.
We are seeking an experienced Data Architect to lead the transformation of our data platform from a traditional Microsoft SQL Server-based architecture to a modern, scalable data platform built on Microsoft Fabric.
This role will be responsible for designing and implementing a Lakehouse architecture, enabling advanced analytics, and establishing best practices for data engineering, modeling, and governance.
Requirements:
- 5 years of experience in Data Architecture / Data Engineering roles
- Experience with Microsoft Fabric (Lakehouse, Warehouse, OneLake) or similar platforms (Databricks, Snowflake, Azure Synapse)
- Strong hands-on experience with Microsoft SQL Server: T-SQL, Data schema design, Stored Procedures, UDFs, Triggers, Constraints
- Proven experience designing and implementing modern data platforms (Lakehouse / Data Warehouse)
- Hands-on experience with PySpark, Spark SQL, and notebook-based development
- Strong understanding of ETL/ELT patterns and data pipeline design
- Experience with large-scale data processing and performance optimization
- Proficiency in data modeling (star schema, dimensional modeling)
- Solid understanding of distributed data processing concepts
- Experience working in Agile/Scrum environments
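The dimensional-modeling requirement above (star schema) can be illustrated with a minimal sketch. This is a conceptual example only; the table and column names (`fact_sales`, `dim_product`, `dim_date`) are hypothetical, and SQLite stands in here for the SQL Server / Fabric Warehouse engines named in the posting:

```python
import sqlite3

# Star schema sketch: one central fact table joined to dimension tables.
# All names below are illustrative, not taken from the job description.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT
);
CREATE TABLE dim_date (
    date_key      INTEGER PRIMARY KEY,  -- surrogate key, e.g. 20240115
    calendar_date TEXT,
    year          INTEGER
);
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    quantity    INTEGER,
    amount      REAL
);
""")

cur.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
cur.execute("INSERT INTO dim_date VALUES (20240115, '2024-01-15', 2024)")
cur.execute("INSERT INTO fact_sales VALUES (1, 20240115, 3, 29.97)")

# A typical dimensional query: aggregate the fact, slice by dimensions.
cur.execute("""
    SELECT p.category, d.year, SUM(f.amount) AS total_amount
    FROM fact_sales f
    JOIN dim_product p ON f.product_key = p.product_key
    JOIN dim_date    d ON f.date_key    = d.date_key
    GROUP BY p.category, d.year
""")
result = cur.fetchall()
print(result)
```

The design choice being tested for here is keeping facts (measurable events) separate from dimensions (descriptive context), so analytical queries reduce to joins from the fact table outward.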
Will be a plus:
- Background in migrating legacy systems to cloud-based data platforms
- Experience integrating C# applications with modern data platforms
- Familiarity with real-time / streaming data processing
- Knowledge of data governance frameworks and tools
Responsibilities:
- Lead the end-to-end migration from Microsoft SQL Server-based solutions (C#, T-SQL, UDFs, Stored Procedures, Triggers, Constraints) to a modern Microsoft Fabric architecture
- Design and implement scalable Lakehouse architecture including Bronze, Silver, and Gold layers
- Architect and oversee data pipelines:
  - Extraction of data from source systems
  - Ingestion into the Bronze layer using PySpark and Spark SQL
  - Transformation via PySpark / T-SQL notebooks
  - Data modeling and refinement into the Silver layer (T-SQL, UDFs, Stored Procedures)
  - Business-ready datasets in the Gold layer (T-SQL, curated models)
- Define data modeling standards for both Lakehouse and Warehouse layers
- Optimize performance of Spark and SQL workloads across Fabric
- Establish best practices for data governance, data quality, security, and access control
- Collaborate with engineering, analytics, and business teams to align data architecture with business goals
- Provide technical leadership and mentorship to data engineers and developers
- Evaluate and modernize legacy C# and SQL-based data processing logic
- Drive adoption of ELT patterns and modern data engineering practices
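The Bronze/Silver/Gold (medallion) flow in the responsibilities above can be sketched conceptually. On Microsoft Fabric this would run as PySpark / Spark SQL notebooks writing Delta tables to OneLake; plain Python is used here only to show the layering, and the record fields and cleaning rules are hypothetical:

```python
# Conceptual medallion (Bronze -> Silver -> Gold) flow in plain Python.
# Field names and validation rules are illustrative assumptions.

raw_events = [  # Bronze: data landed as-is from the source system
    {"order_id": "1", "amount": "10.50", "country": " us "},
    {"order_id": "2", "amount": "bad",   "country": "DE"},
    {"order_id": "3", "amount": "7.25",  "country": "us"},
]

def to_silver(rows):
    """Silver: validated, typed, and standardized records."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # reject/quarantine malformed rows
        out.append({
            "order_id": int(r["order_id"]),
            "amount": amount,
            "country": r["country"].strip().upper(),
        })
    return out

def to_gold(rows):
    """Gold: a business-ready aggregate (revenue per country)."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(raw_events)
gold = to_gold(silver)
print(gold)
```

The point of the layering is that each stage has one job: Bronze preserves the raw source, Silver enforces types and standards, and Gold serves curated datasets to the business, which matches the ELT pattern the role is expected to drive.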