What are the responsibilities and job description for the Data Engineer position at Burtch Works?
Location: Chicago, IL (Hybrid)
About The Company
We are one of the world's largest and most prestigious full-service law firms, with a global footprint spanning more than 20 offices across the Americas, Europe, and Asia Pacific. With a history stretching back over 160 years, we have built a reputation for delivering sophisticated legal counsel to Fortune 500 companies, financial institutions, governments, and leading organizations across virtually every major industry and practice area, from corporate transactions and litigation to regulatory matters, finance, and restructuring. We are committed to a culture of excellence, collaboration, and inclusion, where talented professionals at every level are supported, challenged, and empowered to do their best work.
Job Summary
We are looking for a Data Engineer who is eager to roll up their sleeves and help build the data foundation that powers analytics, business intelligence, and machine learning across the organization. This is a role for someone who genuinely enjoys the craft of data engineering: writing clean, efficient pipelines, designing schemas that stand up over time, and producing datasets that teams can actually rely on.
You will work alongside senior engineers and collaborate closely with cross-functional partners to take raw, complex data and turn it into well-structured, well-documented assets that drive smarter decisions and better products. This role reports to the Senior Manager of Data Engineering and offers a strong foundation for growth within a talented, mission-driven team.
Key Responsibilities
- Contribute to the development of end-to-end data solutions on Azure Databricks, including ETL and streaming pipelines built on Apache Spark, Delta Lake, and ADLS Gen2 to support scalable, reliable lakehouse architectures.
- Design and maintain data models and schemas optimized for analytics, reporting, and operational use cases, with attention to performance and downstream usability.
- Implement and iterate on Delta Lake / Lakehouse patterns across Bronze, Silver, and Gold layers, incorporating schema evolution and time travel as needed.
- Write well-crafted PySpark and Spark SQL transformations, with care for join optimization, partitioning strategies, caching, and shuffle management.
- Help build and sustain data quality frameworks including validation logic, monitoring coverage, and alerting so that data consumers can trust what they are working with.
- Work collaboratively with data architects, analysts, BI engineers, and product teams to keep data engineering priorities connected to real business needs.
- Contribute to CI/CD pipelines and engineering workflows supported by version control, linting, automated testing, security scanning, and observability tooling.
- Investigate and resolve pipeline and platform issues in Azure Databricks, minimizing disruption and restoring performance quickly when problems arise.
- Follow and help reinforce team coding standards, participate actively in code reviews, and document your work in ways that make the whole team more effective.
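To make the data-quality responsibility above concrete, here is a minimal sketch of a validation framework in plain Python. The names (`Rule`, `run_checks`) and the rules themselves are hypothetical, not from any particular library; in this role the same pattern would typically run over Spark DataFrames (e.g. via Delta Lake constraints or expectation checks) rather than Python dicts.

```python
# Minimal, illustrative data-quality check layer (hypothetical names).
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # returns True when the row passes

def run_checks(rows: List[dict], rules: List[Rule]) -> Tuple[List[dict], Dict[str, int]]:
    """Return rows that pass every rule, plus a violation count per rule."""
    violations = {r.name: 0 for r in rules}
    clean = []
    for row in rows:
        ok = True
        for r in rules:
            if not r.check(row):
                violations[r.name] += 1
                ok = False
        if ok:
            clean.append(row)
    return clean, violations

rules = [
    Rule("id_not_null", lambda r: r.get("id") is not None),
    Rule("amount_non_negative", lambda r: r.get("amount", 0) >= 0),
]
rows = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},
    {"id": 2, "amount": -1.0},
]
clean, violations = run_checks(rows, rules)
# Only the first row passes both rules; each bad row is counted per rule.
```

Wiring violation counts into monitoring and alerting, as the bullet above describes, is what lets downstream consumers trust the Gold-layer tables.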
Required Qualifications
- Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related technical field.
- Experience: At least 3 years of hands-on experience designing, building, and operating data solutions in a professional environment.
- Skills: Working knowledge of Databricks architecture and core components including Lakehouse, Delta Lake, Databricks SQL, Apache Spark Clusters, Unity Catalog, Workflows/Jobs, and Notebooks; proficiency in Python, SQL, and Apache Spark; proven experience building reusable, metadata-driven ingestion frameworks in Python and Scala; solid foundation in data modeling, schema design, and performance tuning for large-scale systems.
- Platform: Familiarity with cloud data platform components such as object storage, metadata/catalog services, and batch, streaming, and CDC ingestion and processing patterns.
- Collaboration: Experience working alongside AI and BI engineers to deliver polished, high-quality data products to business stakeholders.
- Other: Ability to communicate data findings clearly through visualizations, dashboards, and written documentation; strong analytical thinking and attention to detail; excellent communication skills across both technical and non-technical audiences.
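The "metadata-driven ingestion frameworks" called for above generally separate source descriptions (configuration) from loading logic, so new sources are onboarded by adding config rather than code. Below is a minimal sketch of that pattern in plain Python; the names (`register`, `ingest`) are hypothetical, and the stub loaders stand in for real reads such as `spark.read.csv(...)`.

```python
# Hypothetical metadata-driven ingestion dispatcher: a registry maps format
# names to loader functions, and each source is described by a config dict.
from typing import Callable, Dict, List

LOADERS: Dict[str, Callable[[dict], str]] = {}

def register(fmt: str):
    """Decorator that registers a loader for a given source format."""
    def wrap(fn: Callable[[dict], str]) -> Callable[[dict], str]:
        LOADERS[fmt] = fn
        return fn
    return wrap

@register("csv")
def load_csv(cfg: dict) -> str:
    # A real pipeline would call something like spark.read.csv(cfg["path"]).
    return f"csv:{cfg['path']}"

@register("json")
def load_json(cfg: dict) -> str:
    return f"json:{cfg['path']}"

def ingest(sources: List[dict]) -> List[str]:
    """Drive ingestion entirely from source metadata."""
    return [LOADERS[s["format"]](s) for s in sources]

sources = [
    {"name": "orders", "format": "csv", "path": "/bronze/orders"},
    {"name": "events", "format": "json", "path": "/bronze/events"},
]
results = ingest(sources)
```

The design choice here is the one the qualification implies: the `sources` metadata could live in a catalog table or config file, so the framework scales to new feeds without new pipeline code.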
Preferred Qualifications
- Hands-on experience building pipelines in an Azure Databricks environment, including integration with Azure DevOps, ADLS Gen2, Azure Key Vault, and Azure Data Factory.
- Familiarity with enterprise data modeling tools such as ERwin, with the ability to interpret and apply logical and physical data models to analytical and lakehouse contexts.
- Exposure to Infrastructure as Code (IaC) concepts and tooling.
- Experience working with regulated or sensitive data, including awareness of relevant controls and compliance considerations.
- Experience delivering within an Agile development model.
Salary: $129,000 - $144,000