What are the responsibilities and job description for the Hybrid Senior Software Engineer - Data & BI position at EDI Specialists, Inc.?
ROLE SUMMARY
The Sr. Software Engineer – Data & BI is responsible for designing, developing, and optimizing scalable, cloud-based data pipelines and analytics solutions using Azure Data Factory, Databricks, and Lakehouse architecture. This role works extensively with business partners across the organization to understand their business processes, data needs, and operational challenges, and to translate those into high-quality, secure, and scalable data solutions.
Although this position is primarily focused on data engineering, it also supports business intelligence initiatives by building and supporting basic Power BI dashboards and ensuring data is structured and ready for reporting and decision-making.
CORE RESPONSIBILITIES
Data Ingestion & Storage (ADF, Databricks)
- Design, build, and maintain scalable ETL/ELT pipelines using Azure Data Factory and Databricks.
- Develop complex transformations and large‐scale processing using PySpark.
- Integrate data from on‐prem Oracle systems, Dynamics CRM, connected vehicle telematics, APIs, and other sources.
- Manage data layers within Azure Data Lake and Lakehouse structures.
- Develop robust data models, schemas, and curated datasets optimized for analytics and BI.
- Use Unity Catalog to maintain governance, security, and lineage.
- Optimize data storage, query patterns, transformation logic, and pipeline reliability.
- Collaborate with EDW and Data Science teams to ensure data readiness.
Business Partnership & Requirements
- Work closely and proactively with business partners to understand their processes, data challenges, and reporting requirements.
- Translate business needs into technical data specifications and scalable engineering solutions.
- Communicate technical concepts clearly to non‐technical stakeholders and provide recommendations based on best practices.
- Participate in cross‐functional workshops, requirement sessions, and design discussions.
Data Quality & Governance
- Harmonize data from disparate systems into unified, trusted datasets.
- Ensure data quality, consistency, and reliability.
- Contribute to enterprise data governance policies, processes, and frameworks.
Version Control & CI/CD (GitHub)
- Use GitHub for version control, code reviews, CI/CD pipelines, and release automation.
- Adhere to established coding standards, branching strategies, and deployment best practices.
Business Intelligence Support (Power BI)
- Build and maintain simple‐to‐intermediate Power BI dashboards and datasets.
- Assist BI developers with data model optimization and dataset certification.
- Partner with business teams to verify that data supports accurate reporting and KPIs.
- Contribute to Power BI governance (naming conventions, refresh cycles, usage optimization).
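Responsibilities such as "harmonize data from disparate systems" and "ensure data quality" typically come down to mapping each source's columns onto one unified schema and applying validation rules before publishing a curated dataset. A minimal sketch of that idea, written in plain Python for illustration (in this role it would be PySpark on Databricks, and every field and record below is hypothetical):

```python
from datetime import date

# Hypothetical records from two disparate sources (e.g., an on-prem
# Oracle extract and a CRM export) with mismatched field names.
oracle_rows = [
    {"CUST_ID": "1001", "CUST_NM": "Acme Corp", "CREATED_DT": "2024-01-15"},
]
crm_rows = [
    {"customerid": "1002", "name": "Globex", "createdon": "2024-03-02"},
]

def normalize_oracle(row):
    """Map Oracle-style columns onto the unified schema."""
    return {
        "customer_id": row["CUST_ID"],
        "customer_name": row["CUST_NM"].strip(),
        "created_date": date.fromisoformat(row["CREATED_DT"]),
        "source_system": "oracle",
    }

def normalize_crm(row):
    """Map CRM-style columns onto the same unified schema."""
    return {
        "customer_id": row["customerid"],
        "customer_name": row["name"].strip(),
        "created_date": date.fromisoformat(row["createdon"]),
        "source_system": "crm",
    }

def validate(row):
    """Basic data-quality rule: key identifying fields must be non-empty."""
    return bool(row["customer_id"]) and bool(row["customer_name"])

unified = [normalize_oracle(r) for r in oracle_rows] + \
          [normalize_crm(r) for r in crm_rows]
clean = [r for r in unified if validate(r)]
```

The `source_system` column preserves lineage back to each origin system, which is the same information Unity Catalog tracks at the catalog level.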
ADDITIONAL RESPONSIBILITIES
- Collaborate with cross‐functional teams (Data Science, EDW, Visualization, business units) to deliver end‐to‐end analytics solutions.
- Troubleshoot data pipelines, infrastructure challenges, and performance bottlenecks.
- Provide mentorship to junior engineers and foster a culture of collaboration and continuous improvement.
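Troubleshooting pipelines usually starts with separating transient failures (throttling, timeouts) from genuine logic errors. ADF and Databricks both ship built-in retry policies for this; the underlying idea can be sketched in plain Python (the activity and its failure behavior here are hypothetical):

```python
import time

def run_with_retry(activity, max_attempts=3, backoff_seconds=0.01):
    """Re-run a flaky pipeline activity with simple exponential backoff.

    Transient errors (modeled here as any raised exception) are retried;
    the final failure is re-raised once attempts are exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return activity()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(backoff_seconds * 2 ** (attempt - 1))

# Usage: a hypothetical copy activity that fails twice before succeeding.
calls = {"n": 0}
def flaky_copy_activity():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient throttling")
    return "copied"

result = run_with_retry(flaky_copy_activity)
```

Only transient errors warrant retries; a schema mismatch or bad transformation will fail identically every attempt and should surface immediately instead.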
REQUIRED SKILLS & PERSONAL QUALIFICATIONS
- Strong expertise with Azure Data Factory, Databricks, PySpark, and Azure Data Lake.
- Experience with Lakehouse architecture and modern cloud-based engineering practices.
- Proficiency in Python and SQL.
- Experience integrating diverse data sources across cloud and on‐prem systems.
- Solid understanding of BI concepts and interest in growing Power BI skills.
- Exceptional collaboration and communication abilities, including working directly with business partners, gathering requirements, and mapping business processes to technical solutions.
- Ability to explain complex data engineering concepts to non‐technical audiences.
- Strong problem‐solving skills and ability to work effectively in a cross‐functional environment.
PREFERRED / IDEAL EXPERIENCE DESIRED
- Beginner or intermediate experience creating Power BI dashboards.
- Knowledge of DAX, dataset modeling, and Power Query.
- Experience with Azure Synapse or Microsoft Fabric.
- Familiarity with IaC tools (ARM Templates, Terraform).
EDUCATION & EXPERIENCE REQUIREMENTS
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
- 4 to 8 years of experience in data engineering, analytics engineering, or cloud data roles.