What are the responsibilities and job description for the Technology System/Platform Engineer position at magnit-xcelenergy?
Position Overview
We are seeking a Platform Engineer with a strong Data Engineering focus to support a large-scale granular load forecasting initiative. This role will be instrumental in building and maintaining the data infrastructure and pipelines that power forecasting and analytics across the organization.
You will work within a highly collaborative environment, partnering with IT, data analysts, and business stakeholders to enable reliable, scalable data ingestion and processing, particularly from non-standard and non-API sources.
This role combines hands-on engineering, cross-team coordination, and platform ownership, with a strong emphasis on Databricks and ETL pipeline development.
Key Responsibilities
Design, build, and maintain data pipelines and ingestion frameworks in Databricks
Develop and manage ETL/ELT workflows to support forecasting datasets
Work with cross-functional teams to ingest non-standard data sources (e.g., reports, manual data inputs, legacy systems)
Partner with IT to ensure alignment with data governance, security, and platform standards
Serve as a technical liaison between engineering, IT, and business teams
Support data discovery efforts by translating business inputs into structured pipeline requirements
Perform data cleanup, validation, and quality assurance to ensure integrity of forecasting data
Manage code deployment, version control (Git), and Databricks Asset Bundles
Monitor and troubleshoot pipelines and platform performance across dev, test, and production environments
Contribute to infrastructure automation, deployment pipelines, and platform optimization (cloud/on-prem hybrid)
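The cleanup, validation, and quality-assurance responsibilities above can be sketched in a few lines of Python. This is an illustrative example only, not part of the role description: the field names (`meter_id`, `load_kw`) and the rules are hypothetical stand-ins for the kind of checks a forecasting dataset would need.

```python
# Minimal sketch of a row-level data-quality check, assuming hypothetical
# fields meter_id (required) and load_kw (numeric, non-negative).

def validate_row(row: dict) -> list[str]:
    """Return a list of data-quality issues found in one ingested row."""
    issues = []
    if not row.get("meter_id"):
        issues.append("missing meter_id")
    try:
        if float(row.get("load_kw", "")) < 0:
            issues.append("negative load_kw")
    except ValueError:
        issues.append("non-numeric load_kw")
    return issues

def clean_batch(rows: list[dict]):
    """Split a batch into valid rows and rejected rows with reasons."""
    valid, rejected = [], []
    for row in rows:
        problems = validate_row(row)
        if problems:
            rejected.append((row, problems))
        else:
            valid.append(row)
    return valid, rejected
```

In practice the same split (accepted rows forward, rejected rows quarantined with reasons) is what keeps bad source data out of downstream forecasting tables.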
Required Qualifications
5–7 years of experience in Data Engineering, Platform Engineering, or DevOps
Strong hands-on experience with Databricks (core platform focus)
Proven experience building ETL/ELT pipelines and data workflows
Proficiency in Python and SQL
Experience working with data ingestion from non-standard or legacy sources
Strong understanding of data quality, validation, and cleanup processes
Experience with Git, CI/CD, and deployment pipelines
Solid knowledge of enterprise data architecture, scalability, and security principles
Excellent communication and stakeholder management skills
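The Python, SQL, and ETL/ELT qualifications above can be illustrated with a minimal extract-transform-load sketch. It is a toy example, not anything specific to this role: the CSV layout, table name, and unit conversion are hypothetical, and SQLite stands in for whatever warehouse the platform actually uses.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; a real source would arrive as files or exports.
RAW_CSV = """site,reading_kwh
north,100.5
south,87.25
"""

def run_etl(raw_csv: str, conn: sqlite3.Connection) -> int:
    """Extract rows from CSV text, transform units, load via SQL."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings (site TEXT, reading_mwh REAL)"
    )
    rows = csv.DictReader(io.StringIO(raw_csv))
    # Transform step: convert kWh to MWh before loading.
    records = [(r["site"], float(r["reading_kwh"]) / 1000.0) for r in rows]
    conn.executemany("INSERT INTO readings VALUES (?, ?)", records)
    conn.commit()
    return len(records)
```

The same extract/transform/load shape carries over to Databricks, where the load target would typically be a managed table rather than a local database.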
Preferred Qualifications
Experience supporting data migration or conversion efforts (e.g., SAP IS-U or similar systems)
Familiarity with data governance and metadata management frameworks
Exposure to integration patterns (API, batch, middleware platforms)
Experience working with data validation and profiling tools
Basic exposure to ML pipeline development (not a primary focus)
Experience in utility, energy, or forecasting domains (nice to have)
Key Skills & Competencies
Databricks Expertise – primary platform ownership and development
Data Pipeline Development – strong foundational engineering skills
Data Integration – especially across ambiguous or non-technical sources
Critical Thinking – ability to operate in ambiguous, discovery-heavy environments
Collaboration & Communication – working across IT and business teams
Problem Solving – troubleshooting across data and platform layers
What Success Looks Like
You can take loosely defined data inputs and turn them into structured, reliable pipelines
You effectively bridge the gap between business users and technical systems
You ensure data entering the platform is accurate, validated, and usable for forecasting
You become a trusted partner to IT and analytics teams in a high-visibility capital project
Interview Process
Single-round panel interview (2–3 interviewers)
Focus on technical depth, real-world pipeline experience, and stakeholder collaboration
Nice-to-Know Context
This is part of a large capital project with high visibility
The environment is highly collaborative and less rigid than traditional IT structures
Strong emphasis on coordination, communication, and practical engineering execution