What are the responsibilities and job description for the ETL / Data Integration Engineer position at Aptino?
Role: ETL / Data Integration Engineer
Location: Columbus, OH (Onsite)
Job Overview:
We are seeking a skilled ETL / Data Integration Engineer to contribute to enterprise data initiatives focused on Process Intelligence and advanced analytics. The role involves designing and building scalable data pipelines, integrating data from multiple enterprise systems, and delivering reliable datasets for process mining platforms.
This position emphasizes strong expertise in SQL/PL-SQL, ETL development, and data modeling. Prior experience with Celonis or similar process mining tools is helpful but not mandatory.
Key Responsibilities:
- Architect, develop, and maintain ETL workflows and data structures that support ingestion into process intelligence environments (e.g., Celonis).
- Work closely with data analysts and stakeholders to gather requirements, define mappings, and translate them into technical ETL solutions.
- Establish and manage secure data connections using technologies such as JDBC, APIs, and Kafka-based integrations.
- Enhance existing ETL pipelines to improve efficiency, scalability, and reliability of data processing.
- Develop validation frameworks, data quality checks, and reconciliation processes to ensure accurate data flow across systems.
- Monitor and support production ETL jobs, resolving issues related to failures, latency, or performance degradation.
- Implement data governance standards, security controls, and best practices for ETL development and data handling.
- Collaborate with cross-functional teams including business stakeholders, project managers, analysts, and infrastructure teams.
- Participate actively in Agile delivery, including sprint planning, estimation, and iterative delivery cycles.
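To illustrate the kind of data quality and reconciliation work described above, here is a minimal sketch in Python using an in-memory SQLite database; the table names (`src_orders`, `tgt_orders`) and the `reconcile` helper are hypothetical examples, not part of any actual system at Aptino:

```python
import sqlite3

# Illustrative source-vs-target reconciliation check (hypothetical tables).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src_orders (id INTEGER PRIMARY KEY, amount REAL)")
cur.execute("CREATE TABLE tgt_orders (id INTEGER PRIMARY KEY, amount REAL)")
cur.executemany("INSERT INTO src_orders VALUES (?, ?)",
                [(1, 10.0), (2, 20.0), (3, 30.0)])
cur.executemany("INSERT INTO tgt_orders VALUES (?, ?)",
                [(1, 10.0), (2, 20.0)])

def reconcile(cur, src, tgt):
    """Return the row-count gap and the ids present in source but missing in target."""
    src_count = cur.execute(f"SELECT COUNT(*) FROM {src}").fetchone()[0]
    tgt_count = cur.execute(f"SELECT COUNT(*) FROM {tgt}").fetchone()[0]
    missing = [row[0] for row in cur.execute(
        f"SELECT id FROM {src} EXCEPT SELECT id FROM {tgt}")]
    return src_count - tgt_count, missing

diff, missing_ids = reconcile(cur, "src_orders", "tgt_orders")
print(diff, missing_ids)  # 1 [3]
```

In practice checks like this would run as part of the production ETL monitoring mentioned above, flagging failed loads before downstream process mining datasets are affected.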
Required Skills & Experience:
- Bachelor’s degree in Computer Science, Engineering, Information Technology, or equivalent practical experience.
- Minimum 7 years of hands-on experience in ETL development, data integration, or related engineering roles.
- Strong expertise in SQL and PL/SQL is mandatory.
- Proven ability to design and implement scalable ETL pipelines and robust data models.
- Experience working with large enterprise systems such as Oracle, PeopleSoft, Maximo, or similar platforms.
- Strong analytical mindset with the ability to work on complex and high-volume datasets.
- Excellent communication skills with proven experience working across multiple teams and stakeholders.
Nice to Have Skills:
- Exposure to Celonis or other process mining platforms.
- Hands-on experience with Apache Kafka or real-time data streaming architectures.
- Knowledge of Python or other scripting languages for automation and data transformation tasks.
- Familiarity with platform administration tasks such as access control, monitoring, and user governance.