What are the responsibilities and job description for the ETL Developer position at ENIN SYSTEMS?
Overview
We are seeking a highly skilled and detail-oriented ETL Developer to join our data management team. The ideal candidate will be responsible for designing, developing, and maintaining robust ETL pipelines to support enterprise data warehousing and analytics initiatives. This role offers an exciting opportunity to work with cutting-edge technologies such as AWS, Azure Data Lake, Hadoop, Spark, and various data integration tools. The ETL Developer will collaborate with cross-functional teams to ensure data quality, integrity, and accessibility for strategic decision-making.
Duties
- Design, develop, and optimize scalable ETL processes using tools such as Informatica and Talend, or custom scripts in Python and Bash/shell (a minimal sketch follows this list).
- Integrate data from diverse sources including Oracle, Microsoft SQL Server, Apache Hive, and Linked Data environments into centralized data warehouses.
- Manage large datasets utilizing Big Data technologies such as Hadoop and Spark to facilitate efficient data processing.
- Develop and maintain data pipelines on cloud platforms like AWS (Amazon Web Services) and Azure Data Lake.
- Collaborate with data analysts and business stakeholders to understand requirements and translate them into technical solutions.
- Implement RESTful APIs for seamless data exchange between systems.
- Support machine-learning model training and data analysis to improve data accuracy and predictive capabilities.
- Ensure adherence to best practices in database design, data modeling, and performance tuning within Agile project frameworks.
- Perform database maintenance tasks including schema updates, indexing, and troubleshooting performance issues.
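For illustration only, here is a minimal sketch of the kind of extract-transform-load step this role involves, written in Python with pandas and SQLAlchemy; the connection strings, table names, and columns are hypothetical placeholders, not ENIN SYSTEMS systems.

```python
# Minimal ETL sketch: extract from a source database, clean the batch,
# and load it into a warehouse table. All connection strings, table
# names, and columns below are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("mssql+pyodbc://user:pass@source_dsn")       # e.g. SQL Server
warehouse = create_engine("postgresql://user:pass@dw-host/warehouse")

# Extract: pull a batch of orders from the operational store.
orders = pd.read_sql("SELECT order_id, amount, order_ts FROM orders", source)

# Transform: drop rows with missing amounts and normalize timestamps to UTC.
orders = orders.dropna(subset=["amount"])
orders["order_ts"] = pd.to_datetime(orders["order_ts"], utc=True)

# Load: append the cleaned batch to the warehouse fact table.
orders.to_sql("fact_orders", warehouse, if_exists="append", index=False)
```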
Requirements
- Proven experience with ETL development using tools such as Informatica, Talend, or similar platforms.
- At least 10 years of experience.
- Strong proficiency in SQL programming with experience in Microsoft SQL Server, Oracle, Apache Hive, or similar databases.
- Solid understanding of Big Data ecosystems including Hadoop, Spark, and related technologies (a short PySpark sketch follows this list).
- Hands-on experience with cloud platforms like AWS (including AWS Glue) and Azure Data Lake.
- Programming skills in Python, VBA, or Bash/shell scripting for automation tasks.
- Familiarity with data warehouse concepts and database design principles.
- Knowledge of Looker or other BI tools for analytics reporting.
- Experience working within Agile methodologies for software development projects.
- Strong analysis skills to interpret complex datasets and derive actionable insights.
- Excellent communication skills to collaborate effectively across technical teams and business units.

This position is ideal for a motivated professional eager to leverage advanced data technologies in a dynamic environment while contributing to enterprise-wide analytics solutions.
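As a rough illustration of the Spark proficiency listed above, here is a short PySpark sketch of a batch aggregation; the input path, output path, and column names are assumptions for the example, not details from this posting.

```python
# PySpark sketch: read a Parquet dataset, aggregate, and write the result.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue-example").getOrCreate()

# Read a partitioned Parquet dataset from the data lake.
orders = spark.read.parquet("/data/lake/orders")

# Compute daily revenue per region.
daily = (
    orders.withColumn("day", F.to_date("order_ts"))
          .groupBy("region", "day")
          .agg(F.sum("amount").alias("revenue"))
)

# Write the aggregate back to the warehouse zone, replacing prior output.
daily.write.mode("overwrite").parquet("/data/warehouse/daily_revenue")

spark.stop()
```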
Job Type: Full-time
Pay: From $45.00 per hour
Work Location: In person