What are the responsibilities and job description for the Integration Engineer position at Covetus?
Integration Engineer Opportunity
We are seeking an experienced Integration Engineer with expertise in Airflow, Python, PySpark, SQL, Databricks, and data warehousing for enterprise-level systems.
Key Responsibilities:
• Design and develop efficient data pipelines that ingest, transform, and load data from a variety of sources, ensuring the data delivered is reliable and of high quality (an illustrative Airflow orchestration sketch follows this list).
• Develop and deploy Spark programs in the Databricks environment to process and analyze large volumes of data, leveraging Delta Lake and data warehousing concepts (see the PySpark batch sketch after this list).
• Apply Python and SQL programming skills to build optimized solutions for data integration, cloud-based platforms, and dimensional data modeling.
• Collaborate with business users and analysts to understand requirements and deliver tailored solutions.
• Work with event-based/streaming technologies to ingest and process structured, semi-structured, and unstructured data (see the streaming sketch after this list).
• Optimize Databricks job performance and scalability for big data workloads, troubleshoot issues, and implement best practices for data management, security, and governance.
• Contribute to the design and development of Enterprise Data Warehouse solutions, performing code reviews to ensure optimal execution patterns and adherence to established standards.
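To give a flavor of the orchestration work, here is a minimal sketch of a daily ingest-transform-load pipeline using the Airflow 2.x TaskFlow API. The DAG name, task bodies, and schedule are hypothetical placeholders rather than Covetus specifics (the `schedule=` parameter assumes Airflow 2.4+); a real pipeline would replace the stubs with actual source, transformation, and warehouse logic.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_pipeline():
    """Hypothetical ingest -> transform -> load pipeline."""

    @task
    def extract():
        # Placeholder: pull raw records from a source system.
        return [{"order_id": 1, "amount": 100.0}]

    @task
    def transform(records):
        # Placeholder: apply cleansing / business rules.
        return [r for r in records if r["amount"] > 0]

    @task
    def load(records):
        # Placeholder: write the cleaned records to the warehouse.
        print(f"Loading {len(records)} records")

    load(transform(extract()))


orders_pipeline()
```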
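The Databricks/Delta Lake side of the role might resemble the following batch sketch. The input path and target table name are assumptions for illustration only; on Databricks a `spark` session is provided automatically, and Delta is the default table format there.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks, `spark` already exists; this line is for standalone runs.
spark = SparkSession.builder.getOrCreate()

# Hypothetical raw landing path.
raw = spark.read.json("/mnt/raw/orders/")

cleaned = (
    raw
    .filter(F.col("amount") > 0)                       # basic data-quality rule
    .withColumn("ingested_at", F.current_timestamp())  # audit column
    .dropDuplicates(["order_id"])                      # keep loads idempotent
)

# Persist as a Delta table (hypothetical schema/table name).
(cleaned.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.orders_clean"))
```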
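For the event-based/streaming responsibility, a Structured Streaming sketch is shown below. The Kafka broker, topic, payload field, checkpoint path, and table name are all hypothetical, and `trigger(availableNow=True)` assumes Spark 3.3+; the pattern is simply ingest, parse the semi-structured payload, and append to a Delta table with checkpointing for fault-tolerant writes.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical Kafka source; adjust broker and topic for the real system.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers bytes; cast to string and pull a field from the JSON payload.
parsed = events.select(
    F.col("key").cast("string"),
    F.get_json_object(F.col("value").cast("string"), "$.event_type").alias("event_type"),
    F.col("timestamp"),
)

# Append into a Delta table; the checkpoint makes restarts fault-tolerant.
query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .trigger(availableNow=True)  # drain available data, then stop
    .toTable("analytics.events_bronze")
)
query.awaitTermination()
```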