What are the responsibilities and job description for the Data Engineer position at take2it?
Overview
The Data Engineer will partner with a wide range of business teams to implement analytical and data solutions that drive business value and customer satisfaction. This person will be responsible for collecting, storing, processing, analyzing, and modeling large datasets, as well as building applications and solutions that use that data. The primary focus will be on building, implementing, maintaining, monitoring, supporting, and integrating analytical and data solutions within the company’s architecture. This is a hybrid position, with onsite work required Monday through Wednesday and remote work available on Thursday and Friday.
Education & Certification Requirements
A bachelor's degree in computer and information science is required; a master's degree is preferred. Snowflake and Python certifications are desirable but not mandatory.
Clearance Requirements
None required.
Onsite Requirements
This role is a hybrid position based at the company's location, with employees working onsite Monday through Wednesday and remotely on Thursday and Friday.
Responsibilities
- Maintain and monitor analytics data warehouses and data platforms.
- Design, implement, test, deploy, and support scalable, secure data engineering solutions and pipelines to facilitate data and analytics projects.
- Integrate new data sources into the central data warehouse and support data movement to applications and affiliates.
- Develop, deploy, maintain, and support cloud and on-premises data solutions and web service infrastructure.
- Automate repetitive data management tasks through scalable and replicable code.
- Collaborate with project managers, data scientists, and business analysts to translate requirements into technical specifications.
- Ensure data infrastructure aligns with business needs, and identify gaps in the technical strategy where improvements can be made.
- Perform root cause analysis on data processes and optimize data pipelines using scripting languages and ETL tools.
- Build and manage data models, data warehouses, and APIs for extracting, transforming, and loading large, disconnected datasets.
Qualifications
- At least 3 years of data engineering or related IT experience.
- Proven experience working with Apache Spark, Hadoop, Java/Scala, Python, and AWS architecture.
- Strong SQL skills, including working with relational databases, query authoring, and data modeling using SQL Server and Snowflake.
- Experience building and optimizing data pipelines and ETL flows in cloud and on-premises environments using tools such as Snowpipe, Informatica, Airflow, and Kafka.
- Ability to manipulate, process, and extract value from large datasets.
- Skilled in building data models and managing data warehouses.
- Experience with Microsoft .NET technologies such as C# and VB.NET, including developing and deploying web and Windows applications.
Desired Skills
- Snowflake and Python certifications.
- Knowledge of API development for data integration.
- Familiarity with unstructured datasets and data analysis.
- Strong organizational skills and attention to detail.
- Excellent communication, interpersonal, and problem-solving abilities.