What are the responsibilities and job description for the Data Engineer position at Jobs via eFinancialCareers?
The Data Engineer will be responsible for designing, building, and optimizing scalable data solutions to support a wide range of business needs. This role requires a strong ability to work both independently and collaboratively in a fast-paced, agile environment. The ideal candidate will engage with cross-functional teams to gather data requirements, propose enhancements to existing data pipelines and structures, and ensure the reliability and efficiency of data processes.
Benefits Summary
- Flexible and hybrid work arrangements
- Paid time off/Paid company holidays
- Medical plan options/prescription drug plan
- Dental plan/vision plan options
- Flexible spending and health savings accounts
- 401(k) retirement savings plan with a Roth savings option and company matching contributions
- Educational assistance program
Responsibilities
- Develop and maintain scripts and tools using Python, PowerShell, and R
- Design, write, and optimize SQL queries for performance and scalability
- Help modernize 'legacy' solutions to realign with our current code base and tech stack
- Assist in the redevelopment, improvement, and ongoing maintenance of existing data, analytics, and reporting solutions
- Ensure accurate and efficient data integration of diverse data sources and formats
- Enhance and support database functions and procedures
- Optimize data access and data processing workflows for performance, scalability, and efficiency
- Implement data quality checks and validations to ensure the accuracy, consistency, and completeness of data
- Identify and resolve performance bottlenecks, investigate and troubleshoot data-related issues, and provide solutions to address defects
- Seamlessly transition between production support and development tasks based on business needs
- Deploy and manage code utilizing engineering best practices in non-prod and prod environments
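Several of the responsibilities above center on data-quality checks and validation. As a purely illustrative sketch (the function, field names, and validation rules below are hypothetical examples, not part of this employer's actual codebase), a batch-level quality check of this kind might look like:

```python
# Hypothetical data-quality check: validates completeness and basic
# consistency of a batch of records before loading downstream.

def validate_records(records, required_fields):
    """Split records into valid and rejected lists, with a reason per rejection."""
    valid, rejected = [], []
    for row in records:
        # Completeness: every required field must be present and non-empty.
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            rejected.append((row, f"missing fields: {missing}"))
        # Consistency: an example business rule (amounts must be non-negative).
        elif row.get("amount") is not None and row["amount"] < 0:
            rejected.append((row, "negative amount"))
        else:
            valid.append(row)
    return valid, rejected

batch = [
    {"id": 1, "amount": 100.0},
    {"id": 2, "amount": None},   # incomplete: missing amount
    {"id": 3, "amount": -5.0},   # inconsistent: negative amount
]
valid, rejected = validate_records(batch, required_fields=["id", "amount"])
```

In practice, checks like this would typically run inside a pipeline step (e.g. an ADF or Spark job), with rejected rows routed to a quarantine table for investigation rather than silently dropped.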
Qualifications
- Bachelor's degree in computer science, data science, software engineering, information systems, or a related quantitative field
- Minimum 4 years of experience working as a Python Developer, Solutions Engineer, Data Engineer, or in similar roles
- 4 years of solid continuous experience in Python
- 3 years of solid experience writing SQL and PL/SQL code
- 3 years of experience working with relational databases (solid understanding of Oracle preferred)
- 2 years of experience scripting with PowerShell
- Experience programming in R
- Experience with web application frameworks including Shiny, Dash, Streamlit
- Experience with CI/CD utilizing git/Azure DevOps
- Knowledge of alternative storage formats including Parquet/Arrow/Avro
- Ability to collaborate within and across teams of different technical knowledge to support delivery of solutions
- Expert problem-solving and debugging skills, with the ability to trace issues to their source in unfamiliar code or systems
Preferred Technologies
- ETL/ELT tools: Spark, Kafka, Azure Data Factory (ADF)
- NoSQL databases: MongoDB, Cosmos DB, DocumentDB, or similar
- Languages: SAS, Java, Scala, .NET