What are the responsibilities and job description for the Big Data SPARK Engineer ETL position at Jobs via Dice?
Dice is the leading career destination for tech experts at every stage of their careers. Our client, SANS, is seeking the following. Apply via Dice today!
Must be able to interview in-person and work 3 days a week in Alpharetta, GA
Please do not send your resume if you are not in the Atlanta, GA area, as an in-person interview is a MUST.
Big Data SPARK Engineer
Primary skills: Python, Big Data & Apache Spark
Job Description
Role Name: Big Data Engineer
Position Title: Consultant
Location: Alpharetta, GA
Position Description:
This position is for a Big Data Engineer on *** Wealth Management's Framework CoE team, based at ***'s Alpharetta or New York offices.
The CoE team is responsible for defining and governing the data platforms.
We are looking for colleagues with a strong sense of ownership and the ability to drive solutions.
The role is primarily responsible for automating existing processes and bringing new ideas and innovation.
The candidate is expected to code, conduct code reviews, and test the framework as needed, along with participating in application architecture, design, and other phases of the automation effort.
The ideal candidate will be a self-motivated team player committed to delivering on time and able to work with minimal supervision.
Responsibilities
- Design & Develop new automation framework for ETL processing
- Support existing framework and become technical point of contact for all related teams
- Enhance existing ETL automation framework as per user requirements
- Performance tuning of spark, snowflake ETL jobs
- New technology POC and suitability analysis for Cloud migration
- Process optimization with the help of automation and new utility development
- Work in collaboration for any issues and new features
- Support any batch issue
- Support application team teams with any queries Required Skills
- 7 years of data engineering experience
- Must have strong UNIX shell and Python scripting knowledge
- Must be strong in Spark
- Must have strong knowledge of SQL
- Hands-on knowledge of how HDFS, Hive, Impala, and Spark work
- Strong logical reasoning capabilities
- Should have working knowledge of GitHub, DevOps, CI/CD, and enterprise code-management tools
- Strong collaboration and communication skills
- Must possess strong team-player skills and should have excellent written and verbal communication skills
- Ability to create and maintain a positive environment of shared success
- Ability to execute and prioritize tasks and resolve issues without aid from a direct manager or project sponsor
- Good to have: working experience with Snowflake and any data integration tool, e.g., Informatica Cloud
Desired Skills
- Snowflake, Azure, or AWS (any cloud)
- IDMC or any ETL tool