What are the responsibilities and job description for the Data Software Engineer position at Adaptive Solutions Group?
Our top client in the transportation/travel industry is searching for a Data Software Engineer to join their team! This is a unique position, as you will be responsible for creating workflows to ingest data into our client's Enterprise Data Lake and providing initial Tier 1 support for Databricks workflows, Spark streaming applications, and Azure Function applications within their Cloud Data Engineering ecosystem. You will also collaborate with other engineers to develop new data pipelines. If you have a passion for engineering, working with massive amounts of data, and being a part of a shared service organization that supports multiple engineering teams, this is the role for you!
100% remote position
*Candidates MUST currently live in the Greater St. Louis area (relocation is not an option)
1-year contract with benefits available while on contract
**Cannot work Corp-to-Corp (C2C) or through a 3rd party**
Required:
- Must be presently authorized to work in the U.S. without a requirement for work authorization sponsorship by our company for this position now or in the future
- Must reside in St. Louis, Missouri, or the immediate surrounding area
- Must have 3 or more years’ experience in full lifecycle development, end-to-end testing, and data validation
- Apache Spark fundamentals: understanding of Spark architecture, distributed computing, DataFrames, and Spark SQL
- Experience with relational databases such as Oracle, SQL Server, Postgres, or Teradata
- Hands-on experience supporting applications deployed on Azure services such as Spark, Azure Functions, and Azure SQL
- Experience writing scalable data pipelines and transformations (e.g., in Python, Scala, or Java)
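To give a concrete sense of the "data pipelines and transformations" work the role calls for, here is a minimal, hypothetical sketch in plain Python of a validate-and-normalize step of the kind that runs before records land in a data lake. All field names and cleaning rules below are invented for illustration; a production version would typically run as a Spark or Databricks job.

```python
# Hypothetical ingest-and-transform step: validate raw records and
# normalize them before loading downstream. Schema is illustrative only.

def clean_records(raw_records):
    """Yield normalized records, silently dropping malformed rows."""
    for rec in raw_records:
        # Skip rows missing required fields
        if not rec.get("trip_id") or rec.get("fare") is None:
            continue
        yield {
            "trip_id": rec["trip_id"].strip().upper(),  # canonical key form
            "fare": round(float(rec["fare"]), 2),       # coerce to numeric
            "city": rec.get("city", "unknown").lower(), # default missing city
        }

raw = [
    {"trip_id": " abc123 ", "fare": "12.5", "city": "STL"},
    {"trip_id": "", "fare": "9.00"},     # dropped: empty trip_id
    {"trip_id": "xyz789", "fare": 7.1},  # city defaults to "unknown"
]
cleaned = list(clean_records(raw))
```

Writing the transform as a generator keeps it streaming-friendly, which mirrors how the same logic would be expressed over a Spark DataFrame at scale.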
Preferred:
- Bachelor's degree in Computer Science, Computer Information Systems, Management Information Systems, or a related field
- Experience with Big Data Solutions and NoSQL DBMSs
- Experience working with ADLS Gen2 or other blob storage solutions
- Experience with change data capture (CDC) and identifying data deltas for incremental data processing (e.g., SCD Type 1 / SCD Type 2)
- Experience with API development and support of Azure functions
- Experience with Azure, AWS, or other Public Cloud environments
- Experience with Microsoft Azure platform technologies like Event Hubs, Data Factory, Delta Lake, Cosmos DB, or DevOps
- Prior experience leading large-scale projects
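For context on the SCD pattern mentioned in the preferred skills above, here is a minimal, hypothetical sketch of SCD Type 2 logic in plain Python. In this role the same pattern would normally be implemented over Delta Lake tables in Spark; all names and the row layout below are invented for illustration.

```python
# Hypothetical SCD Type 2 upsert: when a tracked attribute changes, the
# current dimension row is closed out and a new current row is appended,
# so full history is preserved. Row fields are illustrative only.
from datetime import date

def scd2_upsert(dimension, incoming, today):
    """Apply SCD Type 2 changes from `incoming` to `dimension`.

    Each dimension row is a dict with keys: key, value, valid_from,
    valid_to, is_current. Mutates matched rows in place (sketch only).
    """
    updated = list(dimension)
    for new in incoming:
        current = next(
            (r for r in updated if r["key"] == new["key"] and r["is_current"]),
            None,
        )
        if current is None:
            # Brand-new key: insert as the current row
            updated.append({**new, "valid_from": today,
                            "valid_to": None, "is_current": True})
        elif current["value"] != new["value"]:
            # Changed value: expire the old row, append a new current row
            current["valid_to"] = today
            current["is_current"] = False
            updated.append({**new, "valid_from": today,
                            "valid_to": None, "is_current": True})
        # Unchanged rows are left as-is (a CDC/delta feed would usually
        # have filtered them out upstream)
    return updated

dim = [{"key": "C1", "value": "St. Louis", "valid_from": date(2023, 1, 1),
        "valid_to": None, "is_current": True}]
result = scd2_upsert(dim, [{"key": "C1", "value": "Chicago"}], date(2024, 6, 1))
```

SCD Type 1 would instead overwrite `value` in place with no history; Type 2, as sketched here, trades extra rows for a full audit trail.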