What are the responsibilities and job description for the Innovation and Automation Specialist position at Bespoke Technologies, Inc.?
BT-162 - Innovation and Automation Specialist
Skill Level: Mid
Location: Chantilly/Herndon
As a Data Engineer Specialist on the Innovation and Automation team, you will serve as a subject matter expert, blending deep data engineering expertise with a passion for automation. You will not build individual data pipelines for business users; instead, you will build the factory that produces them. Your mission is to design, develop, and implement the reusable frameworks, automated patterns, and core tooling that our data engineering teams will use to build their own pipelines faster, more reliably, and more consistently. This is a highly technical, hands-on role for a problem-solver who wants to act as a force multiplier for the entire data organization.
MUST HAVE AN ACTIVE TS OR TS/SCI CLEARANCE TO APPLY. Those without an active security clearance will not be considered.
Responsibilities
- Act as a technical expert on the design and implementation of automated data engineering solutions.
- Develop and maintain a library of standardized, reusable ETL/ELT pipeline templates using Python, SQL, and platforms like Databricks or Snowflake (see the template sketch after this list).
- Engineer and implement robust, automated data quality and testing frameworks (e.g., using tools like Great Expectations) that are embedded within the core pipeline templates.
- Contribute to the development of Infrastructure-as-Code (IaC) modules (Terraform) for the automated provisioning of data infrastructure.
- Enhance and optimize the DataOps (CI/CD for data) pipelines, ensuring seamless and reliable deployment of data workflows (a CI test sketch follows this list).
- Serve as an escalation point for the most complex data engineering and automation challenges, providing expert-level troubleshooting and guidance to other engineers.
- Mentor other data engineers on automation best practices, code standards, and the use of the frameworks you build.
- Research and prototype cutting-edge data engineering and automation technologies to drive continuous improvement.
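To make the "pipeline factory" idea concrete, here is a minimal, illustrative sketch of the kind of reusable template this role would own. Every name in it (PipelineTemplate, no_null_ids, the orders_daily example) is hypothetical, not an actual Bespoke Technologies framework, and the embedded quality hook is shown in plain Python rather than a specific tool like Great Expectations:

```python
"""Illustrative sketch only: a reusable pipeline template of the kind
this role would build. All names are hypothetical."""

from dataclasses import dataclass, field
from typing import Callable

import pandas as pd


@dataclass
class PipelineTemplate:
    """A standardized ETL skeleton: teams supply extract/transform/load
    callables, while quality checks and error handling stay centralized."""

    name: str
    extract: Callable[[], pd.DataFrame]
    transform: Callable[[pd.DataFrame], pd.DataFrame]
    load: Callable[[pd.DataFrame], None]
    # Quality checks run automatically between transform and load.
    checks: list[Callable[[pd.DataFrame], bool]] = field(default_factory=list)

    def run(self) -> None:
        df = self.extract()
        df = self.transform(df)
        failures = [c.__name__ for c in self.checks if not c(df)]
        if failures:
            raise ValueError(f"{self.name}: quality checks failed: {failures}")
        self.load(df)


# Example usage: a team instantiates the template instead of rewriting
# extract/validate/load boilerplate from scratch.
def no_null_ids(df: pd.DataFrame) -> bool:
    return df["id"].notna().all()


pipeline = PipelineTemplate(
    name="orders_daily",
    extract=lambda: pd.DataFrame({"id": [1, 2], "amount": [9.5, 3.0]}),
    transform=lambda df: df.assign(amount_cents=(df["amount"] * 100).astype(int)),
    load=lambda df: print(df.to_string(index=False)),
    checks=[no_null_ids],
)
pipeline.run()
```

The design point is the force-multiplier effect the description mentions: each team writes only its business logic, while every pipeline built from the template inherits the same validation and failure behavior.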
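Likewise, a hedged sketch of what "CI/CD for data" can look like in practice: a pytest suite, run on every commit, that exercises each registered template against small fixture data before deployment. The registry, template names, and fixture below are all hypothetical:

```python
"""Illustrative DataOps CI sketch: a pytest suite that validates each
registered pipeline transform before deployment. Names are hypothetical."""

import pandas as pd
import pytest

# Hypothetical registry of transforms the CI job discovers and validates.
TEMPLATE_REGISTRY = {
    "orders_daily": lambda df: df.assign(amount_cents=(df["amount"] * 100).astype(int)),
    "orders_dedup": lambda df: df.drop_duplicates(subset=["id"]),
}

FIXTURE = pd.DataFrame({"id": [1, 1, 2], "amount": [9.5, 9.5, 3.0]})


@pytest.mark.parametrize("name", sorted(TEMPLATE_REGISTRY))
def test_template_preserves_key_column(name: str) -> None:
    """Every registered transform must keep the required key column intact."""
    out = TEMPLATE_REGISTRY[name](FIXTURE.copy())
    assert "id" in out.columns
    assert out["id"].notna().all()
```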
Required Qualifications
- 5 years of hands-on experience in data engineering.
- Expert-level Python programming skills and advanced SQL proficiency.
- Proven, in-depth experience building and optimizing data pipelines in a cloud environment (AWS, Azure) on platforms like Databricks or Snowflake.
- Strong, hands-on experience with Infrastructure-as-Code (IaC) using Terraform.
- Demonstrable experience with CI/CD principles and tools (e.g., GitLab CI, Jenkins, GitHub Actions) applied to data workflows.
- Deep understanding of modern data architecture, data modeling, and software engineering best practices.
Preferred Qualifications
- Experience in a DevOps or Site Reliability Engineering (SRE) role.
- Direct experience developing and operationalizing a "pipeline factory" or similar framework.
- Familiarity with data orchestration tools (e.g., Airflow) and containerization (Docker, Kubernetes).
- Proven ability to diagnose and resolve complex performance, data quality, and system-level issues.