
Databricks Engineer

System One
Madison, WI Contractor
POSTED ON 4/3/2026
AVAILABLE BEFORE 4/20/2026
Job Title: Databricks Engineer
Location: Washington, District of Columbia
Type: Contract
Contractor Work Model: Onsite

PROJECT DESCRIPTION:
The Enterprise Data Platform (EDP) empowers the Board to confidently use trusted, standardized, and well-governed data to drive insight and innovation.

BACKGROUND:
The Data Engineer designs, builds, and operates batch and streaming data pipelines and curated data products on the Enterprise Data Platform (EDP) using Databricks and Apache Spark. This role is hands-on in Python and R, enabling scalable engineering workflows while supporting analytics and research use cases. The engineer partners with product, architecture, governance, and mission teams to deliver secure, performant, observable pipelines and trusted datasets.

REQUIREMENTS:

The candidate shall possess the knowledge and skills set forth in the Technical Services BOA,
Section 3.6.4.2 for labor category Information Data Engineer.

The candidate shall also demonstrate the below knowledge and experience:

• Strong proficiency in Python and R for data engineering and analytical workflows.
• Hands-on experience with Databricks and Apache Spark, including Structured Streaming
(watermarking, stateful processing concepts, checkpointing, exactly-once/at-least-once tradeoffs).
• Strong SQL skills for transformation and validation.
• Experience building production-grade pipelines: idempotency, incremental loads, backfills, schema evolution, and error handling.
• Experience implementing data quality checks and validation for both batch and event streams (late arrivals, deduplication, event-time vs processing-time).
• Observability skills: logging/metrics/alerting, troubleshooting, and performance tuning (partitions, joins/shuffles, caching, file sizing).
• Proficiency with Git and CI/CD concepts for data pipelines, Databricks Asset Bundles, Databricks application deployments, and the Databricks CLI.
• Experience with lakehouse table formats and patterns (e.g., Delta tables) including compaction/optimization and lifecycle management.
• Familiarity with orchestration patterns (Databricks Workflows/Jobs) and dependency management.
• Experience with governance controls (catalog permissions, secure data access patterns, metadata/lineage expectations).
• Knowledge of message/event platforms and streaming ingestion patterns (e.g., Kafka/Kinesis equivalents) and sink patterns for serving layers.
• Experience collaborating with research/analytics stakeholders and translating analytical needs into engineered data products.
• Strong problem-solving and debugging across ingestion → transformation → serving.
• Clear technical communication and documentation discipline.
• Ability to work across product/architecture/governance teams in a regulated environment.
• Deep Delta Lake expertise including time travel, Change Data Feed (CDF), MERGE operations, CLONE, table constraints, and optimization techniques; understanding of liquid clustering and table maintenance best practices.
• Experience with Lakeflow/Delta Live Tables (DLT) including expectations framework, materialized vs. streaming table patterns, and declarative pipeline design.
• Proficiency with testing frameworks (pytest, Great Expectations, deequ) and test-driven development practices for production data pipelines.
• Data modeling skills including dimensional modeling (star/snowflake schemas), medallion architecture implementation, and slowly changing dimension (SCD) pattern implementation.
• AWS data services experience including S3 optimization, IAM role configuration for data access, and CloudWatch integration; understanding of cost optimization patterns.
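
Two of the pipeline patterns named above (idempotent incremental loads and watermark-based late-event handling) can be sketched in plain Python, with no Spark dependency, to make the concepts concrete. The function names `upsert` and `within_watermark` are illustrative only, not Databricks or Spark APIs:

```python
from datetime import datetime, timedelta

def upsert(target: dict, batch: list, key: str = "id") -> dict:
    """MERGE-style upsert keyed on a primary key.

    Replaying the same batch yields the same table state, which is
    what makes reprocessing after a failure (or a backfill) safe.
    """
    for row in batch:
        target[row[key]] = row          # insert or overwrite by key
    return target

def within_watermark(event_time: datetime, max_seen: datetime,
                     lag: timedelta) -> bool:
    """Accept an event only if it is no older than max_seen - lag,
    mirroring how a streaming watermark bounds late arrivals."""
    return event_time >= max_seen - lag

table = {}
batch = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
upsert(table, batch)
upsert(table, batch)                    # replay: no duplicates
print(len(table))                       # 2
```

In Databricks itself these patterns would typically be expressed as a Delta `MERGE INTO` and a Structured Streaming `withWatermark` clause; the sketch only shows the invariants a reviewer would look for.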

Education / Experience/Certifications/Accreditations
• Bachelor’s degree in a related field or equivalent experience.
• 10 years of data engineering experience, including production Spark-based batch pipelines and streaming implementations.
• Desirable Certifications:
◦ Databricks Certified Associate Developer for Apache Spark
◦ Databricks Certified Data Engineer Associate or Professional
◦ AWS Certified Developer Associate
◦ AWS Certified Data Engineer Associate
◦ AWS Certified Solutions Architect Associate

DELIVERABLES:
The Contractor shall deliver, but is not limited to, the following:
• Build and maintain end-to-end pipelines in Databricks using Spark (PySpark) for ingestion, transformation, and publication of curated datasets.
• Implement streaming / near-real-time patterns using Spark Structured Streaming (or equivalent), including state management, checkpointing, and recovery.
• Design incremental processing, partitioning strategies, and data layout/file sizing approaches to optimize performance and cost.
• Develop reusable pipeline components (common libraries, parameterized jobs, standardized patterns) to accelerate delivery across domains.
• Develop and operationalize workflows in Python and R for data preparation, analysis support, and research-ready extracts.
• Package code for repeatable execution (dependency management, environment reproducibility, job configuration).
• Implement data quality controls for batch and streaming (schema enforcement, completeness/validity checks, late/duplicate event handling, reconciliation).
• Build pipeline observability: logging, metrics, alerting, and dashboards; support on-call/incident response and root-cause analysis.
• Create runbooks and operational procedures for critical pipelines and streaming services.
• Ensure secure handling of sensitive data and apply least-privilege principles in pipeline design and execution.
• Contribute lineage notes, dataset definitions, and operational documentation to support reuse and auditability.
• Use version control and CI/CD practices for notebooks/code (code reviews, automated testing where feasible, deployment/promotion across environments).
• Collaborate with stakeholders to refine requirements, define SLAs, and deliver incrementally with measurable outcomes.
• Implement Lakeflow/Delta Live Tables (DLT) pipelines with data quality expectations, materialized views, and streaming tables; design pipeline DAGs and maintain declarative ETL workflows.
• Design and implement medallion architecture patterns (Bronze/Silver/Gold) with appropriate data quality gates, schema evolution strategies, and layer-specific optimization techniques (OPTIMIZE, VACUUM, Z-ordering/liquid clustering).
• Develop and maintain comprehensive testing strategies including unit tests for transformation logic, integration tests for end-to-end pipelines, and data quality validation using frameworks like Great Expectations or deequ.
• Perform data modeling and schema design for dimensional models, slowly changing dimensions (SCD), and analytical structures; collaborate on entity definitions and grain decisions.
• Contribute to Unity Catalog governance by registering datasets with metadata/descriptions/tags, implementing row/column-level security where required, and maintaining accurate lineage information.
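
The expectations-based quality gates listed among the deliverables can be illustrated with a minimal, framework-free sketch in plain Python: each rule is a named predicate, and rows failing any rule are quarantined rather than promoted to the Silver layer. All names here are illustrative; Lakeflow/Delta Live Tables expresses the same idea with `@dlt.expect` decorators:

```python
# Named data-quality rules, each a predicate over one record.
expectations = {
    "id_present":   lambda r: r.get("id") is not None,
    "amount_valid": lambda r: isinstance(r.get("amount"), (int, float))
                              and r["amount"] >= 0,
}

def apply_expectations(rows):
    """Split rows into (passed, quarantined-with-failure-reasons)."""
    passed, quarantined = [], []
    for row in rows:
        failed = [name for name, rule in expectations.items()
                  if not rule(row)]
        if failed:
            quarantined.append({"row": row, "failed": failed})
        else:
            passed.append(row)
    return passed, quarantined

bronze = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": 5.0},   # fails id_present
    {"id": 3, "amount": -2},       # fails amount_valid
]
silver, bad = apply_expectations(bronze)
print(len(silver), len(bad))       # 1 2
```

Quarantining with the failure reason attached (rather than dropping bad rows silently) is what makes the reconciliation and root-cause-analysis duties above tractable in operation.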

PLACE OF PERFORMANCE:
On-site at FRB locations, Washington, DC

CITIZEN STATUS: US Citizenship Required
INTERVIEW: Selected candidates will participate in a phone screening. Those who pass the phone screening may be invited to an in-person interview. Video conferencing tools (e.g., MS Teams or WebEx) may be used in accordance with agency guidelines.

System One, and its subsidiaries including Joulé, ALTA IT Services, and Mountain Ltd., are leaders in delivering outsourced services and workforce solutions across North America. We help clients get work done more efficiently and economically, without compromising quality. System One not only serves as a valued partner for our clients, but we offer eligible employees health and welfare benefits coverage options including medical, dental, vision, spending accounts, life insurance, voluntary plans, as well as participation in a 401(k) plan.

System One is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, age, national origin, disability, family care or medical leave status, genetic information, veteran status, marital status, or any other characteristic protected by applicable federal, state, or local law.


Ref: #851-Rockville-S1

Hourly Wage Estimation for Databricks Engineer in Madison, WI
$77.00 to $92.00