What are the responsibilities and job description for the Data Engineer Technical Lead position at Brooksource?
Technical Lead – Data Engineering
Charlotte, NC
Hybrid Onsite: 3 days onsite, 2 days remote
W-2 with Brooksource
Applicants must be authorized to work in the United States. We are unable to sponsor employment visas at this time.
Role Overview
We are seeking a Technical Lead – Data Engineering to provide hands-on technical leadership across our data engineering platform. This role combines deep technical expertise with collaborative leadership, architectural stewardship, and delivery accountability.
In this role, you will partner closely with existing technical leadership within the Data & Analytics organization, including other technical leads, architects, and senior engineers, to design and evolve scalable data solutions. You will help drive consistency, share best practices, and collectively shape the technical direction of the platform.
The role remains highly hands-on, with significant contributions in AWS Glue (PySpark), Python, Kafka, AWS DMS, AWS Lambda, Amazon Redshift, and Amazon Aurora PostgreSQL, while serving as a key technical voice within the broader data engineering leadership group.
Key Responsibilities
Technical Leadership & Collaboration
- Act as a hands-on technical leader for the Data Engineering area, collaborating with existing technical leads and architects to define and align on platform standards and architectural direction.
- Partner with peer technical leaders to ensure consistency across data ingestion, transformation, streaming, and consumption patterns.
- Contribute to shared technical roadmaps, design reviews, and architecture forums.
- Provide technical guidance and escalation support while respecting shared ownership and collective decision-making.
Architecture & Engineering Standards
- Help design, evolve, and maintain scalable, fault-tolerant architectures for batch and streaming data pipelines.
- Establish and reinforce engineering standards for:
  - Pipeline design and reliability
  - Code quality and testing
  - Performance and cost optimization
  - Security and operational best practices
- Drive alignment on ingestion strategies, CDC patterns, streaming vs. batch tradeoffs, and data modeling approaches.
Hands-On Data Engineering
- Lead development of complex ETL/ELT pipelines using AWS Glue (PySpark) and Python.
- Build, optimize, and support Kafka-based streaming pipelines, including topic design, partitioning strategies, and consumer patterns.
- Implement and tune AWS DMS pipelines for full-load and CDC ingestion.
- Develop AWS Lambda functions for orchestration, automation, monitoring, and event-driven workflows.
- Design and optimize schemas, queries, and performance for Amazon Aurora PostgreSQL.
Platform Reliability, Performance & Cost
- Share ownership of platform reliability, scalability, and performance with other technical leaders.
- Identify and resolve bottlenecks in Spark jobs, streaming consumers, and database workloads.
- Ensure robust error handling, retries, idempotency, and recovery strategies.
- Partner with cloud/platform teams on cost optimization, capacity planning, and infrastructure improvements.
Data Quality & Trust
- Partner closely with Data Quality Engineers and peer technical leaders to ensure quality validations are embedded throughout pipelines.
- Ensure pipelines support reconciliation, auditability, and observability.
- Enforce data freshness, completeness, and accuracy SLAs for downstream consumers, including Qlik.
Mentorship & Team Enablement
- Mentor data engineers through code reviews, design sessions, and technical coaching.
- Collaborate with other technical leaders to raise the overall engineering maturity of the team.
- Contribute to hiring, onboarding, and skills development efforts.
- Promote knowledge sharing through documentation, demos, and internal forums.
Stakeholder & Cross-Team Engagement
- Work closely with analytics, BI, and business partners to translate requirements into scalable data solutions.
- Partner with BI teams to ensure curated datasets are analytics-ready for Qlik.
- Communicate technical decisions, risks, and tradeoffs clearly and collaboratively.
Required Qualifications
- 8 years of experience in data engineering or backend data platform development.
- 3 years in a senior or technical leadership role with shared architectural ownership.
- Expert-level experience with AWS Glue (PySpark), Python, and distributed data processing.
- Experience with Kafka and streaming architectures.
- Hands-on experience implementing and supporting AWS DMS full-load and CDC pipelines.
- Advanced SQL skills and experience with Amazon Aurora PostgreSQL.
- Experience with AWS Lambda and serverless/event-driven architectures.
- Strong understanding of data modeling and batch vs. streaming design patterns.
- Demonstrated ability to collaborate effectively with peer technical leaders.
- Excellent communication and documentation skills.
Preferred Qualifications
- Familiarity with orchestration tools (Step Functions, Airflow).
- Knowledge of data governance, lineage, and cataloging practices.
- Experience optimizing Spark workloads for performance and cost.
- Experience in enterprise or regulated environments.
What Success Looks Like
- Strong partnership and alignment with existing technical leadership.
- Data engineering standards are well-defined, shared, and consistently applied.
- Pipelines are scalable, reliable, and meet performance and quality SLAs.
- Engineers are well-supported, mentored, and productive.
- Ramped up on product development and implementation within a 30-45 day timeframe.