What are the responsibilities and job description for the Staff Software Data Engineer - Credit Karma position at Intuit?
About the Team
Our team owns the end-to-end data path at Credit Karma: from service emission to data-lake landing to consumer hydration. We build the frameworks, pipelines, and persistence layers that every product team depends on to move data reliably at scale. Our systems ingest hundreds of terabytes per day across Kafka, Pub/Sub, and Dataflow, persist to Spanner and BigQuery, and serve low-latency reads to real-time product experiences through our Unified Consumer Profile (UCP) platform.
This is a foundational engineering role — you will build the frameworks and infrastructure that other engineers across Credit Karma use to ship data-intensive features. If you care about developer experience, system reliability, and solving hard distributed systems problems at scale, this team is for you.
What You'll Do
Design, build, and maintain high-throughput, low-latency data frameworks used across Credit Karma's engineering organization, including ETL templates, persistence libraries, and streaming data pipelines
Develop and extend Scala-based microservices and frameworks built on Finagle, Akka Streams, and gRPC that process hundreds of terabytes of data daily
Build and optimize cloud-native data pipelines on Google Cloud Platform using Dataflow (Apache Beam), Pub/Sub, BigQuery, and Spanner
Own and evolve our Kafka-based streaming infrastructure, designing producers, consumers, and connectors that handle hundreds of terabytes of events per day with strict latency and durability guarantees (see the consumer sketch after this list)
Build persistence frameworks that provide a unified, type-safe API for reading and writing across Spanner, MySQL, and BigQuery (a minimal interface sketch follows this list)
Design and implement encryption, decryption, and fine-grained access control capabilities as reusable framework features, ensuring compliance with data governance requirements (see the field-encryption sketch after this list)
Create self-service developer tooling — CLI tools, templates, and onboarding automation — that reduces the time for other teams to adopt the data platform from weeks to hours
Drive technical design through architecture reviews and Technical Design Documents (TDDs), influencing decisions across the broader Data & AI organization
Participate in on-call rotations and build observability (dashboards, alerting, metrics) into every system you ship
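To make the streaming work concrete, here is a minimal sketch of the kind of Kafka consumer this role involves, written with Alpakka Kafka (akka-stream-kafka) on Akka 2.6. The topic name, consumer group id, and the println stand-in for hydration logic are all hypothetical, invented for illustration rather than taken from Credit Karma's actual systems:

```scala
import akka.actor.ActorSystem
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.serialization.StringDeserializer

object EventConsumer {
  def main(args: Array[String]): Unit = {
    implicit val system: ActorSystem = ActorSystem("event-consumer")

    // Hypothetical broker address and group id.
    val settings =
      ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
        .withBootstrapServers("localhost:9092")
        .withGroupId("ucp-hydrator")

    Consumer
      .plainSource(settings, Subscriptions.topics("consumer-events"))
      .map(record => record.value)    // extract the event payload
      .runWith(Sink.foreach(println)) // stand-in for real hydration logic
  }
}
```

A production consumer would use committable sources and offset batching for at-least-once delivery; plainSource keeps the sketch short.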
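The unified persistence API could take a shape like the following. This is a hypothetical sketch of the framework surface, assuming a simple key-value abstraction; the Repository trait, ProfileKey, and ConsumerProfile names are invented for illustration:

```scala
import scala.collection.concurrent.TrieMap
import scala.concurrent.Future

// Hypothetical framework surface: real backends (Spanner, MySQL, BigQuery)
// would each supply an implementation of this trait.
trait Repository[K, V] {
  def get(key: K): Future[Option[V]]
  def put(key: K, value: V): Future[Unit]
}

final case class ProfileKey(id: String)
final case class ConsumerProfile(id: String, creditScore: Int)

// In-memory stand-in, useful for tests.
final class InMemoryRepository[K, V] extends Repository[K, V] {
  private val store = TrieMap.empty[K, V]
  def get(key: K): Future[Option[V]] = Future.successful(store.get(key))
  def put(key: K, value: V): Future[Unit] = Future.successful(store.update(key, value))
}
```

Callers depend only on Repository[ProfileKey, ConsumerProfile], so swapping Spanner for MySQL is a binding change, not a code change.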
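And here is a minimal sketch of field-level encryption as a reusable framework feature, using the standard javax.crypto AES-GCM primitives. In production the data key would be fetched from a KMS rather than generated in-process; the local key generation below is purely for illustration:

```scala
import java.security.SecureRandom
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.{Cipher, KeyGenerator}

object FieldCrypto {
  private val random = new SecureRandom()

  // Hypothetical: a real framework would envelope-wrap this data key via KMS.
  private val key = {
    val kg = KeyGenerator.getInstance("AES")
    kg.init(256)
    kg.generateKey()
  }

  /** Encrypts one field value; returns (nonce, ciphertext). */
  def encrypt(plaintext: Array[Byte]): (Array[Byte], Array[Byte]) = {
    val iv = new Array[Byte](12) // 96-bit nonce, the recommended size for GCM
    random.nextBytes(iv)
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv))
    (iv, cipher.doFinal(plaintext))
  }

  def decrypt(iv: Array[Byte], ciphertext: Array[Byte]): Array[Byte] = {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv))
    cipher.doFinal(ciphertext) // throws AEADBadTagException on tampering
  }
}
```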
What's Great About the Role
You will own foundational infrastructure — the frameworks you build are the building blocks that every data pipeline and product feature at Credit Karma depends on
You will work at real scale — hundreds of terabytes per day, millions of consumer profiles hydrated in real time, and strict SLAs that demand engineering rigor
You will shape developer experience — designing the APIs, SDKs, and tooling that hundreds of engineers across the company interact with daily
You will solve hard, novel problems — from building MySQL CDC pipelines with cloud-native encryption to designing Apache Beam framework SDKs that support both Java and Python
You will be part of a high-impact, collaborative team with a strong culture of technical ownership and continuous learning
Minimum Basic Requirements
7 years of professional software engineering experience building backend services and data infrastructure in Scala, Java, or a similar JVM language
7 years of experience designing and operating high-throughput, low-latency distributed systems that process data at petabyte scale
3 years of experience with streaming and messaging platforms such as Apache Kafka, Google Pub/Sub, or equivalent
3 years of experience building data pipelines on a major cloud platform (GCP, AWS, or Azure), including services like Dataflow, BigQuery, Spanner, or their equivalents
Professional experience with RPC frameworks such as Finagle, gRPC, or Akka for building production-grade service-to-service communication
Strong understanding of software engineering best practices including CI/CD, version control (Git), code review, and automated testing
Preferred Qualifications
Experience building reusable frameworks, SDKs, or platform libraries consumed by other engineering teams — you think about developer experience as a product
Experience with Apache Beam (Dataflow) including custom transforms, side inputs, windowing strategies, and pipeline optimization (a windowed-pipeline sketch follows this list)
Experience with Change Data Capture (CDC) patterns, particularly MySQL binlog-based replication to analytical stores
Experience with data encryption at rest and in transit, including key management (KMS/GSM), SPIFFE/mTLS, and certificate authority integration
Experience with schema management, data governance, and data quality frameworks in a large-scale production environment
Familiarity with infrastructure-as-code, Kubernetes (GKE), and container-based deployment models
Track record of mentoring engineers and driving technical alignment across teams through design documents and architecture reviews
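As a concrete illustration of Beam windowing from Scala, here is a small sketch using Spotify's Scio, one common Scala API over Beam; the team's internal framework SDK may differ. The bounded toy input stands in for a real Pub/Sub or Kafka source, and the event names and timestamps are invented:

```scala
import com.spotify.scio._
import org.joda.time.{Duration, Instant}

object WindowedCounts {
  def main(cmdlineArgs: Array[String]): Unit = {
    val (sc, _) = ContextAndArgs(cmdlineArgs)

    // Bounded toy input standing in for a streaming source:
    // (event name, event time).
    val events = sc.parallelize(Seq(
      ("profile-updated", new Instant(0L)),
      ("profile-updated", new Instant(60000L)),
      ("score-refreshed", new Instant(310000L))
    ))

    events
      .timestampBy { case (_, ts) => ts }            // assign event-time timestamps
      .keys                                          // keep only the event name
      .withFixedWindows(Duration.standardMinutes(5)) // 5-minute fixed windows
      .countByValue                                  // per-window counts
      .debug()                                       // print results to stdout

    sc.run().waitUntilFinish()
  }
}
```

With five-minute fixed windows, the two profile-updated events land in the first window and score-refreshed in the second, so the counts are emitted per window rather than globally.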
Intuit provides a competitive compensation package with a strong pay-for-performance rewards approach. This position may be eligible for a cash bonus, equity rewards, and benefits, in accordance with our applicable plans and programs (see more about our compensation and benefits at Intuit®: Careers | Benefits). Pay offered is based on factors such as job-related knowledge, skills, experience, and work location. To drive ongoing fair pay for employees, Intuit conducts regular comparisons across categories of ethnicity and gender.