What are the responsibilities and job description for the Backend Systems Engineer (Senior, US-Based, Python/FastAPI) (Remote) position at Kernel?
About Kernel

Kernel Intelligence is building the intelligence layer for commercial real estate (CRE) by turning unstructured documents, like leases, invoices, contracts, and diligence reports, into high-quality, decision-ready data. CRE is a $20T asset class, yet much of its critical information still lives in PDFs and siloed systems. We're fixing that with a modern, AI-powered platform designed for high-stakes, data-heavy workflows.

Our team combines deep industry experience (we previously built the leading real-estate compliance platform) with strong distributed-systems and AI infrastructure expertise. We're building the data foundation that will power the next generation of CRE intelligence.

About the Team

We're a small, senior team of builders who value clarity, curiosity, and decisive execution, shipping fast and intentionally without sacrificing correctness or reliability. We are an AI-enabled data platform for mid-market companies that value enterprise-grade (multi-tenant) systems that are scalable, secure, reliable, predictable, and accurate.

You'll work directly with experienced engineers on well-defined, high-impact backend systems that form the foundation for next-generation AI and data products. Unlike many startups, "build fast and break things" is our anti-motto. And unlike many large companies, every line of code that every developer contributes has a meaningful impact on our business.

Tech Snapshot

- Python 3.12+, AsyncIO
- FastAPI / Starlette
- SQLAlchemy 2.x (async)
- Pulsar or Kafka (typed events, producers/consumers)
- OIDC/JWT, multi-tenant auth
- Kubernetes, fully cloud-native architecture
- GitOps/Terraform infrastructure and CI/CD
- OpenTelemetry, the LGTM stack, structured logs, traces, and observability
- Distributed-systems fundamentals: idempotency, retries, distributed locks, backpressure, consistency, etc.

About the Role

As a Backend Systems Engineer, you'll design, develop, and deliver the distributed systems and infrastructure that power our AI-driven platform.
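To give a flavor of the distributed-systems fundamentals listed above (idempotency plus retries with backoff), here is a minimal, self-contained sketch. All names are hypothetical illustrations, not Kernel's actual code; a production handler would back the dedup set with a durable store (e.g., a database table keyed on event ID) rather than in-process memory.

```python
import random
import time


class IdempotentHandler:
    """Process each event at most once, retrying transient failures
    with exponential backoff plus jitter. Hypothetical sketch only."""

    def __init__(self, process, max_attempts=3, base_delay=0.05):
        self.process = process            # side-effecting business logic
        self.max_attempts = max_attempts
        self.base_delay = base_delay
        self._seen = set()                # stands in for a durable dedup store

    def handle(self, event_id, payload):
        if event_id in self._seen:        # duplicate delivery: safe no-op
            return "skipped"
        for attempt in range(self.max_attempts):
            try:
                self.process(payload)
                self._seen.add(event_id)  # record only after success
                return "processed"
            except Exception:
                if attempt == self.max_attempts - 1:
                    raise                 # exhausted retries: surface the error
                # exponential backoff with jitter before the next attempt
                time.sleep(self.base_delay * (2 ** attempt) * random.random())
```

Recording the event ID only after the side effect succeeds means a crash mid-processing leads to a retry (at-least-once delivery) rather than a silent loss, which is why the handler itself must be idempotent.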
You'll work directly with our CTO to architect observable backend services, evolve core data and AI pipelines, harden and isolate multi-tenant workloads, and build integrations across internal and customer systems.

This is an engineering-led, application-development-focused systems role, ideal for someone who enjoys software development just as much as software delivery. The ideal candidate takes ownership across the full software lifecycle, from shaping system architecture and writing production-grade services to defining CI/CD workflows, provisioning infrastructure through GitOps/Terraform, and ensuring their services remain healthy, scalable, and observable in production. You should be as comfortable designing and reviewing technical approaches as you are implementing, deploying, and operating them.

Responsibilities

- Design and build distributed backend services and APIs in Python (3.12+) using FastAPI, SQLAlchemy, and modern async patterns
- Implement event-driven pipelines (typed events, queues, idempotent handlers) that are observable and resilient
- Model, evolve, and scale data schemas and pipelines across heterogeneous persistence architectures (RDBMS, NoSQL, vector DBs)
- Apply dependency-injection patterns (e.g., dependency_injector or equivalent) for modular, testable systems
- Instrument everything (traces, metrics, structured logs) and use real runtime data to drive improvements
- Deploy and operate services on Kubernetes in collaboration with platform engineers (GitOps/Terraform)
- Drive reliability through rigorous testing (unit, async integration, and functional tests)

Example Projects

- Add new FastAPI endpoints secured by OIDC scopes with full request-context propagation
- Build observable, distributed pipelines with Apache Pulsar (producers/consumers/readers) to support data and AI workflows
- Extend DI-based resource providers with auto-instrumentation hooks and namespace replication
- Tune async SQLAlchemy transaction performance and manage connection lifecycles at scale
- Define SLO-adjacent telemetry and use traces/metrics to uncover and resolve performance bottlenecks

What You Bring

- 5+ years of experience building backend or distributed systems
- Strong fluency in Python 3.10+ with async/await (we run 3.12)
- Deep experience with FastAPI or Starlette (middleware, request context, router composition)
- Hands-on skill with SQLAlchemy 2.x async engines, sessions, and repository patterns
- Solid understanding of distributed-systems fundamentals: locks, retries, backpressure, idempotency, consistency
- Practical experience integrating OIDC/JWT in multi-tenant applications
- Comfort with dependency-injection frameworks and patterns
- An observability-first mindset: you instrument before you guess
- Experience building secure, scalable, and stable cloud-native services on Kubernetes with Istio service mesh (or an alternative)

Bonus Points For

- Production experience with Pulsar or Kafka (typed schemas, batching, QoS tuning)
- Experience with Zitadel or other modern IdPs
- Familiarity with multi-tenant runtime schemas or dynamic data isolation; vCluster experience a huge plus
- Experience with AI-integration patterns and tools (e.g., LiteLLM, NotDiamond)
- GitOps (Argo CD preferred) and CI/CD (GitLab preferred) design and management
- Security-first practices: secrets hygiene (Vault preferred), auditable logging, least-privilege design
- Comfort using AI-assisted development tools
- Experience with enterprise SaaS systems

Mindset

- You take ownership and deliver: you ship, measure, iterate, and operate
- You communicate clearly and collaborate well across engineering and product
- You favor clarity, instrumentation, and reliability over ceremony
- You enjoy solving foundational problems in complex systems

Why Join Us

- Work closely with a highly technical founding team that values design clarity and data-driven engineering
- Build core systems that power AI-enabled data products in a massive, under-modernized industry
- Have direct, visible impact in an early-stage environment
- Flexible remote role with competitive compensation
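As an illustration of the OIDC/JWT scope enforcement mentioned in the example projects, here is a minimal sketch of claim decoding and scope checking. This is deliberately simplified and uses only the standard library: it does NOT verify the token signature (a real service must validate it against the IdP's published keys), and the function names are hypothetical, not part of any specific framework.

```python
import base64
import json


def decode_claims(jwt_token):
    """Decode the (unverified!) payload segment of a JWT.
    Production code must verify the signature against the IdP's keys;
    this sketch only illustrates claim and scope handling."""
    payload_b64 = jwt_token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))


def require_scope(claims, required):
    """Enforce an OIDC scope, the way a FastAPI dependency might
    before letting a request reach the endpoint handler."""
    granted = set(claims.get("scope", "").split())
    if required not in granted:
        raise PermissionError(f"missing scope: {required}")
    return claims
```

In a FastAPI service, a check like `require_scope` would typically live in a router-level dependency so that every endpoint under a router shares the same enforcement and the claims propagate with the request context.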
Salary: $125,000 - $200,000