What are the responsibilities and job description for the Senior Data and Analytics Engineer position at FULL CIRCLE GROUP & THE LEADERSHIP CIRCLE?
Overview
At Leadership Circle, we are reimagining how leadership develops, moving from one-time events to personalized, continuous, human-centered experiences.
We are seeking a Senior Data and Analytics Engineer to help build the data foundation that powers our next-generation platform. This is a new and critical function within our engineering organization, and the person in this role will have the opportunity to shape our data strategy from the ground up.
This role sits at the intersection of data engineering, analytics, and software development. You will design and build data pipelines, models, and schemas that enable user-facing analytics features, internal dashboards, and AI/ML experimentation. You will also evaluate and recommend cloud-based data platforms and tools such as Snowflake, Databricks, or BigQuery to power our data warehouse and data products.
You will partner closely with product, engineering, design, and data science stakeholders to deliver reliable, well-documented, and scalable data solutions. Success in this role requires hands-on technical depth, strong analytical thinking, and the ability to establish best practices in an environment where much of this work is being done for the first time.
A core expectation of this role is the active and effective use of AI-assisted development tools such as Claude Code. Success in this role requires embracing AI as a fundamental part of how data systems are designed, built, and maintained, using it to accelerate development, improve code quality, and increase overall engineering effectiveness.
Responsibilities
Data Engineering & Analytics
• Build, maintain, and optimize data pipelines, models, and schemas in collaboration with data scientists and platform engineers
• Design and implement a scalable data warehouse architecture to support both customer-facing analytics and internal reporting needs
• Evaluate and recommend cloud-based data platforms such as Snowflake, Databricks, or BigQuery to power data products and the internal data warehouse, establishing foundational infrastructure
• Develop and maintain data transformations that ensure clean, reliable, and well-documented datasets for downstream consumers
• Implement data quality checks, monitoring, and alerting to ensure pipeline reliability and data integrity
Data Products & Tooling
• Partner with product, engineering, and design to define and deliver user-facing analytics features
• Develop tools, dashboards, data products, and models to support both customer needs and internal insights
• Support AI and ML experimentation by providing clean, reliable, and well-documented data to data scientists and engineers
• Create and maintain documentation, data dictionaries, and lineage tracking to ensure organizational understanding of data assets
Best Practices & Governance
• Help establish best practices for analytics engineering including testing, observability, and data governance
• Define and enforce data modeling standards, naming conventions, and schema design principles
• Contribute to engineering best practices including CI/CD for data pipelines, version control for transformations, and automated testing of data models
Cross-Functional Collaboration
• Serve as a key member of a cross-functional team spanning Product, UX, Engineering, and other departments, partnering closely to understand data needs, align on priorities, and iteratively deliver analytics capabilities that serve both the product organization and the broader business
• Communicate technical concepts clearly and effectively to both technical and non-technical stakeholders
• Mentor other engineers on data best practices through code reviews, pairing, and knowledge sharing
• Foster a collaborative team environment by communicating proactively, sharing context, asking questions openly, and contributing to thoughtful technical discussions
AI-Assisted Development
• Actively use AI-assisted development tools such as Claude Code as a core part of daily engineering work including code generation, data modeling, debugging, and system comprehension
• Continuously improve how AI tools are used within the team to increase development speed, reduce friction, and improve overall engineering effectiveness
Requirements
• 7 or more years of experience in data engineering, analytics engineering, or software engineering with a strong data focus
• Proven experience designing and building data warehouses, data lakes, or similar analytical data stores from the ground up
• Demonstrated experience partnering with Product and UX to bring analytics products to market
• Deep proficiency with SQL and experience with modern data transformation frameworks such as dbt, Apache Spark, or similar tools
• Strong experience with cloud data platforms and services such as AWS Redshift, Snowflake, BigQuery, Databricks, or similar technologies
• Experience building and maintaining ETL/ELT pipelines using tools such as Airflow, Dagster, Prefect, Fivetran, or similar orchestration platforms
• Proficiency in Python or TypeScript for data processing, scripting, and tooling development
• Strong understanding of data modeling principles including dimensional modeling, star schemas, and data normalization
• Experience with PostgreSQL or similar relational databases including query optimization and schema design
• Familiarity with analytics and BI tools such as Looker, Metabase, Tableau, or similar platforms
• Ability to work effectively in environments with evolving requirements and shared technical ownership
• Strong communication skills and a team-first mindset with a willingness to ask questions, share context, and collaborate to find the best solutions
• Comfort giving and receiving feedback in code reviews and technical discussions
• Demonstrated experience using AI tools such as Claude Code, GitHub Copilot, or ChatGPT as a core part of the development workflow
• Strong ability to evaluate, validate, and improve AI-generated outputs to ensure production-quality results
• Mindset that embraces AI as a force multiplier for engineering productivity and continuous learning
Bonus
• Experience with infrastructure-as-code tools such as CDK, Terraform, or CloudFormation for data infrastructure provisioning
• Experience building AI-powered features such as LLM integrations, RAG systems, or ML pipelines that rely on well-structured data