Overview
Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets.
Our integrated ecosystem includes:
- PaaS (Platform as a Service): the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios.
- SaaS (Software as a Service): a cloud platform delivering unmatched performance, intelligence, and execution at scale.
- S&C (Solutions and Consulting Suite): modular technology playbooks designed to manage, grow, and optimize company performance.
With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage.
The Opportunity
As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations.
Responsibilities and Duties
- Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms).
- Ensure consistent data hygiene, normalization, and enrichment across source systems.
- Develop and maintain data models and data warehouses optimized for analytics and operational reporting.
- Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights.
- Own the documentation of data schemas, definitions, lineage, and data quality controls.
- Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets.
- Monitor pipeline performance and proactively resolve data discrepancies or failures.
- Contribute to architectural decisions related to internal data infrastructure and tools.
Requirements
- 3–5 years of experience as a data engineer, analytics engineer, or in a similar role.
- Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt).
- Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
- Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday.
- Proficiency in Python or another scripting language for data manipulation.
- Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment).
- Strong understanding of data governance, documentation, and schema management.
- Excellent communication skills and ability to work cross-functionally.
Benefits
- Flexible work arrangements (including hybrid mode)
- Great Paid Time Off (PTO) policy
- Comprehensive benefits package (Medical / Dental / Vision / Disability / Life)
- Healthcare and Dependent Care Flexible Spending Accounts (FSAs)
- 401(k) retirement plan
- Access to HSA-compatible plans
- Pre-tax commuter benefits
- Employee Assistance Program (EAP)
- Opportunities for professional growth and development
- A supportive, dynamic, and inclusive work environment
Why Join Us?
We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy raising the bar for success ever higher. We work hard, but we also choose to have fun while doing it.
The compensation range for this role is $75,000 to $80,000 USD.