What are the responsibilities and job description for the Cylake - Data Pipeline Engineer position at Sagent Management?
Your Impact
Join a small team building the next generation of cybersecurity products from the ground up. Led by industry veterans with a proven track record of success, you will architect, build, and deliver hugely impactful products with a world-class team. You will have the opportunity to grow your career and skills along with the company from the very start.
Role Overview
Design, build, and maintain a scalable, open-source data lakehouse architecture supporting petabyte-scale analytics workloads. You will be responsible for architecting end-to-end data pipelines from ingestion through transformation to consumption, ensuring high performance, reliability, and data quality.
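To make the ingestion → transformation → consumption stages concrete, here is a minimal, dependency-free Python sketch of that pipeline shape. The event fields and the severity filter are hypothetical illustrations, not part of this role's actual stack; a production pipeline would use the tools listed below (Kafka, Spark, Iceberg) rather than in-memory lists.

```python
import csv
import io
import json

# Hypothetical raw security events, standing in for data arriving at the
# ingestion layer (e.g. from a message queue or object store).
RAW_CSV = """event_id,bytes,severity
1,1024,low
2,4096,high
3,2048,high
"""

def ingest(raw: str) -> list[dict]:
    """Ingestion: parse raw CSV records into row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transformation: cast types and keep only high-severity events."""
    return [
        {"event_id": int(r["event_id"]), "bytes": int(r["bytes"])}
        for r in rows
        if r["severity"] == "high"
    ]

def consume(rows: list[dict]) -> str:
    """Consumption: serialize curated records for downstream analytics."""
    return json.dumps(rows)

result = consume(transform(ingest(RAW_CSV)))
print(result)
```

Each stage is a pure function over the previous stage's output, which is the same layering a lakehouse pipeline applies at petabyte scale with distributed engines.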
Required Experience
- A proven track record of success architecting, building, and running large-scale data systems (PB scale)
- Experience with both batch and real-time processing architectures
- Experience with open-source data lakehouse components, including Apache Iceberg, PostgreSQL, Neo4j, and Apache Parquet
- Experience with stream-processing and data-analytics tools such as Apache Kafka, Spark, and Flink
- Understanding and experience with data transformation solutions
- Strong programming experience with Python
- Strong communication and documentation skills
- Experience with cloud service provider (CSP) data platforms is a plus
- Understanding of data lineage, quality, and governance tooling is a plus
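The batch-versus-real-time requirement above can be illustrated with a toy tumbling-window aggregation, the core pattern behind stream processors like Flink. This is a hedged, stdlib-only sketch: the 60-second window and the event shapes are arbitrary choices for the example, not details of this company's systems.

```python
from collections import defaultdict

def tumbling_window_counts(events: list[tuple[int, str]],
                           window_seconds: int = 60) -> dict:
    """Count events per key within fixed, non-overlapping time windows.

    Each event is a (timestamp_seconds, key) pair; the window start is
    the timestamp rounded down to the nearest window boundary.
    """
    counts: dict = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical event stream: two logins in the first window,
# a scan and a login in the second.
events = [(5, "login"), (15, "login"), (65, "scan"), (70, "login")]
print(tumbling_window_counts(events))
```

A batch job would compute the same aggregation over a bounded, historical dataset; a streaming engine maintains these window counts incrementally as events arrive, which is the distinction the requirement is getting at.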
We’re committed to building a diverse, inclusive workplace where everyone can do their best work. We are proud to be an equal opportunity employer and do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. If you require a reasonable accommodation during the application or interview process, please let us know — we’re happy to support you.
Salary: $150,000 - $250,000