What are the responsibilities and job description for the Staff Data Engineer position at Eton Solution?
*Immigration sponsorship is not available in this role*
We are looking for a Data Engineer with 8 years of experience and deep expertise in Flink SQL to join our engineering team. This role is ideal for someone who thrives on building robust real-time data processing pipelines and has hands-on experience designing and optimizing Flink SQL jobs in a production environment.
You’ll work closely with data engineers, platform teams, and product stakeholders to create scalable, low-latency data solutions that power intelligent applications and dashboards.
⸻
Key Responsibilities:
• Design, develop, and maintain real-time streaming data pipelines using Apache Flink SQL.
• Collaborate with platform engineers to scale and optimize Flink jobs for performance and reliability.
• Build reusable data transformation logic and deploy it to production-grade Flink clusters.
• Ensure high availability and correctness of real-time data pipelines.
• Work with product and analytics teams to understand requirements and translate them into Flink SQL jobs.
• Monitor and troubleshoot job failures, backpressure, and latency issues.
• Contribute to internal tooling and libraries that improve Flink developer productivity.
Required Qualifications:
• Deep hands-on experience with Flink SQL and the Apache Flink ecosystem.
• Strong understanding of event-time vs. processing-time semantics, watermarks, and state management.
• 3 years of experience in data engineering, with a strong focus on real-time/streaming data.
• Experience writing complex Flink SQL queries, UDFs, and windowing operations.
• Proficiency in working with streaming data formats such as Avro, Protobuf, or JSON.
• Experience with messaging systems like Apache Kafka or Pulsar.
• Familiarity with containerized deployments (Docker, Kubernetes) and CI/CD pipelines.
• Solid understanding of distributed system design and performance optimization.
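For context, here is a minimal sketch of the kind of Flink SQL work this role involves — defining a streaming source with a watermark and aggregating over event-time windows. The table name, Kafka topic, fields, and window size are all illustrative, not part of an actual Eton Solution codebase:

```sql
-- Hypothetical source table backed by a Kafka topic (names are illustrative)
CREATE TABLE clicks (
    user_id STRING,
    url STRING,
    event_time TIMESTAMP(3),
    -- Watermark: tolerate up to 5 seconds of out-of-order events
    WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
    'connector' = 'kafka',
    'topic' = 'clicks',
    'properties.bootstrap.servers' = 'localhost:9092',
    'format' = 'json'
);

-- Per-user click counts over 1-minute tumbling event-time windows
SELECT
    user_id,
    TUMBLE_START(event_time, INTERVAL '1' MINUTE) AS window_start,
    COUNT(*) AS click_count
FROM clicks
GROUP BY
    user_id,
    TUMBLE(event_time, INTERVAL '1' MINUTE);
```

The watermark declaration is what makes the event-time windowing in the query correct under late or out-of-order data, which is exactly the event-time vs. processing-time distinction listed above.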
Nice to Have:
• Experience with other stream processing frameworks (e.g., Spark Structured Streaming, Kafka Streams).
• Familiarity with cloud-native data stacks (AWS Kinesis, GCP Pub/Sub, Azure Event Hubs).
• Experience in building internal tooling for observability or schema evolution.
• Prior contributions to the Apache Flink community or similar open-source projects.
Why Join Us:
• Work on cutting-edge real-time data infrastructure that powers critical business use cases.
• Be part of a high-caliber engineering team with a culture of autonomy and excellence.
• Flexible working arrangements with competitive compensation.