What are the responsibilities and job description for the AWS Data Engineer position at Triunity Software?
Role Overview
We are seeking a skilled AWS Data Engineer to design, build, and maintain scalable data pipelines and data infrastructure on AWS. The ideal candidate will work closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using AWS services
- Build and optimize ETL/ELT processes for large datasets
- Work with structured and unstructured data from multiple sources
- Implement data lakes and data warehouses on AWS
- Ensure data quality, integrity, and governance
- Monitor and troubleshoot data workflows and pipelines
- Collaborate with cross-functional teams to gather requirements and deliver solutions
- Optimize data storage and processing for performance and cost efficiency
- Implement security and compliance best practices in data handling
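The pipeline responsibilities above can be sketched as a minimal extract-transform-load flow. This is an illustrative sketch in plain Python: the function names and record schema are hypothetical, and in a real AWS pipeline the load step would write to S3 or Redshift via the AWS SDK rather than to an in-memory list.

```python
from datetime import date

def extract(raw_rows):
    """Parse raw CSV-like rows (hypothetical schema: id,amount,day)."""
    for row in raw_rows:
        rec_id, amount, day = row.split(",")
        yield {"id": int(rec_id),
               "amount": float(amount),
               "day": date.fromisoformat(day)}

def transform(records):
    """Drop invalid rows and add a derived column -- a typical data-quality step."""
    for rec in records:
        if rec["amount"] < 0:  # data-quality rule: reject negative amounts
            continue
        rec["amount_cents"] = int(round(rec["amount"] * 100))
        yield rec

def load(records, sink):
    """Append cleaned records to a sink (stand-in for an S3/Redshift write)."""
    count = 0
    for rec in records:
        sink.append(rec)
        count += 1
    return count

raw = ["1,19.99,2024-01-02", "2,-5.00,2024-01-02", "3,7.50,2024-01-03"]
sink = []
loaded = load(transform(extract(raw)), sink)
print(loaded)  # 2 -- the negative-amount row was filtered out
```

In production the same extract/transform/load split maps naturally onto AWS services (e.g. Glue or Lambda running the transform, S3 as the sink), which keeps each stage independently testable and monitorable.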
Core AWS Skills
- Strong experience with AWS services such as:
  - Amazon S3 (data lake storage)
  - AWS Glue (ETL pipelines)
  - Amazon Redshift (data warehouse)
  - AWS Lambda (serverless processing)
  - Amazon EMR (big data processing)
  - Amazon Kinesis (streaming data)
- Experience with AWS IAM, CloudWatch, and CloudFormation
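One concrete skill tying S3, Glue, and Athena together is laying out data-lake objects with Hive-style partition keys (`year=`/`month=`/`day=`), which Glue crawlers and Athena can discover as table partitions. A small sketch of building such a key (the `prefix`/`table` names are hypothetical):

```python
from datetime import date

def partition_key(prefix: str, table: str, event_date: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day=),
    the layout AWS Glue and Athena recognize as partitions."""
    return (
        f"{prefix}/{table}/"
        f"year={event_date.year}/month={event_date.month:02d}/day={event_date.day:02d}/"
        f"{filename}"
    )

key = partition_key("raw", "orders", date(2024, 3, 7), "part-0000.parquet")
print(key)  # raw/orders/year=2024/month=03/day=07/part-0000.parquet
```

Partition pruning on these keys is also one of the main levers for the cost and performance optimization mentioned under Key Responsibilities, since queries scan only the partitions they need.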
Required Skills & Qualifications
- Bachelor’s/Master’s degree in Computer Science, Engineering, or related field
- 3–8 years of experience in data engineering or related roles
- Strong proficiency in SQL and Python (or Scala)
- Experience building ETL pipelines and working with big data technologies
- Hands-on experience with AWS cloud ecosystem
- Understanding of data warehousing concepts and data modeling
- Familiarity with distributed processing frameworks (Spark, Hadoop)
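The SQL-and-data-modeling expectations above can be illustrated with a tiny star-schema query. Here in-memory SQLite stands in for a warehouse such as Redshift, and the fact/dimension tables are hypothetical, for illustration only:

```python
import sqlite3

# In-memory SQLite as a stand-in for a warehouse like Redshift;
# the fact/dimension schema below is invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (sale_id INTEGER PRIMARY KEY,
                              product_id INTEGER REFERENCES dim_product(product_id),
                              amount REAL);
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?)",
                 [(1, "books"), (2, "games")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(10, 1, 12.0), (11, 1, 8.0), (12, 2, 30.0)])

# Typical warehouse query: join the fact table to a dimension and aggregate.
rows = conn.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.category ORDER BY p.category
""").fetchall()
print(rows)  # [('books', 20.0), ('games', 30.0)]
```

The same fact/dimension split and join-then-aggregate pattern carries over directly to Redshift or to Spark SQL on EMR.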