What are the responsibilities and job description for the Sr. Data Engineer position at Universal Business Consulting?
About the Role
We are seeking an experienced Senior Data Engineer to support a large-scale Salesforce CRM Data Migration and Data Warehousing initiative. This role is focused on building and maintaining scalable, cloud-based data pipelines using Databricks, Spark, and Python to enable reliable data integration into the enterprise Data Hub for analytics and reporting.
This is a hands-on data engineering role centered around ETL/ELT pipeline development — not a Java or microservices development position.
You will work in a fast-paced Agile/Scrum environment, collaborating with architects and engineers while independently owning critical data deliverables.
Key Responsibilities
Data Engineering & Pipeline Development
- Design and develop scalable ETL/ELT pipelines using Databricks, Spark, and PySpark (a minimal sketch follows this list)
- Extract, transform, and ingest Salesforce CRM data into the centralized Data Hub
- Build high-performance workflows for large, distributed datasets
- Optimize Spark jobs for scalability, performance, and cost efficiency
- Convert business requirements into robust technical data solutions
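To give a concrete flavor of this kind of work, here is a minimal PySpark sketch of one ELT step: it reads Salesforce Account records already landed in cloud storage, applies a light transformation, and writes the result to the Data Hub as a Delta table. The paths, column names, and the `data_hub.salesforce_accounts` table name are illustrative assumptions, not details of the actual project.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical locations -- assumptions for illustration only.
RAW_PATH = "s3://example-landing-zone/salesforce/accounts/"  # assumed landing path
HUB_TABLE = "data_hub.salesforce_accounts"                   # assumed Data Hub table

spark = SparkSession.builder.appName("sf-account-ingest").getOrCreate()

# Extract: read raw Salesforce Account records landed as JSON.
raw = spark.read.json(RAW_PATH)

# Transform: drop soft-deleted rows and normalize column names.
accounts = (
    raw.filter(~F.col("IsDeleted"))
       .select(
           F.col("Id").alias("account_id"),
           F.col("Name").alias("account_name"),
           F.to_timestamp("LastModifiedDate").alias("last_modified_at"),
       )
)

# Load: write the batch to the Data Hub as a Delta table.
accounts.write.format("delta").mode("overwrite").saveAsTable(HUB_TABLE)
```

In a production pipeline this step would typically run as a scheduled Databricks job, with incremental (merge/upsert) loads rather than a full overwrite once volumes grow.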
Cloud & Platform Integration
- Integrate pipelines with AWS services including S3, SQS, SNS, and Lambda (see the sketch after this list)
- Manage data ingestion, storage, and movement across cloud environments
- Support deployment, monitoring, and troubleshooting of pipelines
- Ensure high availability, reliability, and performance of systems
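As a rough sketch of the AWS integration side, the snippet below long-polls an SQS queue for S3 event notifications signalling that a new Salesforce extract has landed, then downloads each object for downstream processing. The queue URL is a placeholder assumption; the calls themselves are standard boto3.

```python
import json
import boto3

# Placeholder queue -- an assumption for illustration, not a real project value.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/sf-extract-events"

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

def poll_for_new_extracts():
    """Long-poll SQS for S3 event notifications and fetch each new object."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        for record in body.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # Download the landed extract for the ingestion pipeline to process.
            s3.download_file(bucket, key, f"/tmp/{key.rsplit('/', 1)[-1]}")
        # Delete the message once handled so it is not redelivered.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

An SNS topic or Lambda trigger could replace the polling loop; the event payload shape stays the same.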
Operations & Agile Delivery
- Maintain and enhance existing production pipelines (business-as-usual support)
- Monitor jobs, resolve failures, and implement performance improvements (a minimal retry sketch follows this list)
- Participate in Agile ceremonies (sprint planning, standups, retrospectives)
- Collaborate with cross-functional stakeholders
- Independently own assigned deliverables while contributing to team success
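For the monitoring and failure-handling side, a pattern like the sketch below is common: wrap a pipeline step in a retry loop that logs each failure and re-raises once attempts are exhausted so alerting can fire. The `run_with_retries` helper and its parameters are hypothetical, shown only to illustrate the responsibility.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline-monitor")

def run_with_retries(step, attempts=3, backoff_seconds=60):
    """Run a pipeline step, retrying transient failures with linear backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            log.exception(
                "Step %s failed (attempt %d/%d)", step.__name__, attempt, attempts
            )
            if attempt == attempts:
                raise  # surface the failure so paging/alerting can take over
            time.sleep(backoff_seconds * attempt)
```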
Required Qualifications
- 10 years of experience in Data Engineering
- Strong Python programming skills
- Hands-on experience with PySpark / Apache Spark
- Proven experience with Databricks
- Expertise in building enterprise-scale ETL/ELT pipelines
- Experience integrating with AWS services (S3, SQS, SNS, Lambda)
- Knowledge of distributed data processing concepts
- Experience monitoring, troubleshooting, and optimizing pipelines
- Familiarity with Agile/Scrum delivery models
- Strong communication and collaboration skills
Nice to Have
- Exposure to Salesforce CRM data
- Basic knowledge of MuleSoft
- Java or Scala familiarity
- CI/CD or Infrastructure-as-Code (Terraform, GitHub Actions, etc.)
Technology Stack
- Language: Python
- Processing: Spark / PySpark
- Platform: Databricks
- Cloud: AWS
- Source System: Salesforce CRM
- Integration: MuleSoft (team-level)
Interview Process
- 45-minute technical/coding round
- Final technical/team discussion
- Maximum 2 rounds