What are the responsibilities and job description for the Data Engineer III 4P/598 position at 4P Consulting Inc.?
Data Engineer III
Experience Level: 5–10 Years
Location: Birmingham, AL
Contract: 1 Year
Client: Southern Company Services
Position Overview
The Data Engineer III is responsible for designing, building, and optimizing scalable data pipelines and analytics solutions across relational databases, NoSQL systems, and cloud-based data lake environments. This role focuses on transforming raw data into reliable, structured, and machine-readable formats that support enterprise analytics, AI/ML initiatives, and operational reporting.
The ideal candidate brings strong experience in SQL, big data frameworks, cloud platforms, and modern data engineering best practices.
Key Responsibilities
Data Engineering & Pipeline Development
- Design, develop, test, deploy, and support scalable data pipelines
- Create and maintain Databricks pipelines for multiple data sources
- Develop batch and real-time data processing solutions
- Normalize databases and design schemas aligned with application requirements
- Combine and transform raw data into structured, analytics-ready datasets
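To make the "raw data into structured, analytics-ready datasets" responsibility concrete, here is a minimal sketch in plain Python of the kind of batch transform such a pipeline performs. The record shape and field names (`meter_id`, `ts`, `kwh`) are hypothetical illustrations, not tied to any actual Southern Company data source.

```python
import json
from datetime import datetime, timezone

def transform(raw_records):
    """Normalize raw event dicts into structured, analytics-ready rows.

    Drops records missing required keys, standardizes timestamps to
    ISO 8601 UTC, and casts readings to float. All field names here
    are hypothetical.
    """
    rows = []
    for rec in raw_records:
        if "meter_id" not in rec or "ts" not in rec:
            continue  # a real pipeline would quarantine these records
        rows.append({
            "meter_id": str(rec["meter_id"]),
            "event_time": datetime.fromtimestamp(
                rec["ts"], tz=timezone.utc).isoformat(),
            "kwh": float(rec.get("kwh", 0.0)),
        })
    return rows

raw = [
    {"meter_id": 101, "ts": 1700000000, "kwh": "3.2"},
    {"ts": 1700000060},                    # missing meter_id -> dropped
    {"meter_id": 102, "ts": 1700000120},   # missing kwh -> defaults to 0.0
]
structured = transform(raw)
print(json.dumps(structured, indent=2))
```

In a Databricks or Spark context the same cleanse-and-cast logic would typically be expressed as DataFrame operations rather than a Python loop.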
Data Modeling & Architecture
- Design and implement data models (star schema, snowflake, relational, NoSQL)
- Implement data access strategies and storage optimization techniques
- Develop functional and technical designs for data engineering solutions
- Support diverse data source integration and enrichment
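As a hedged sketch of the star-schema modeling mentioned above: one fact table keyed into dimension tables, with an aggregation along a dimension attribute. Table and column names are hypothetical illustrations, expressed here as Python dicts rather than warehouse DDL.

```python
# Dimension tables: surrogate key -> descriptive attributes.
dim_customer = {
    1: {"name": "Acme Utilities", "region": "AL"},
    2: {"name": "Beta Power", "region": "GA"},
}
# Fact table: foreign keys into the dimensions plus a numeric measure.
fact_usage = [
    {"customer_key": 1, "date_key": 20240101, "kwh": 120.5},
    {"customer_key": 2, "date_key": 20240101, "kwh": 98.0},
]

def usage_by_region(facts, customers):
    """Aggregate the fact table's measure by a dimension attribute,
    i.e. the star-schema join-then-group-by pattern."""
    totals = {}
    for row in facts:
        region = customers[row["customer_key"]]["region"]
        totals[region] = totals.get(region, 0.0) + row["kwh"]
    return totals

print(usage_by_region(fact_usage, dim_customer))
```

In SQL this would be a `JOIN` from the fact table to `dim_customer` followed by `GROUP BY region`; a snowflake variant would further normalize the dimensions.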
Big Data & Cloud Technologies
- Develop solutions using Spark, Hive, Hadoop
- Build and maintain solutions using Azure ecosystem tools:
  - Azure Data Lake
  - Azure Data Factory
  - Azure Databricks
  - Azure Synapse
  - Azure Key Vault
  - Power BI
- Work with MSBI tools (SSIS, SSAS), Informatica, Oracle GoldenGate
AI/ML & Advanced Analytics Support
- Support statistical models and AI/ML solutions using Python and/or R
- Prepare data pipelines to enable machine learning workflows
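One common step in preparing data for ML workflows is scaling numeric features before they reach a model. A minimal sketch in plain Python (column names are hypothetical; in practice this would usually be done with a library such as scikit-learn or Spark ML):

```python
def scale_features(rows, cols):
    """Min-max scale the named numeric columns to [0, 1] so they are
    comparable inputs for a downstream ML model."""
    for col in cols:
        values = [r[col] for r in rows]
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0  # avoid divide-by-zero on constant columns
        for r in rows:
            r[col] = (r[col] - lo) / span
    return rows

data = [{"kwh": 10.0}, {"kwh": 20.0}, {"kwh": 30.0}]
print(scale_features(data, ["kwh"]))
```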
DevOps & Modern Engineering Practices
- Implement CI/CD pipelines for data engineering deployments
- Work within Agile development environments
- Utilize containerization tools (Docker, OpenShift)
- Develop API and web service integrations for data sourcing and delivery
Data Quality & Governance
- Implement and maintain data quality frameworks and tools
- Ensure consistency, accuracy, and reliability of data assets
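The data quality responsibilities above can be sketched as a small rule-based validation pass. The specific rules and field names here are illustrative assumptions, not a description of any particular quality framework:

```python
def run_quality_checks(rows):
    """Apply simple rule-based checks to a batch of rows.

    Returns (passed, failures) where failures is a list of
    (row_index, reason) pairs. Rules shown: non-null key,
    key uniqueness, and value-range validation.
    """
    failures = []
    seen_ids = set()
    for i, row in enumerate(rows):
        meter_id = row.get("meter_id")
        if meter_id is None:
            failures.append((i, "null meter_id"))
        elif meter_id in seen_ids:
            failures.append((i, "duplicate meter_id"))
        else:
            seen_ids.add(meter_id)
        kwh = row.get("kwh")
        if kwh is None or not (0 <= kwh <= 10_000):
            failures.append((i, "kwh out of range"))
    return (len(failures) == 0, failures)

good = [{"meter_id": "a", "kwh": 5.0}, {"meter_id": "b", "kwh": 7.5}]
bad = good + [{"meter_id": "a", "kwh": -1.0}]
print(run_quality_checks(good))
print(run_quality_checks(bad))
```

Dedicated tools (e.g. quality rules in Informatica or expectations in Databricks pipelines) implement the same pattern declaratively at scale.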
Required Qualifications
- 5–10 years of hands-on data engineering experience
- Advanced SQL expertise
- Strong experience with Spark, Hive, Hadoop
- Hands-on experience with Azure cloud data tools
- Experience building Databricks pipelines
- Experience with MSBI (SSIS/SSAS), Informatica, Oracle, SQL Server
- Experience with batch and real-time data processing frameworks
- Experience with data modeling and schema design
- Experience working with APIs and web services
Preferred Qualifications
- Experience supporting AI/ML workflows
- Experience with containerization (Docker, OpenShift)
- Strong background in DevOps and CI/CD practices
- Experience working in enterprise-scale environments