What are the responsibilities and job description for the Data Platform Engineer position at Jobs via Dice?
Dice is the leading career destination for tech experts at every stage of their careers. Our client, New York Technology Partners, is seeking the following. Apply via Dice today!
Title: Data Platform Engineer
Location: Newark, NJ (Onsite)
Position: Contract
Job Description:
Design and develop data pipelines and models using Microsoft Fabric and OneLake, with a strong focus on interoperability and performance using columnar formats like Parquet and efficient serialization formats like Protobuf.
Collaborate with cross-functional teams (Product Owners, Azure Solution Architects, Data & Software Engineers) to drive a Fabric-native data platform that integrates seamlessly with Power BI.
Enable modern data architecture patterns, including Lakehouse, near real-time streaming, schema evolution, and efficient serialization in messaging and APIs.
Maintain and automate scalable data pipelines, emphasizing performance, metadata consistency, and optimal use of storage formats such as Parquet within the Microsoft ecosystem.
Champion DevOps practices, implementing CI/CD pipelines (via Azure DevOps or GitLab Pipelines) and Infrastructure-as-Code for consistent, secure deployment of Fabric-based solutions.
Monitor and optimize platform performance using observability tools, ensuring health, reliability, and cost-efficiency.
Drive cloud migration and modernization efforts, guiding the transition to Microsoft Azure and Fabric, ensuring format and schema