2020 — Cubic Transportation Systems, Inc. Is Hiring a Principal Data Engineer (Remote)
Business Unit: Cubic Transportation Systems

Company Details: Cubic offers an opportunity to provide innovative technology for government and commercial customers around the globe, helping to solve their future problems today. We’re the leading integrator of payment and information technology and services for intelligent travel solutions worldwide, and the leading provider of realistic combat training systems, secure communications and networking, and highly specialized support services for military and security forces of the U.S. and allied nations. If you have an entrepreneurial spirit and thrive in an innovative environment, we want to talk to you about your next role at Cubic! We are seeking employees inspired by technology and motivated by the rewards of hard work, commitment, teamwork, quality, integrity, and respect. We invite you to explore opportunities with Cubic.

Job Details:

Job Summary: Incumbents of this position will be responsible for designing, building, and optimizing analytics and data management solutions to expand the capabilities and products delivered by our analytics platform in Microsoft Azure. Must be comfortable interfacing with and supporting multiple teams and working with product managers, data engineers, data scientists, business analysts, and project managers at different regional offices across the globe. The ideal candidate has a passion for data, has experience building data management solutions and deploying them to production environments, and enjoys working in a self-directed, semi-structured environment with minimal supervision. Incumbents of this position will regularly exercise discretionary and substantial decision-making authority.
Essential Job Duties and Responsibilities:
- Designs solutions aligned with customer requirements for projects initiated by supported lines of business, in coordination with other technical departments.
- Designs and develops novel analytic applications and data management systems to provide new product capabilities and features.
- Designs and implements modern data engineering and software architecture patterns: Big Data, Lambda, Kappa, Producer-Consumer (Kafka/Confluent), Spark Streaming, real-time streaming, data warehouses, etc.
- Designs, develops, and implements solutions using PySpark, Python, Scala, and Spark SQL in Azure Databricks.
- Brings deep, hands-on experience building cloud solutions with Azure Synapse, Azure Databricks, Delta Lake, Spark configuration, Azure Data Lake Store, Azure Key Vault, Azure Data Factory, Azure Event Hubs, Postgres, or other enterprise database platforms, to name a few.
- Applies technical knowledge to design solutions that meet business and IT needs, creates Data Platform and Analytics roadmaps, and ensures long-term technical viability of new deployments, infusing key analytics technologies where appropriate (e.g., SQL Server, Azure Synapse, API services, Big Data, Data Lake, Azure Databricks, Power BI).
- Has a good understanding of, and hands-on experience with, market-leading ETL tools: Qlik, Matillion, Fivetran, Talend, etc.
- Designs CI/CD pipelines and Azure Repos for data-pipeline and database deployments through different mechanisms (Visual Studio DACPAC, Azure Pipelines, Tabular).
- Has good knowledge of PowerShell, DevOps, JIRA, Confluence, and release management (Dev to Prod).
- Has a good understanding of Microsoft Azure: IaaS, PaaS, and SaaS.
- Deploys applications and libraries to cloud environments; performs database maintenance, performance tuning, and query optimization; and supports end-to-end pipeline and solution QA.
- Conducts and directs in-house design reviews and technical meetings, writes technical documentation, and makes presentations before management and customers.
- Leads implementation of complex big data projects, focusing on collecting, parsing, managing, evaluating, and visualizing large sets of data to turn information into insights using multiple platforms.
- Supports development of project concepts, objectives, specifications, and resource needs.
- Prepares design specifications, analyses, and recommendations.
- Monitors project status and progress and measures system performance.
- Provides mentorship to junior engineering staff.

Technologies we use:
- Azure Cloud Platform Services
- Azure Data Factory
- Synapse Analytics / Azure Data Warehouse
- Databricks
- Analysis Services
- Azure Functions
- Azure DevOps
- Power BI

Minimum Job Requirements:
- Bachelor’s degree in computer science or equivalent.
- 8 years of hands-on data engineering experience using Azure Cloud Platform Services.
- Azure certification strongly preferred.
- Experience bringing data pipelines into production.
- Ability to build processes supporting data transformation, data structures, metadata, dependencies, and workload management.
- Database design, schema definition, and database optimization.
- Experience building elastic, scalable APIs.
- Experience developing in an Agile environment.
- Experience with version control tools.
- Strong analytical and problem-solving skills, attention to detail, critical thinking ability, and creativity.
- Excellent written and verbal communication skills.
- Prior experience working in the transportation or logistics industries is a plus.
- Some travel possible if not local to San Diego, CA.

The description provided above is not intended to be an exhaustive list of all job duties, responsibilities, and requirements.
Duties, responsibilities, and requirements may change over time and according to business need.

Worker Type: Employee