What are the responsibilities and job description for the Data Scientist position at Advance Auto Parts?
Job Description
Role Summary
We are seeking an experienced Data Scientist with strong expertise in data science and machine learning engineering, and hands-on experience designing and deploying ML solutions in production. This role focuses on building scalable ML solutions, productionizing models, and enabling robust ML platforms for enterprise-grade deployments.
This position is 4 days in office, 1 day remote per week, based at our corporate headquarters in Raleigh, North Carolina (North Hills).
Key Responsibilities
- Build ML Models: Design and implement predictive and prescriptive models for regression, classification, and optimization problems. Apply advanced techniques such as structural time series modeling and boosting algorithms (e.g., XGBoost, LightGBM).
- Train and Tune Models: Develop and tune machine learning models using Python, PySpark, TensorFlow, and PyTorch.
- Collaboration & Communication: Work closely with stakeholders to understand business challenges, translate them into data science solutions, and own the work end to end. Collaborate with cross-functional teams to ensure successful integration of models into business processes.
- Monitoring & Visualization: Rapidly prototype and test hypotheses to validate model approaches. Build automated workflows for model monitoring and performance evaluation. Create dashboards using tools like Databricks and Palantir to visualize key model metrics such as model drift and Shapley values.
- Productionize ML: Build repeatable paths from experimentation to deployment (batch, streaming, and low-latency endpoints), including feature engineering, training, evaluation, and release.
- Own ML Platform: Stand up and operate core platform components—model registry, feature store, experiment tracking, artifact stores, and standardized CI/CD for ML.
- Pipeline Engineering: Author robust data/ML pipelines (orchestrated with Step Functions / Airflow / Argo) that train, validate, and release models on schedules or events.
- Observability & Quality: Implement end-to-end monitoring, data validation, model/drift checks, and alerting against SLAs/SLOs.
- Governance & Risk: Enforce model/version lineage, reproducibility, approvals, rollback plans, auditability, and cost controls aligned to enterprise policies.
- Partner & Mentor: Collaborate with on-shore/off-shore teams; coach data scientists on packaging, testing, and performance; contribute to standards and reviews.
- Hands-on Delivery: Prototype new patterns; troubleshoot production issues across data, model, and infrastructure layers.
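To make the modeling responsibilities above concrete, here is a minimal sketch (not part of the posting) of training and evaluating a gradient-boosting classifier in the spirit of the XGBoost/LightGBM work named above. The synthetic dataset, hyperparameters, and use of scikit-learn's built-in booster are all illustrative assumptions:

```python
# Illustrative sketch only: train a boosting classifier on synthetic data
# and report a holdout metric. Data and hyperparameters are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature table.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Boosted trees: an additive ensemble of shallow decision trees.
model = GradientBoostingClassifier(
    n_estimators=200, learning_rate=0.05, max_depth=3, random_state=42
)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```

In production the same pattern would typically be wrapped in a pipeline that logs the run, registers the model, and promotes it through environments.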
Required Qualifications
- Education: Bachelor’s degree in Computer Science, Information Technology, Data Science, or related field.
- Programming: 5 years' experience with Python (pandas, PySpark, scikit-learn; familiarity with PyTorch/TensorFlow helpful) and Bash; experience with Docker.
- ML Experimentation: Proven ability to design and implement predictive and prescriptive models for regression, classification, and optimization problems, applying advanced techniques such as structural time series modeling and boosting algorithms (e.g., XGBoost, LightGBM).
- ML Tooling: 5 years' experience with SageMaker (training, processing, pipelines, model registry, endpoints) or equivalents (Kubeflow, MLflow/Feast, Vertex AI, Databricks ML).
- Pipelines & Orchestration: 5 years' experience with Databricks Asset Bundles (DABs), Airflow, or Step Functions; event-driven designs with EventBridge/SQS/Kinesis.
- Cloud Foundations: 3 years' experience with AWS/Azure/GCP across services such as ECR/ECS, Lambda, API Gateway, S3, Glue/Athena/EMR, RDS/Aurora (PostgreSQL/MySQL), DynamoDB, CloudWatch, IAM, VPC, WAF.
- Snowflake Foundations: Warehouses, databases, schemas, stages, Snowflake SQL, RBAC, UDFs, and Snowpark.
- CI/CD: 3 years' hands-on experience with CodeBuild/CodePipeline or GitHub Actions/GitLab CI; blue/green, canary, and shadow deployments for models and services.
- Feature Pipelines: Proven experience with batch/stream pipelines, schema management, partitioning, and performance tuning; Parquet/Iceberg best practices.
- Testing & Monitoring: Unit/integration tests for data and models, contract tests for features, reproducible training; data drift/performance monitoring.
- Operational Mindset: Incident response for model services, SLOs, dashboards, runbooks; strong debugging across data, model, and infra layers.
- Soft Skills: Clear communication, a collaborative mindset, and a bias toward automation and documentation.
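The drift-monitoring expectation in the qualifications above can be illustrated with a population stability index (PSI) check, a common way to compare a live feature distribution against its training baseline. This is a minimal, standard-library-only sketch; the bin count and the conventional 0.2 alert threshold are assumptions, not from the posting:

```python
# Illustrative sketch: data-drift check via Population Stability Index (PSI).
# Bin count (10) and alert thresholds are common conventions, not from the posting.
import math
import random

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of one numeric feature."""
    lo, hi = min(expected), max(expected)
    # Equal-width bins over the baseline range...
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0] = float("-inf")   # ...widened so live values outside
    edges[-1] = float("inf")   # the baseline range are still counted.

    def frac(sample, lo_e, hi_e):
        count = sum(1 for v in sample if lo_e <= v < hi_e)
        return max(count / len(sample), 1e-6)  # floor avoids log(0)

    total = 0.0
    for lo_e, hi_e in zip(edges, edges[1:]):
        e, a = frac(expected, lo_e, hi_e), frac(actual, lo_e, hi_e)
        total += (a - e) * math.log(a / e)
    return total

rng = random.Random(0)
baseline = [rng.gauss(0.0, 1.0) for _ in range(5000)]
stable = [rng.gauss(0.0, 1.0) for _ in range(5000)]
shifted = [rng.gauss(0.8, 1.0) for _ in range(5000)]  # simulated drift

print(f"PSI (no drift): {psi(baseline, stable):.3f}")
print(f"PSI (shifted):  {psi(baseline, shifted):.3f}")
```

In practice a check like this would run inside the scheduled monitoring pipeline, with PSI above a chosen threshold raising an alert or triggering retraining.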
Additional Qualifications
- Experience in retail/manufacturing is preferred.
California Residents click below for Privacy Notice:
https://jobs.advanceautoparts.com/us/en/disclosures