What are the responsibilities and job description for the Senior DevOps Engineer position at Planet Pharma?
Position Summary
As a Senior DevOps Engineer you will be the primary builder and operator of cloud-native Digital Pathology infrastructure. You will focus on automating the secure, scalable hosting of image management systems (VMS) and AI workloads primarily within AWS, while managing connectivity to enterprise applications in Microsoft Azure. You will own the “Infrastructure as Code” (IaC) strategy, ensuring that the massive storage requirements of Whole Slide Imaging (WSI) and the burst-compute needs of AI inference are handled with efficiency, security, and strict GxP compliance. This role acts as the bridge between on-premise scientific computing and the limitless scale of the cloud.
Responsibilities
Design and implement secure, scalable cloud architecture on AWS (S3, EC2, Batch, Lambda) using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation
Automate intelligent storage lifecycle and tiering policies (for example, S3 Intelligent-Tiering and Glacier) to manage petabyte-scale pathology image archives cost-effectively while ensuring rapid retrieval for clinical review
Build and maintain robust CI/CD pipelines (for example, Jenkins, GitHub Actions, or Azure DevOps) to automate testing and deployment of AI models, integration scripts, and application updates
Implement comprehensive observability and reliability practices using monitoring and alerting tools (CloudWatch, Datadog, Splunk) to track system health, API latency, and data pipeline performance, ensuring high availability for clinical services
Manage secure cross-cloud networking and API connectivity between the AWS data plane and Azure-based enterprise systems (such as LIMS, billing, and ESB), ensuring seamless identity management and data flow
Enforce security-by-design principles by managing IAM roles, encryption keys (KMS), and network security controls to maintain compliance with HIPAA, GDPR, and GxP standards
Manage containerized workloads using Docker and Kubernetes to support portable AI inference and microservices that scale dynamically based on lab volume
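To make the storage-tiering responsibility above concrete, here is a minimal sketch of the kind of S3 lifecycle policy it describes, written as the boto3-style dictionary that `put_bucket_lifecycle_configuration` accepts. The key prefix and day thresholds are illustrative assumptions, not values from the posting; a small helper shows how the rules map a slide's age to a storage class.

```python
# Sketch: lifecycle rules that tier aging Whole Slide Imaging (WSI) files
# from S3 Standard to Intelligent-Tiering and then Glacier. The "wsi/"
# prefix and the 30/365-day cutoffs are hypothetical examples.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-wsi-archives",
            "Filter": {"Prefix": "wsi/"},  # hypothetical key prefix
            "Status": "Enabled",
            "Transitions": [
                # Move slides out of S3 Standard once active review cools off.
                {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"},
                # Deep-archive slides kept only for compliance retention.
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

def transition_for_age(config: dict, age_days: int) -> str:
    """Return the storage class an object of the given age would occupy."""
    storage = "STANDARD"
    for rule in config["Rules"]:
        if rule["Status"] != "Enabled":
            continue
        # Apply transitions in order of age threshold; the last one that
        # has elapsed wins, mirroring how S3 evaluates lifecycle rules.
        for t in sorted(rule["Transitions"], key=lambda t: t["Days"]):
            if age_days >= t["Days"]:
                storage = t["StorageClass"]
    return storage
```

In practice the same dictionary would be passed to `boto3` (`s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=lifecycle_config)`) or expressed as a Terraform `aws_s3_bucket_lifecycle_configuration` resource; the cost/retrieval trade-off lives entirely in the day thresholds and storage classes chosen.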
Education, Experience & Qualifications
Bachelor’s Degree or equivalent work experience required
5 or more years of experience in DevOps or Cloud Engineering with a primary focus on AWS environments required
Previous experience managing Azure resources in Terraform preferred
Extensive experience with Infrastructure as Code (IaC), specifically Terraform (preferred) or AWS CloudFormation
Proven track record of managing hybrid cloud networking (Direct Connect/VPN) and cross-cloud integrations, including connecting AWS services to Azure AD or API Management
Experience in regulated industries (healthcare, finance, biotech) managing sensitive data (PHI/PII) is strongly preferred
Hands-on experience with container orchestration (EKS, ECS, or Kubernetes) and serverless computing
AWS mastery with deep knowledge of core services, including S3 (object locking and lifecycle), EC2 and Auto Scaling, VPC networking, IAM, and Lambda
Proficiency in Python, Bash, or Go for automation and glue code
Expertise in building CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or Azure DevOps
Strong understanding of encryption standards (TLS, AES), secrets management (Vault or Secrets Manager), and least-privilege access control
Functional knowledge of Azure AD, Azure Functions, or Azure API Management to support integration tasks
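As a rough illustration of the least-privilege and encryption requirements listed above, the following sketch builds an IAM policy document of the kind this role would manage: read-only access to a single pathology bucket, plus an explicit deny on any upload not encrypted with a KMS key. The bucket ARN is a placeholder, not a real resource.

```python
import json

# Hypothetical bucket ARN used purely for illustration.
BUCKET_ARN = "arn:aws:s3:::example-pathology-archive"

# Sketch: a least-privilege policy pairing a narrow Allow with an explicit
# Deny that enforces server-side encryption via KMS (SSE-KMS), in line with
# the HIPAA/GxP controls named in the posting.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadSlidesOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [BUCKET_ARN, f"{BUCKET_ARN}/*"],
        },
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": f"{BUCKET_ARN}/*",
            "Condition": {
                # Reject any PutObject that does not request SSE-KMS.
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"
                }
            },
        },
    ],
}

policy_json = json.dumps(least_privilege_policy, indent=2)
```

The explicit Deny is deliberate: in IAM evaluation a Deny overrides any Allow attached elsewhere, so the encryption requirement holds even if a broader role grants `s3:PutObject`.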