What are the responsibilities and job description for the Sr. Python Data Engineer position at ExecutivePlacements.com?
Our client is seeking a genuinely senior Python/Data Engineer with asset management or investment domain experience.
Most rejected candidates' experience leans toward Python-based ETL development on traditional technologies rather than large-scale distributed data processing. We should target candidates with strong, proven expertise in PySpark and Databricks for this position.
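To make that screening distinction concrete, here is a minimal sketch (column names and figures are hypothetical): the same aggregation written as single-machine pandas ETL, with the distributed PySpark equivalent shown in comments.

```python
import pandas as pd

# Traditional single-machine ETL: the whole dataset fits in one process's memory.
trades = pd.DataFrame({
    "desk": ["rates", "rates", "credit"],
    "notional": [5_000_000.0, 2_000_000.0, 1_000_000.0],
})
per_desk = trades.groupby("desk", as_index=False)["notional"].sum()

# A distributed PySpark equivalent of the same aggregation would look like:
#   spark.read.parquet("s3://bucket/trades") \
#        .groupBy("desk").agg(F.sum("notional"))
# and keeps working when the data no longer fits on a single machine.
```

Candidates with only the pandas side of this picture are the profile being rejected; the role wants hands-on fluency with the distributed side as well.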
Job Title: Python/Data Engineer
Location: Irvine or Los Angeles, CA - onsite work is mandatory
Length: 6-12 months
Relocation is acceptable for candidates who can be onsite from day 1.
Must Have
Strong working experience in Python, including pandas, NumPy, and the FastAPI/Flask frameworks.
Good knowledge of cloud services, preferably AWS.
Strong database knowledge, with the ability to write and comprehend SQL queries.
Knowledge of GraphQL.
Experience building APIs, calculation engines, and batch and real-time modules supporting web applications used by trading or portfolio management teams.
Experience building real-time applications for structured, rates, corporate, or municipal fixed income desks. Experience with Python programming; the AWS stack (EKS, API Gateway, Lambda, Redis); databases (PostgreSQL, S3); and integration with market data providers such as Bloomberg and TradeWeb.
Hands-on experience building at least one ETL pipeline.
10-12 years of experience in the asset management or financial services industry.
Strong communication skills, with the ability to coordinate with various stakeholders.
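As a rough calibration for the "calculation engine" and pandas/NumPy requirements above, this is a minimal sketch of the kind of helper the role involves (function name, columns, and sample figures are all hypothetical): a market-value-weighted portfolio duration computed over a small holdings table.

```python
import pandas as pd

def weighted_portfolio_duration(holdings: pd.DataFrame) -> float:
    """Market-value-weighted duration across a book of fixed income holdings."""
    mv = holdings["market_value"]
    return float((holdings["duration"] * mv).sum() / mv.sum())

# Hypothetical two-position book: (4 * 1M + 8 * 3M) / 4M = 7.0 years.
holdings = pd.DataFrame({
    "position": ["Bond A", "Bond B"],
    "market_value": [1_000_000.0, 3_000_000.0],
    "duration": [4.0, 8.0],
})
```

In production such calculations would sit behind a FastAPI/Flask endpoint and feed the real-time modules used by the trading and portfolio management teams.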
Nice To Have
Knowledge of JupyterLab.
Knowledge of Apache Airflow or another workflow management tool.
Knowledge of DevOps is a plus.
Strong learning mindset, with willingness to perform POCs on new technologies and services related to data science platforms.