What are the responsibilities and job description for the Senior Data Engineer position at Gro Intelligence?
What You’ll Do
Below are just some of the challenges and problems we are trying to solve here at Gro. If you can help us with one or more, we would love to speak with you - particularly if you're eager to see your work have a bigger impact on the world.
- Evolve how multi-dimensional data is represented, updated, transferred, and served.
- Design and implement a flexible, scalable architecture to support the data lifecycle.
- Work with specialists across multiple fields to continue refining the data platform to meet evolving serving needs and new data sources.
Who You Are
- Someone who loves to learn, is willing to experiment, and remains adaptable.
- Self-motivated to drive projects to completion.
- Someone who is passionate about building, tuning, and scaling systems.
- Open-minded about languages/tools/frameworks, and able to make good decisions about which to use (and when) to solve a particular problem.
- A great team player; someone with a point of view who knows how to humbly express it.
Qualifications
- Experience with data lifecycle design, data integrations, data migrations, data quality, data governance, and restructuring data in complex systems.
- Solid understanding of different data storage technologies (RDBMS, columnar, time-series, graph, MPP, Parquet, SQLite, etc.).
- 7 years of experience with a combination of data modeling and database operations.
- 7 years of experience developing scalable data platforms at terabyte or petabyte scale.
- Ability to create and clearly communicate designs and ideas.
- Experience with enterprise cloud infrastructure and services (we use AWS).
- Strong proficiency in Python (or C, C++, or Rust).
- BS in Computer Science, a related technical field, or equivalent practical experience.
Nice to have
- Experience with Postgres and RDS.
- Experience with NoSQL/Non-traditional databases (e.g. columnar, document, time-series, graph).
- Experience with object stores (S3, Azure Blob Storage, etc.).
- Experience with data pipeline technologies such as Kafka, RabbitMQ, Flink, etc.
- Experience with geospatial or highly pixelated data sets.
- Experience with data sets larger than traditional relational databases can handle.
- Experience in technical leadership, leading project teams, and setting technical direction.
Compensation
The salary range for this role is expected to be $140,000 - $225,000. Other compensation for the role includes equity, a generous PTO policy and health, vision & dental insurance.