What are the responsibilities and job description for the Big Data Engineer – Hadoop, PySpark, Apache Kafka position at NuStar Technologies?
Must Have Technical/Functional Skills
Primary Skill: Hadoop ecosystem (HDFS, Hive, Spark), PySpark, Python, Apache Kafka
Secondary: UI – Angular.
Experience: Minimum 9 years
Roles & Responsibilities
Architectural Leadership:
- Define end-to-end architecture for data platforms, streaming systems, and web applications.
- Ensure alignment with enterprise standards, security, and compliance requirements.
- Evaluate emerging technologies and recommend adoption strategies.
- Design and implement data ingestion, transformation, and processing pipelines using Hadoop, PySpark, and related tools.
- Optimize ETL workflows for large-scale datasets and real-time streaming.
- Integrate Apache Kafka for event-driven architectures and messaging.
- Build and maintain backend services using Python and microservices architecture.
- Develop responsive, dynamic front-end applications using Angular.
- Implement RESTful APIs and ensure seamless integration between components.
- Work closely with product owners, business analysts, and DevOps teams.
- Mentor junior developers and data engineers.
- Participate in agile ceremonies, code reviews, and design discussions.
Required Skills & Qualifications:
- Strong experience with Hadoop ecosystem (HDFS, Hive, Spark).
- Proficiency in PySpark for distributed data processing.
- Advanced programming skills in Python.
- Hands-on experience with Apache Kafka for real-time streaming.
- Frontend development using Angular (TypeScript, HTML, CSS).
- Expertise in designing scalable, secure, and high-performance systems.
- Familiarity with microservices, API design, and cloud-native architectures.
- Knowledge of CI/CD pipelines and containerization (Docker/Kubernetes).
- Exposure to cloud platforms (AWS, Azure, GCP).
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
- Minimum 9 years in software development, with at least 4 years in architecture and Big Data technologies.
- Experience in the BFSI domain or with large-scale enterprise systems.
- Understanding of data governance, security, and compliance standards.
- Strong analytical and problem-solving abilities.
- Excellent communication and leadership skills.
- Ability to thrive in a fast-paced, agile environment.
Benefits:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternity & Parental Leave.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Salary Range: $100,000 - $120,000 a year