What are the responsibilities and job description for the Hadoop DBA - Lead position at Visionary Innovative Technology Solutions LLC?
Hadoop DBA - Lead
Charlotte, NC
Full-time Hire - Onsite
Job Summary:
We are seeking a highly skilled Hadoop Technical Lead to provide technical leadership and expertise within our Hadoop environment, specifically focused on data management and analysis. The ideal candidate will possess strong Hadoop DBA knowledge and a deep understanding of various Big Data technologies, ensuring seamless integration and optimal performance within the Hadoop ecosystem.
The Hadoop Technical Lead is responsible for end‑to‑end ownership of Hadoop / Big Data platforms, providing technical leadership, architecture guidance, and advanced operational support.
Responsibilities:
- Own the overall Hadoop platform architecture, design, and operational strategy
- Act as the technical escalation (L3) for complex production issues and performance bottlenecks
- Provide architecture-level guidance on Hadoop ecosystem components and integrations
- Lead platform assessments, optimization initiatives, and continuous improvement
- Design and enforce best practices for cluster configuration, resource management, and data layout
- Plan and execute Hadoop version upgrades, patching, and distribution migrations
- Ensure compliance with enterprise security, audit, and data governance policies
- Review and approve access models, service accounts, and encryption standards
- Establish proactive monitoring, alerting, and observability mechanisms
- Work closely with Data Engineers, BI teams, Application teams, and Infrastructure teams
- Participate in design reviews, CABs, and technical governance forums
- Translate business requirements into scalable technical solutions
- Provide technical guidance and support for Hadoop-related projects, ensuring best practices are followed
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications
- Optimize and maintain Hadoop clusters, ensuring high availability and performance
- Implement and manage data ingestion processes using tools such as Flume and Kafka
- Utilize HDFS, MapReduce, Hive, Impala, HBase, and Spark/Spark Streaming for data processing and analysis
- Monitor and troubleshoot Hadoop ecosystem components, ensuring system reliability and efficiency
- Stay updated with the latest trends and advancements in Big Data technologies, and recommend improvements
- Document architecture designs, processes, and best practices for future reference
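As an illustration of the proactive monitoring and alerting responsibility above, here is a minimal Python sketch of an HDFS capacity check. It assumes a JSON snapshot of the NameNode's FSNamesystem JMX bean (the `CapacityTotal` / `CapacityRemaining` field names come from that bean); the threshold and the alert wiring are purely illustrative.

```python
# Hypothetical HDFS capacity check: the input dict stands in for a
# snapshot of the NameNode FSNamesystem JMX bean. Threshold is illustrative.

def check_hdfs_capacity(fs_namesystem: dict, warn_pct: float = 20.0) -> str:
    """Return 'OK' or a warning string based on remaining HDFS capacity."""
    total = fs_namesystem["CapacityTotal"]          # bytes configured
    remaining = fs_namesystem["CapacityRemaining"]  # bytes still free
    pct_free = 100.0 * remaining / total
    if pct_free < warn_pct:
        return f"WARN: only {pct_free:.1f}% HDFS capacity remaining"
    return "OK"

# Example with a synthetic snapshot (values in bytes):
snapshot = {"CapacityTotal": 100 * 10**12, "CapacityRemaining": 12 * 10**12}
print(check_hdfs_capacity(snapshot))  # WARN: only 12.0% HDFS capacity remaining
```

In practice a check like this would be scheduled by a monitoring agent and routed to an alerting system rather than printed.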
Mandatory Skills:
- Strong knowledge of and experience with Hadoop architecture and internals.
- Proficiency in HDFS, MapReduce, Hive, Impala, HBase, Flume, ZooKeeper, Spark/Spark Streaming, and Kafka.
- Deep understanding of the Hadoop ecosystem and its components.
- Experience in designing and implementing scalable data architectures.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Cluster monitoring, troubleshooting, and performance tuning.
- Security implementation in Hadoop ecosystems.
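For candidates unfamiliar with the MapReduce skill listed above, here is a plain-Python sketch of the classic word-count pattern. No Hadoop cluster is involved; the mapper/reducer split and the sort-then-group step simply mirror what a Hadoop Streaming job does, and all names here are illustrative.

```python
# Plain-Python illustration of the MapReduce word-count pattern:
# map emits (word, 1) pairs, a sort stands in for the shuffle phase,
# and reduce sums counts per key.
from itertools import groupby

def mapper(line: str):
    for word in line.split():
        yield (word.lower(), 1)

def reducer(word, counts):
    return (word, sum(counts))

def word_count(lines):
    # "Shuffle/sort": gather all mapper output and sort by key.
    pairs = sorted(kv for line in lines for kv in mapper(line))
    return dict(
        reducer(word, (count for _, count in group))
        for word, group in groupby(pairs, key=lambda kv: kv[0])
    )

print(word_count(["Hadoop hive", "hive spark hive"]))
# {'hadoop': 1, 'hive': 3, 'spark': 1}
```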
Preferred Skills:
- Familiarity with cloud-based Hadoop solutions (e.g., AWS EMR, Azure HDInsight).
- Experience with data modeling and ETL processes.
- Knowledge of machine learning frameworks and tools.
- Understanding of data management and analysis.
- Automation using Python, Shell, Ansible, or similar tools.
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in Hadoop or Big Data technologies are a plus.