
Hadoop Engineer

Salary undisclosed



Responsibilities:

• Design, develop, and maintain large-scale data processing systems using Hadoop ecosystem tools.

• Optimize and troubleshoot Hadoop clusters to ensure high availability and performance.

• Collaborate with data engineers, analysts, and other stakeholders to support data pipelines and analytical use cases.

• Implement best practices in data governance, security, and compliance.

• Continuously evaluate new big data technologies to improve efficiency and scalability.

Requirements:

• Minimum 3–5 years of experience working with Hadoop and its ecosystem (e.g., HDFS, Hive, Pig, HBase, Spark, Oozie).

• Strong proficiency in programming languages such as Java, Scala, or Python.

• Experience with data pipeline design and performance tuning in distributed environments.

• Solid understanding of database systems, ETL processes, and big data architecture.

• Ability to work in a collaborative and fast-paced environment.

• Familiarity with cloud platforms (e.g., AWS, GCP, Azure) is a plus.