6 to 8 Years of Relevant Experience
About the Role
We’re looking for a Big Data Engineer to design, develop, and maintain scalable data systems and pipelines. The role requires strong expertise with distributed data processing tools and the ability to ensure high-quality, reliable data for analytics and applications.
Key Responsibilities
- Design, develop, and maintain scalable big data systems and pipelines.
- Implement and optimize data processing workflows using Hadoop, Spark, and Hive.
- Build and maintain ETL processes to ensure data availability, accuracy, and quality for downstream systems.
Essential Skills
- Strong hands-on experience with Hadoop, Spark, and Hive.
- Proven ability to optimize the processing of large-scale datasets.
- Experience in implementing scalable data processing frameworks.
Desirable Skills
- Experience in developing and maintaining ETL processes.
- Strong understanding of data validation, quality, and governance.