Job Description: Big Data Engineer
5+ Years Relevant Experience
Key Responsibilities:
- Develop scalable and efficient data-driven applications using Hadoop, Spark, Hive, Impala, and NiFi in an on-premises environment.
- Design and implement high-performance Spark jobs using Java.
- Build data pipelines and compute tiers leveraging Hadoop, Spark, and Impala.
- Review and enhance code quality for Hadoop and Spark-based batch jobs.
- Collaborate with cross-functional teams to deliver robust software solutions that meet business needs.
- Mentor junior engineers and serve as a technical point of contact for Hadoop ecosystem technologies.
- Evaluate and recommend new tools and technologies to improve performance, scalability, and system reliability.
- Ensure solutions follow best practices and are maintainable, scalable, and optimized for performance.
Required Skills:
- Strong Java development experience, especially with Spring Boot.
- Experience in full-stack software development with a data-intensive focus.
- Expertise in big data technologies such as Hadoop, NiFi, Hive, Impala, and Spark.
- Deep understanding of SQL (preferably Oracle).
- Solid knowledge of object-oriented programming (OOP) principles.
- Familiarity with Hadoop internals is a strong plus.
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication skills.
- Ability to mentor junior developers and lead technical improvements.
Key Skills Summary
- Hadoop
- NiFi
- Hive
- Spark
- Java
Our Hiring Process
- Screening (HR Round)
- Technical Round 1
- Technical Round 2
- Final HR Round