Job Description of Databricks Developer
4+ Years Relevant Experience
Key Responsibilities:
- Design, develop, and optimize ETL/ELT data pipelines using Apache Spark on Databricks.
- Collaborate with data engineers, analysts, and business stakeholders to understand data requirements.
- Implement scalable data processing solutions on Azure Databricks or AWS Databricks.
- Write efficient Spark code in PySpark, Scala, or SQL within the Databricks environment.
- Integrate Databricks with various data sources such as Azure Data Lake Storage, Amazon S3, Delta Lake, and SQL Server.
- Develop and maintain Delta Lake tables and handle versioned data (e.g., time travel).
- Monitor and troubleshoot jobs, workflows, and cluster performance.
- Apply CI/CD best practices for Databricks notebooks using tools like Git, Azure DevOps, or Jenkins.
- Ensure data security and governance policies are followed, including role-based access control and audit logging.
Required Skills:
- 5+ years of experience in Data Engineering or Big Data development.
- Strong expertise in Databricks, Apache Spark, and Delta Lake.
- Hands-on experience with PySpark, SQL, and optionally Scala.
- Good understanding of cloud platforms (Azure or AWS), especially storage, compute, and networking components.
- Familiarity with Databricks Workflows, Jobs, and Cluster management.
- Experience in data modeling, data warehousing, and data lakehouse architecture.
- Knowledge of version control systems (Git) and CI/CD pipelines.
- Ability to work in Agile development environments and manage multiple tasks concurrently.
Our Hiring Process
- Screening (HR Round)
- Technical Round 1
- Technical Round 2
- Final HR Round