Minimum 5+ years of working experience as a Databricks Developer
Minimum 3+ years of working experience with PySpark and AWS
5+ years relevant and progressive data engineering experience
Deep technical knowledge of and experience with Databricks, Python, Scala, and the Microsoft Azure architecture and platform, including Synapse, ADF (Azure Data Factory) pipelines, and Synapse stored procedures
Hands-on experience building data pipelines across a variety of source and target locations (e.g., Databricks, Synapse, SQL Server, Data Lake, file-based sources, and SQL and NoSQL databases)
Experience with engineering practices such as development, code refactoring, leveraging design patterns, CI/CD, and building highly scalable data applications and processes
Experience developing batch ETL pipelines; real-time pipelines are a plus
Knowledge of advanced data engineering concepts such as dimensional modeling, ETL, data governance, and data warehousing involving structured and unstructured data
Thorough knowledge of Synapse and SQL Server including T-SQL and stored procedures
Experience working with and supporting cross-functional teams in a dynamic environment
A successful history of manipulating, processing, and extracting value from large, disconnected datasets
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
Advanced working knowledge of SQL, including experience with relational databases and query authoring, as well as familiarity with a variety of databases
Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices
Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytics solutions
Develop and deliver documentation on data engineering capabilities, standards, and processes; participate in coaching, mentoring, design reviews and code reviews
Partner with business analysts and solutions architects to develop technical architectures for strategic enterprise projects and initiatives
Solve complex data problems to deliver insights that help the organization achieve its goals
Knowledge and understanding of Boomi is a plus
As this is a 24x7 production support project, the resource should be willing to work in shifts, including 7 night shifts per month