Job ID: NC-749745 (910791106)
Hybrid/Local Databricks Administrator/Architect (Certified) with SQL Analytics, Delta Lake, Azure, SSIS, AWS, GCP, Python, PySpark, Git, and CDC Experience
Location: Raleigh, NC (NCDOT)
Duration: 12 Months
Skills:
*Extensive hands-on experience implementing Lakehouse architecture using the Databricks Data Engineering platform, SQL Analytics, Delta Lake, and Unity Catalog (Required, 5 years)
*Strong understanding of relational and dimensional modeling (Required, 5 years)
*Proficiency in Python, SQL, and PySpark, with the ability to prioritize performance, security, scalability, and robust data integration (Required, 6 years)
*Experience implementing serverless real-time/near-real-time architectures using a cloud tech stack (Azure, AWS, or GCP) and Spark technologies (Streaming and ML) (Required, 2 years)
*Experience configuring Azure infrastructure (networking), architecting and building large data ingestion pipelines, and conducting data migrations using ADF or similar technologies (Required, 4 years)
*Experience working with SQL Server features such as SSIS and CDC (Required, 7 years)
*Experience with the Databricks platform, its security features, Unity Catalog, and data access control mechanisms (Required, 2 years)
*Experience with Git code versioning software (Required, 4 years)
*Databricks certifications (Desired)
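For context, the following is a minimal, hypothetical sketch of how several of the skills above fit together: landing SQL Server CDC output and applying it to a Delta Lake table with a PySpark MERGE. The landing path, the main.sales.orders table, and the order_id key are illustrative assumptions, not details from this posting.

```python
# Hypothetical sketch: apply SQL Server CDC change rows to a Delta table.
# All paths, table names, and columns are illustrative assumptions.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Assume an upstream extract (e.g., an ADF copy activity) landed CDC rows
# as Parquet. SQL Server CDC operation codes: 1=delete, 2=insert,
# 3=update (before image), 4=update (after image); the before image is
# not needed for a merge.
changes = (
    spark.read.parquet("/mnt/landing/orders_cdc/")   # hypothetical landing path
         .filter("`__$operation` != 3")
)

target = DeltaTable.forName(spark, "main.sales.orders")  # Unity Catalog 3-level name

(target.alias("t")
       .merge(changes.alias("c"), "t.order_id = c.order_id")
       .whenMatchedDelete(condition="c.`__$operation` = 1")
       .whenMatchedUpdateAll(condition="c.`__$operation` = 4")
       .whenNotMatchedInsertAll(condition="c.`__$operation` = 2")
       .execute())
```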
Job Description:
*Provide mentorship, guidance, overall knowledge sharing, and support to team members, promoting continuous learning and development.
*Oversee the design, implementation, and maintenance of Databricks clusters.
*Ensure the platform’s scalability, performance, and security.
*Provide escalated support and troubleshooting to users.
*Oversee maintenance of role-based access to data and features in the Databricks Platform using Unity Catalog (an illustrative grant sketch appears after this list).
*Review cluster health checks and best-practices implementation.
*Review and maintain documentation for users and administrators.
*Design and implement tailored data solutions to meet customer needs and use cases, spanning data ingestion from APIs, data pipeline development, analytics, and beyond, within a dynamically evolving technical stack.
*Work on projects involving on-prem data ingestion into Azure using ADF.
*Build data pipelines based on the medallion architecture that clean, transform, and aggregate data from disparate sources (an illustrative sketch follows this list).
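Below is a minimal sketch of the medallion pattern named in the last bullet, assuming a Databricks workspace with Unity Catalog and Auto Loader available. The ADLS path, the catalog/schema/table names, and the traffic-sensor columns are hypothetical.

```python
# Hypothetical medallion (bronze/silver/gold) pipeline sketch in PySpark.
# The ADLS path, catalog/schema/table names, and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: incrementally land raw JSON files with Databricks Auto Loader.
bronze_q = (
    spark.readStream.format("cloudFiles")
         .option("cloudFiles.format", "json")
         .load("abfss://landing@stgacct.dfs.core.windows.net/traffic/")  # hypothetical path
         .writeStream
         .option("checkpointLocation", "/chk/bronze_traffic")
         .trigger(availableNow=True)   # drain available files, then stop
         .toTable("main.bronze.traffic_raw")
)
bronze_q.awaitTermination()

# Silver: deduplicate, type, and filter the raw data.
silver = (
    spark.read.table("main.bronze.traffic_raw")
         .dropDuplicates(["sensor_id", "event_ts"])
         .withColumn("event_ts", F.to_timestamp("event_ts"))
         .filter(F.col("sensor_id").isNotNull())
)
silver.write.mode("overwrite").saveAsTable("main.silver.traffic_clean")

# Gold: hourly aggregates for analytics consumers.
(silver.groupBy("sensor_id", F.window("event_ts", "1 hour"))
       .agg(F.avg("speed_mph").alias("avg_speed_mph"))
       .write.mode("overwrite").saveAsTable("main.gold.traffic_hourly"))
```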
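And a sketch of the role-based access bullet above, assuming the Unity Catalog securables from the pipeline sketch exist and that data-analysts and data-engineers are account groups (both names are hypothetical). The grants use standard Unity Catalog SQL issued through spark.sql().

```python
# Hypothetical sketch of role-based access control with Unity Catalog.
# Group names and securables are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Analysts: read-only access to the gold layer.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data-analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.gold TO `data-analysts`")
spark.sql("GRANT SELECT ON SCHEMA main.gold TO `data-analysts`")

# Engineers: read/write access to the silver and gold layers.
for schema in ("main.silver", "main.gold"):
    spark.sql(f"GRANT USE SCHEMA, SELECT, MODIFY ON SCHEMA {schema} TO `data-engineers`")

# Verify the effective grants on one securable.
spark.sql("SHOW GRANTS ON SCHEMA main.gold").show(truncate=False)
```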
