Job ID: NC-758398 (911990314)

Hybrid/Local Data Engineer (15+) with SQL, data modeling/warehousing, ETL/ELT pipelines, Python, Java, Scala/NodeJS, AWS, Redshift, S3, Lambda, Synapse, Azure DevOps, Spark, Databricks, Snowflake, Hadoop, CI/CD, Git, automation experience

Location: Durham NC (NCDHHS-NCFAST)
Duration: 12 Months
Position: 1(2)

Skills:
Experience with Data Engineering Required 5 Years
Experience with SQL Required 8 Years
Experience with data modeling Required 4 Years
Experience with data warehousing Required 4 Years
Experience building ETL/ELT pipelines Required 4 Years
Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS Required 3 Years
Experience with AWS and/or Azure technologies such as Redshift, S3, Lambda, Synapse, or Azure DevOps Desired 2 Years
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases) Desired 3 Years
Experience with big data technologies such as Spark, Databricks, Snowflake, Hadoop, or similar Desired 3 Years
Experience with DevOps and CI/CD (Git and automation tools) Desired 5 Years

Description:
DHHS-ITD is seeking a Senior Data Engineer to support digital transformation processes for the Child Welfare System as it migrates to PATH NC.

The Department of Health and Human Services (DHHS) requires the assistance of a contract resource to serve as a Senior Data Engineer. The primary purpose of this position is to support the development, implementation, and future maintenance of the new Child Welfare System, PATH NC. This role will be instrumental in converting data from existing legacy systems and county sources to the new Salesforce platform.

Key Responsibilities:
Learn and understand a broad range of DHHS’s data resources and their business application domains.
Drive digital transformation initiatives across the program.
Recognize and adopt best practices in data transformation: data integrity, test design, analysis, validation, and documentation.
Design, implement, and support a platform providing secured access to large datasets.
Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.
Provide advanced data support to data engineering and ETL teams.
Extract and evaluate data to ensure accurate mapping for end-user consumption.
Collaborate with technical support teams and business process owners to ensure seamless operations service delivery.
Develop processes and procedures to enhance the quality and efficiency of quality control checks in the production environment.
Create and implement plans to improve data quality by identifying causes of errors or discrepancies and devising effective solutions.
Establish robust procedures for data collection and analysis.
Programmatically manipulate and analyze large datasets, build data pipelines, and automate tasks.
Design and maintain efficient data processing flows.
Support test readiness reviews, test planning work groups, and pre- and post-test briefings.
Communicate issues effectively in writing and orally to executive audiences, business stakeholders, and technical teams.
Support the technical and operational architecture of complex enterprise systems.
Tune application and query performance using profiling tools and SQL.
Keep up to date with advances in big data technologies, and run pilots to design a data architecture that scales with growing data volumes using Azure and/or AWS public cloud tools.
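To illustrate the kind of pipeline and quality-control work the responsibilities above describe, here is a minimal sketch in Python of a conversion step that validates legacy records before loading them into a staging table. All table names, column names, sample data, and checks are hypothetical and stand in for the actual legacy-system and PATH NC schemas:

```python
import sqlite3

# Hypothetical legacy records; in practice these would be extracted
# from a legacy child-welfare system or county data source.
legacy_rows = [
    {"case_id": "C-001", "county": "Durham", "opened": "2021-03-14"},
    {"case_id": "C-002", "county": "Wake", "opened": "2022-11-02"},
    {"case_id": "", "county": "Durham", "opened": "2023-01-09"},  # missing key
]

def validate(row):
    """Basic quality-control check: reject rows missing a case identifier."""
    return bool(row["case_id"])

def run_pipeline(rows, conn):
    """Load valid rows into a staging table; return counts for reporting."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS staging_cases "
        "(case_id TEXT PRIMARY KEY, county TEXT, opened TEXT)"
    )
    loaded, rejected = 0, 0
    for row in rows:
        if validate(row):
            conn.execute(
                "INSERT OR REPLACE INTO staging_cases VALUES (?, ?, ?)",
                (row["case_id"], row["county"], row["opened"]),
            )
            loaded += 1
        else:
            rejected += 1  # a real pipeline would log rejects to an error table
    conn.commit()
    return loaded, rejected

conn = sqlite3.connect(":memory:")
loaded, rejected = run_pipeline(legacy_rows, conn)
```

In a production setting the same extract–validate–load pattern would run against the actual source systems, with rejected rows routed to an error table so data-quality discrepancies can be analyzed and resolved.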

