Data Scientist/Architect with data engineering/governance/integration/management, DW/Data Hub/Lake, Big Data, data pipelines/structures, ETL optimization, HIVE/Impala/Presto/Hadoop and MQ experience

Job ID: MI-639091 (918691110)

Location: Lansing, MI (MDOT)
Duration: 12 months
Interview: Webcam Interview Only

Skills:
Bachelor’s degree in computer science, statistics, applied mathematics, data management, information systems, information science, or a related field. Required 1 Year
Advanced degree (MS) in computer science, statistics, applied mathematics, information science (MIS), data management, or information systems. Desired 1 Year
Possess a combination of data integration and engineering expertise, IT skills, data governance skills, and analytics skills. Required 4 Years
Experience in data architecture and integration design, data management disciplines, data warehousing, and Big Data-related initiatives. Required 6 Years
Experience leading cross-functional teams and collaborating with business and technical stakeholders to initiate, plan, and execute. Required 3 Years
Strong experience documenting complex requirements, considering ambiguous information, and engaging cross-functionally to propose elegant designs. Required 6 Years
Strong experience with various Data Management architectures like Data Warehouse, Data Lake, Data Hub, Operational Data Stores. Required 4 Years
Strong ability to design, build and manage data pipelines for data structures. Required 3 Years
Strong experience working with and optimizing existing ETL processes, data integration, and data preparation flows. Required 4 Years
Strong experience in working with large, heterogeneous datasets. Required 6 Years
Experience working with data governance/data quality and data security teams and specifically information stewards. Required 3 Years
Demonstrated success in working with large, heterogeneous datasets to extract business value using popular data preparation tools. Required 4 Years
Strong experience with popular database programming languages, including SQL, PL/SQL, and others, for relational databases. Required 6 Years
Strong experience working with SQL-on-Hadoop query languages and tools, including HIVE, Impala, Presto, and others; see the first sketch following this list. Required 3 Years
Knowledge of and experience with multiple data integration platforms. Required 4 Years
Strong experience with advanced analytics tools for object-oriented/object function scripting. Required 6 Years
Strong experience working with both open-source and commercial message queuing technologies; see the second sketch following this list. Required 6 Years
Knowledge of various architectures, patterns, and protocols. Required 6 Years
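
As referenced above, a minimal SQL-on-Hadoop sketch in Python using PyHive, assuming a reachable HiveServer2 endpoint; the host, database, table, and column names are hypothetical placeholders, not actual MDOT systems:

    from pyhive import hive

    # Connect to HiveServer2 and push the aggregation into the cluster
    # rather than pulling raw rows back to the client.
    conn = hive.Connection(host="hive-gateway.example.gov", port=10000,
                           database="mdot_dw")
    cursor = conn.cursor()
    cursor.execute("""
        SELECT route_id, COUNT(*) AS reading_cnt
        FROM traffic_readings
        GROUP BY route_id
    """)
    for route_id, reading_cnt in cursor.fetchall():
        print(route_id, reading_cnt)
    conn.close()

A query this simple runs largely unchanged against Impala or Presto by swapping the client connection, which is the point of standard SQL-on-Hadoop interfaces.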
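
And a minimal message-queuing sketch, also referenced above, using pika against RabbitMQ (one open-source option); the queue name and event payload are hypothetical:

    import json
    import pika

    # Open a connection and declare a durable queue so messages
    # survive a broker restart.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="mdot.ingest.events", durable=True)

    # Publish one integration event; a downstream consumer would read it
    # with basic_consume on the same queue.
    event = {"source": "crash_records", "action": "batch_ready"}
    channel.basic_publish(
        exchange="",
        routing_key="mdot.ingest.events",
        body=json.dumps(event).encode("utf-8"),
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
    )
    connection.close()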

This position covers work across all business units within MDOT and works closely with business stakeholders and IT technical staff. The candidate must be an expert with a deep understanding of integration technologies and architecture best practices, able to produce documented designs and to model, execute, operationalize, and manage data integration pipelines that ingest, transform, and provision data from various sources into an organized, unified view.
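
To make the ingest-transform-provision flow concrete, a minimal sketch in Python using pandas, assuming a CSV extract and a relational target; the file, column, and table names are hypothetical:

    import sqlite3
    import pandas as pd

    # Ingest: read a raw extract from a source system.
    raw = pd.read_csv("permits_extract.csv")

    # Transform: standardize types and drop records that fail basic rules.
    raw["issued_date"] = pd.to_datetime(raw["issued_date"], errors="coerce")
    clean = (raw.dropna(subset=["permit_id", "issued_date"])
                .drop_duplicates("permit_id"))

    # Provision: load the cleaned, unified view into the target store.
    with sqlite3.connect("mdot_hub.db") as target:
        clean.to_sql("permits", target, if_exists="replace", index=False)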

This position will be responsible for leading the data architecture improvements needed to stabilize the existing agency-wide data architecture, improving data quality, metadata management, reference data management, and master data management, which will lay the foundation for a stable Data Warehouse for MDOT. In addition, this position will lead the integration architecture of the MDOT Data Warehouse to support MDOT’s data analytical needs, including Machine Learning and Business Intelligence.
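
For the data-quality piece, a minimal profiling sketch in Python; the sample records and columns are hypothetical:

    import pandas as pd

    def profile(df: pd.DataFrame) -> pd.DataFrame:
        """Report per-column null rates and distinct counts as a basic quality check."""
        return pd.DataFrame({
            "null_rate": df.isna().mean(),
            "distinct_values": df.nunique(),
        })

    records = pd.DataFrame({
        "route_id": [101, 102, 102, None],
        "miles": [3.1, 4.0, 4.0, 5.2],
    })
    print(profile(records))
    print("duplicate rows:", records.duplicated().sum())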

This position will play a pivotal role in assessing, designing, and operationalizing the most urgent data integration pipelines for transactional and analytics initiatives. The role will lead all key data engineering and integration efforts across MDOT, under the guidance of MDOT’s EIM Director and in collaboration with the Chief Information Steward, Operations and Security Manager, IT/Data architects, ETL developers, Information Stewards, and other business teams as needed.

This role will lead a small, new team of technical architects and integration experts, leveraging subject-matter experts as required, to build core competencies within the organization: building, managing, and operationalizing reusable data pipelines for key application and analytics initiatives, guaranteeing compliance with data governance and data security requirements, and enabling faster, more reliable data access across the organization.

The data engineering and integration lead will be measured on their ability to plan and execute the integration of data across transactional systems and into data analytics and warehousing systems, streamlining project execution and facilitating the analysis of data in new ways to deliver business insights and efficiencies.
This role will require both creative and collaborative work with IT and business areas. It will involve promoting effective data management practices, sound design and reusability, and a better understanding of how these are essential for data analytics.

Under the guidance of the EIM Director and their designee, they will be tasked with working with key business stakeholders, IT experts, and subject-matter experts to plan and deliver optimal solutions across OLTP and OLAP systems. Additionally, they will be expected to collaborate with data analysts and data consumers, refine models and procedures for data quality, security, and governance, optimize pipeline performance across various environments, and move pipelines into production, potentially yielding large productivity gains.

MI_Resume_Template-639091.docx
