Big Data Engineer with Hadoop, MapR, HIVE, Apache Spark, data service design, Data Lake, ETL and Data Warehouse experience

Job ID: MP-BigData (90090114)WF

Location: Chandler, AZ; Minneapolis, MN; Des Moines, IA; Charlotte, NC (remote for now)
Duration: 12 months
Visa: GC/USC preferred; no OPT/CPT/H1
• Candidates should submit complete resumes with details in the format below:
o First Name & Last Name (Proper Legal Name)
o Phone number (no Google Voice/Magic Jack/VoIP numbers). Please verify where the voicemail lands.
o Email address (a proper address, on domains typically used by companies)
o Education Details in format:
– Bachelor's Degree | University Name | College Name | Start MM/YY | End MM/YY
– Master's Degree | University Name | College Name | Start MM/YY | End MM/YY
• LinkedIn profile should be properly verified.
• Work closely with various teams to understand the data requirements
• Analyze incoming data requests for the team and determine appropriate target solutions
• Develop data service designs
• Lead the data service build, test, and deployment
• Partner with enterprise data teams such as Data Management & Insights and Enterprise Data Environment (Data Lake) to identify the best place to source the data
• Work with business analysts, development teams, and project managers on requirements and business rules
• Collaborate with source system and approved provisioning point (APP) teams, architects, data analysts, and modelers to build scalable and performant data solutions
• Work effectively in a hybrid environment where legacy ETL and Data Warehouse applications coexist with new big-data applications
• Work with Infrastructure Engineers and System Administrators as appropriate in designing the big-data infrastructure
• Work with DBAs in Enterprise Database Management group to troubleshoot problems and optimize performance
• Support ongoing data management efforts for Development, QA and Production environments
• Utilize a thorough understanding of available technology, tools, and existing designs
• Act as an expert technical resource to programming staff in the program development, testing, and implementation process
• Skillset needed: Hadoop, MapR, HIVE, Apache Spark, plus application development and implementation experience
