BIG DATA ARCHITECT (Spark Streaming) with Hadoop, Scrum, ETL, security, web services and Spark code optimization experience

ID: 07715862 (98591231)


Location: Milwaukee, WI
Duration: 12 months

Job Description
Define big data solutions that deliver value to the customer; understand customer use cases and workflows and translate them into engineering deliverables.
Architect and design Hadoop solutions.
Actively participate in Scrum calls; work closely with the product owner and Scrum master on sprint planning, estimates, and story points.
Break user stories into actionable technical stories, identify dependencies, and plan execution across sprints.
Design batch and real-time load jobs from a broad variety of data sources into Hadoop, and design ETL jobs that read data from Hadoop and deliver it to a variety of consumers/destinations.
Perform analysis of vast data stores and uncover insights.
Maintain security and data privacy; create scalable, high-performance web services for data tracking.
Propose best practices and standards, and implement them in the deliverables.
Analyze long-running queries and jobs, and tune their performance using query optimization techniques and Spark code optimization.
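As a rough illustration of the real-time load jobs described above, the sketch below uses Spark Structured Streaming in Scala to land Kafka events in HDFS as Parquet. The broker address, topic name, and HDFS paths are placeholders invented for the example, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession

object KafkaToHdfs {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hdfs-ingest")
      .getOrCreate()

    // Read a stream of raw events from Kafka (broker and topic are placeholders).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "customer-events")
      .load()

    // Kafka delivers key/value as binary; cast the payload to a string column.
    val payload = events.selectExpr(
      "CAST(value AS STRING) AS payload",
      "timestamp")

    // Land micro-batches in HDFS as Parquet; the checkpoint directory
    // lets the sink recover its progress across restarts.
    val query = payload.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/raw/customer_events")
      .option("checkpointLocation", "hdfs:///checkpoints/customer_events")
      .start()

    query.awaitTermination()
  }
}
```

Running a job like this requires a Spark cluster with the spark-sql-kafka connector on the classpath; the same readStream/writeStream pattern extends to other sources and sinks mentioned in this role.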

Skills
APPS-NICHE-BIGDATA-HADOOP

Skills & Qualifications:

12+ years of total IT experience, including 3+ years of Big Data experience (Hadoop, Spark Streaming, Kafka, Spark SQL, HBase, Hive, and Sqoop). Hands-on experience with Big Data tools and technologies is mandatory.
Proven experience of driving technology and architectural execution for enterprise grade solutions based on Big Data platforms.
Designed at least one Hadoop data lake end to end using the above Big Data technologies.
Experience designing Hive and HBase data models for storage and high-performance queries.
Knowledge of standard methodologies, concepts, best practices, and procedures within Big Data environment.
Proficient in Linux/Unix scripting.
Bachelor’s degree in Engineering, Computer Science, or Information Technology. Master’s degree in Finance, Computer Science, or Information Technology is a plus.
Experience in Agile methodology is a must.
Experience in Storm and NoSQL Databases (e.g. Cassandra) is desirable.
Knowledge of Oracle or any other RDBMS is desirable.
Familiarity with one of the leading Hadoop distributions like Hortonworks, Cloudera, or MapR is desirable.
Exposure to infrastructure-as-a-service providers such as Google Compute Engine, Microsoft Azure, or Amazon AWS is a plus.
Self-starter and able to independently implement the solution.
Good communication skills and problem-solving techniques.
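To illustrate the Hive data modeling and query tuning items above, here is a hedged Scala/Spark SQL sketch: a date-partitioned ORC table, so that queries filtering on the partition column prune partitions instead of scanning the full data set. The database, table, and column names are invented for the example.

```scala
import org.apache.spark.sql.SparkSession

object HiveModelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-model-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Partitioning by event_date lets the optimizer prune partitions,
    // so daily queries touch one directory instead of the whole table.
    spark.sql("""
      CREATE TABLE IF NOT EXISTS analytics.customer_events (
        customer_id STRING,
        event_type  STRING,
        amount      DOUBLE
      )
      PARTITIONED BY (event_date DATE)
      STORED AS ORC
    """)

    // A query that filters on the partition column reads only the
    // matching partition -- a common first step when tuning a
    // long-running Hive/Spark SQL job.
    val daily = spark.sql("""
      SELECT event_type, COUNT(*) AS events
      FROM analytics.customer_events
      WHERE event_date = DATE '2020-01-01'
      GROUP BY event_type
    """)
    daily.explain()  // inspect the physical plan to confirm partition pruning
  }
}
```

Choosing the partition column to match the most common query predicate (here, a date) is the design decision that makes this model fast for daily reporting while keeping writes simple.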

Full Name:
Contact:
Email:
Rate:
Skype Id:
Date of Birth:
Current location:
Open to relocate:
Currently in project:
Availability to start:
Visa Status with Validity:
Last 5 Digits of SSN:
Total Years of IT Experience:
Experience working in US:
Available interview time slots:
Passport Number:
Bachelor’s Degree:
Education (Passing year of Bachelors/Masters/University):
