Big Data Developer with Java, SEDA, Lambda, Kappa, Spark, HDFS, MapReduce, Hive, Sqoop, Impala, Drill, Scala, TDD, JUnit, Mockito, JSON, JAXB, Maven, Jenkins, SVN, JIRA, Control-M, Spring, Hibernate, REST, Jersey, Mule, Ember, Angular, and shell scripting experience
Location: Westerville, OH
Education: BS in Computer Science or a similar technical bachelor's degree
Excellent analytical, communication, organizational and problem-solving skills coupled with a strong work ethic
Ability to translate business requirements into functional requirements documentation.
9+ years’ experience with full development lifecycle from inception through implementation leveraging Java and various Java frameworks
4+ years architecting and implementing applications leveraging common patterns such as SEDA, Lambda, Kappa and similar data processing architectures.
4+ years implementing Big Data technologies including Spark, HDFS, MapReduce, Hive, Sqoop, and similar technologies.
4+ years leveraging big data consumption tools such as Impala, Hive, Drill, or similar query engines.
4+ years’ experience with Scala and similar Big Data-oriented languages
Experience with development, deployment, and support of large-scale distributed applications in a mission-critical production environment.
Test-infected attitude (strong desire to perform thorough and exhaustive unit, integration and system testing).
Preparing test plans and performing system testing
Experience with TDD utilizing test data, JUnit, and Mockito
Experience with JSON, XML, XSD and JAXB
Experience with Change Management and Incident Management processes
Strong experience using Eclipse, Maven, Jenkins, SVN, JIRA, Control-M, or equivalent tools
Ability to work independently as well as in a team environment
Able to take on the challenges of new technology and deliver production-worthy output
Experience with common frameworks such as Spring, Hibernate (or similar ORM tools), and REST frameworks such as Jersey, including JSON handling
A self-starter, able to reach out to various groups to drive requirements to completion
3+ years working with Open Source Java frameworks (Spring, Hibernate, Mule ESB, Jersey or similar).
Strong working knowledge of Oracle RDBMS.
Experience with Linux shell scripts is nice to have
Experience with Data Management is an added advantage
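The SEDA pattern called out in the requirements can be sketched in plain Java: each stage owns its own event queue and worker thread, so stages are decoupled and can be sized or throttled independently. The class, stage names, and the normalize/tag logic below are purely illustrative (not from any specific framework); this is a minimal, dependency-free sketch of the idea, not a production design:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Minimal SEDA-style pipeline: two stages, each with its own event
 *  queue and worker thread. A poison-pill message shuts the stages
 *  down in order once the input is drained. */
public class SedaPipeline {
    private static final String POISON = "__STOP__";

    public static List<String> process(List<String> raw) throws InterruptedException {
        BlockingQueue<String> normalizeQ = new LinkedBlockingQueue<>();
        BlockingQueue<String> tagQ = new LinkedBlockingQueue<>();
        List<String> out = Collections.synchronizedList(new ArrayList<>());

        // Stage 1: normalize raw records, then forward to the next stage's queue.
        Thread normalizer = new Thread(() -> {
            try {
                for (String msg; !(msg = normalizeQ.take()).equals(POISON); )
                    tagQ.put(msg.trim().toLowerCase());
                tagQ.put(POISON); // propagate shutdown downstream
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Stage 2: tag normalized records and collect the output.
        Thread tagger = new Thread(() -> {
            try {
                for (String msg; !(msg = tagQ.take()).equals(POISON); )
                    out.add("event:" + msg);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        normalizer.start();
        tagger.start();
        for (String r : raw) normalizeQ.put(r); // producer feeds stage 1
        normalizeQ.put(POISON);
        normalizer.join();
        tagger.join();
        return out;
    }
}
```

Because each stage has a single worker here, output order is deterministic; a real SEDA deployment would tune the thread count per stage based on queue depth.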
The Big Data Lead Developer is responsible for the design and development of the core platform that enables the delivery and construction processes for the Data Management, Data Discovery and Analytics group, leveraging emerging big data technologies.
The individual is a subject matter expert technologist with strong Java experience and very knowledgeable with utilization and integration of Open Source software.
The individual has deep understanding and application of enterprise software design for implementation of data services and middleware.
This is a "been there, done that" technologist who thrives on driving efforts to completion while utilizing best-of-breed technologies and methodologies.
The individual should also function as a Solution Architect and must be both a visionary and execution-driven.
The individual must have successful experience in Big Data implementations for large data integration initiatives.
Day-to-day activities will vary widely based on the state of the organization's priorities and needs at that point in time.
As such, this individual must be comfortable with flexibility in their role.
They must be able to operate in a relaxed, yet confident manner, without explicit hierarchy and structure governing work.
An affinity towards, and appreciation of, an influence-based and entrepreneurial culture is critical for success.
Key Responsibilities include:
Component Software Design & Development.
Ensuring excellent practices are utilized in delivering Big Data Management and Integration Solutions.
Ensuring design decisions can be acted on by the development team.
Participating in agile development projects.
Acting as a role model for all best practices, ensuring consistency across entire team.
Mentoring technical development team on optimal utilization of Big Data solutions and Apache Open Source Software. Helping build a great team.
Leveraging new and emerging practices for Enterprise Data Architecture.
Engaging in enterprise-level systems component design and implementation.
Systems integration, including design and development of APIs, Adapters, and Connectors.
Integration with Hadoop/HDFS, Real-Time Systems, Data Warehouses, and Analytics solutions.
Writing and maintaining reference architectures and systems design best-practice guidelines.
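The map/reduce model behind the Hadoop integration work above can be illustrated conceptually in a single JVM with plain java.util.stream. This is only a sketch of the split/map/shuffle/reduce phases applied to a word count, not the actual Hadoop MapReduce API (which requires its own runtime and job configuration); the class name is hypothetical:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

/** Conceptual map/reduce word count: tokenize (split), normalize (map),
 *  group by key (shuffle), and count per key (reduce). */
public class WordCount {
    public static Map<String, Long> count(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+")) // split into tokens
                     .filter(w -> !w.isEmpty())                 // map-side filter
                     .collect(Collectors.groupingBy(            // shuffle: group by word
                         Function.identity(),
                         Collectors.counting()));               // reduce: sum per word
    }
}
```

In Hadoop, the same phases are distributed: mappers emit (word, 1) pairs, the framework shuffles them by key, and reducers sum each key's values.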