
Java - Hadoop Developer - Core Technology Infrastructure

Charlotte, North Carolina

Job Description:

This role is responsible for leading efforts to develop and deliver complex data solutions that accomplish technology and business goals. Key responsibilities include code design and delivery tasks associated with the integration, cleaning, transformation, and control of data in operational and analytics data systems. The engineer works with stakeholders, Product Owners, and Software Engineers to implement data requirements, analyze performance, conduct research, and troubleshoot issues. These individuals are proficient in data engineering practices and have extensive experience applying design and architectural patterns. Candidates must have a passion for producing high-quality Hadoop solutions, be ready to jump in and solve complex problems, and interact with users to understand requirements and deliver solutions.

Job Summary:

Responsible for developing and delivering complex software requirements to accomplish business goals. Ensures that software is developed to meet functional, non-functional, and compliance requirements. Codes solutions and unit tests, and ensures the solution can be integrated successfully into the overall application/system with clear, robust, and well-tested interfaces. Familiar with the bank's development and testing practices. Contributes to story refinement and defining requirements. Participates in, and guides the team through, estimating the work necessary to realize a story/requirement across the delivery lifecycle. Performs proofs of concept as necessary to mitigate risk or implement new ideas. Codes solutions and unit tests to deliver a requirement/story per the defined acceptance criteria and compliance requirements. Assists the team with resolving technical complexities involved in realizing story work. Contributes to existing test suites (integration, regression, performance); analyzes test reports, identifies test issues/errors, and triages the underlying cause. Documents and communicates the information required for deployment, maintenance, support, and business functionality. Participates in, contributes to, and can coach team members through delivery/release (CI/CD) events, e.g., branching timelines, pull requests, issue triage, merge/conflict resolution, and release notes. Individual contributor role.
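Because the summary emphasizes coding unit tests against defined acceptance criteria, here is a minimal sketch of what such a test can look like for a Spark transformation in Java. It is illustrative only: the BalanceFilterTest name, the Balance bean, and the keep-positive-balances rule are hypothetical stand-ins for whatever a real story defines, and the test runs Spark in local mode so no cluster is required.

    import static org.apache.spark.sql.functions.col;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.io.Serializable;
    import java.util.Arrays;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.junit.jupiter.api.Test;

    class BalanceFilterTest {
        @Test
        void dropsNonPositiveBalances() {
            // Local-mode session keeps the test self-contained.
            SparkSession spark = SparkSession.builder()
                    .appName("BalanceFilterTest")
                    .master("local[2]")
                    .getOrCreate();

            // Build a tiny in-memory Dataset from a Java bean.
            Dataset<Row> input = spark.createDataFrame(
                    Arrays.asList(new Balance("a-1", 100.0), new Balance("a-2", -5.0)),
                    Balance.class);

            // The transformation under test: keep only positive balances.
            long kept = input.filter(col("balance").gt(0)).count();
            assertEquals(1L, kept);

            spark.stop();
        }

        // Simple bean so Spark can infer the schema via reflection.
        public static class Balance implements Serializable {
            private String accountId;
            private double balance;
            public Balance() { }
            public Balance(String accountId, double balance) {
                this.accountId = accountId;
                this.balance = balance;
            }
            public String getAccountId() { return accountId; }
            public void setAccountId(String accountId) { this.accountId = accountId; }
            public double getBalance() { return balance; }
            public void setBalance(double balance) { this.balance = balance; }
        }
    }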

Required Skills:

  • 5+ years of hands-on experience designing, building, and supporting Hadoop applications using Spark, Sqoop, and Hive (see the sketch after this list).
  • Strong knowledge of working with large data sets and high-capacity big data processing platforms, SQL, and data warehouse projects.
  • Strong experience with Unix and shell scripting.
  • A high degree of initiative and self-motivation, and a demonstrated ability to drive results.
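To ground the Spark/Sqoop/Hive requirement, below is a minimal sketch of the kind of batch job the first bullet describes: read a Hive table, clean and filter it, and write a partitioned result back for analytics. It assumes a Hive metastore is configured for the cluster; the table names (warehouse.account_balances, analytics.account_balances_clean) and the cleaning rules are hypothetical placeholders, not details from this posting.

    import static org.apache.spark.sql.functions.col;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class AccountBalanceJob {
        public static void main(String[] args) {
            // enableHiveSupport() lets Spark resolve tables in the Hive metastore.
            SparkSession spark = SparkSession.builder()
                    .appName("AccountBalanceJob")
                    .enableHiveSupport()
                    .getOrCreate();

            // Read the (hypothetical) source table registered in Hive.
            Dataset<Row> balances = spark.table("warehouse.account_balances");

            // Clean: drop rows missing the key, then keep positive balances only.
            Dataset<Row> cleaned = balances
                    .na().drop(new String[] {"account_id"})
                    .filter(col("balance").gt(0));

            // Write back as a partitioned Hive table for downstream consumers.
            cleaned.write()
                    .mode("overwrite")
                    .partitionBy("as_of_date")
                    .saveAsTable("analytics.account_balances_clean");

            spark.stop();
        }
    }

A job like this would typically be packaged as a JAR and launched with spark-submit on YARN, with ingestion from relational sources handled upstream by Sqoop.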

Desired Skills:

  • Extensive hands-on experience designing, developing, and maintaining software solutions on big data and streaming platforms using Spark.
  • Knowledge of processing and deployment technologies such as YARN, Linux, and containers.
  • Experience programming and building full-stack solutions utilizing distributed computing.
  • Hands-on experience designing, developing, and maintaining near-real-time (NRT) software frameworks using Spark, Hadoop MapReduce, Kafka, and Java/Scala/Python (see the streaming sketch after this list).
  • Experience developing and deploying integrations on a Hadoop cluster using the Spark framework.
  • Bachelor’s or master’s degree in Computer Science or a related field.
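As a sketch of the near-real-time item above, the example below wires Spark Structured Streaming to Kafka and lands micro-batches on HDFS as Parquet. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic name, and HDFS paths are placeholders, not details from this posting.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;

    public class TransactionStreamJob {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("TransactionStreamJob")
                    .getOrCreate();

            // Subscribe to a hypothetical Kafka topic of transaction events.
            Dataset<Row> events = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "broker1:9092")
                    .option("subscribe", "transactions")
                    .load();

            // Kafka delivers binary key/value columns; cast the payload to text.
            Dataset<Row> payloads = events.selectExpr("CAST(value AS STRING) AS payload");

            // Land micro-batches as Parquet; the checkpoint lets the query
            // recover its offsets after a restart.
            StreamingQuery query = payloads.writeStream()
                    .format("parquet")
                    .option("path", "hdfs:///data/transactions/raw")
                    .option("checkpointLocation", "hdfs:///checkpoints/transactions")
                    .start();

            query.awaitTermination();
        }
    }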

Job Band:

H5

Shift: 

1st shift (United States of America)

Hours Per Week:

40

Weekly Schedule:

Referral Bonus Amount:

0


Full time

JR-21061779

Manages People: No

Travel: Yes, 5% of the time

Talent Acquisition Contact:

Kathleen Jones-Griffith
