
Software Engineer III - Hadoop/Oracle PL/SQL Developer

Addison, Texas

Job Description:

Position Summary

Responsible for designing and developing complex requirements to accomplish business goals. Ensures that software is developed to meet functional, non-functional, and compliance requirements, and that solutions are well designed, with maintainability, ease of integration, and testing built in from the outset. Possesses strong proficiency in development and testing practices common to the industry, and has extensive experience using design and architectural patterns. At this level, specializations begin to form in Architecture, Test Engineering, or DevOps.

Contributes to story refinement and defining requirements. Participates in, and guides the team through, estimating the work necessary to realize a story/requirement through the delivery lifecycle. Performs spikes/proofs of concept as necessary to mitigate risk or implement new ideas. Codes solutions and unit tests to deliver a requirement/story per the defined acceptance criteria and compliance requirements. Utilizes multiple architectural components (across data, application, and business) in the design and development of client requirements. Assists the team with resolving technical complexities involved in realizing story work.

Designs/develops/modifies architecture components, application interfaces, and solution enablers while ensuring principal architecture integrity is maintained. Designs/develops/maintains automated test suites (integration, regression, performance). Sets up and develops a continuous integration/continuous delivery (CI/CD) pipeline. Automates manual release activities. Mentors other Software Engineers and coaches the team on CI/CD practices and the automation tool stack.

Required Skills

  • Experience with HDFS, MapReduce, Hive, Impala, and Linux/Unix technologies.

  • Hands-on experience with at least one RDBMS (Oracle, SQL Server, DB2, etc.).

  • Troubleshoot and identify technical problems in applications or processes, and provide solutions.

  • Performance tuning using execution plans and other tools (a brief illustrative sketch follows this list).

  • Create, execute, and document the tests necessary to ensure that an application meets performance requirements.

  • Sound understanding of and experience with the Hadoop ecosystem (Cloudera).

  • Able to understand and explore the constantly evolving tools within the Hadoop ecosystem and apply them appropriately to the problems at hand.

  • Experience with the Unix shell and the ability to analyze existing shell scripts.

  • Exposure to JIL (AutoSys Job Information Language) scripts.

  • Provide technical expertise to assist in the design, testing, and implementation of software and infrastructure that support data infrastructure and governance activities.
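
As a minimal, hypothetical sketch of the performance-tuning bullet above (not part of the original posting): inspecting an Oracle execution plan from a Unix shell script. The DB_CONN connection string, the explain_orders.sql file, and the ORDERS table are illustrative placeholders.

    #!/bin/sh
    # Minimal, hypothetical sketch: inspect an Oracle execution plan from the
    # shell. DB_CONN (user/password@tnsalias) and explain_orders.sql are
    # illustrative placeholders, not details from this posting.
    #
    # explain_orders.sql contains, e.g.:
    #   EXPLAIN PLAN FOR
    #     SELECT customer_id, SUM(amount)
    #     FROM   orders
    #     WHERE  order_date >= DATE '2021-01-01'
    #     GROUP BY customer_id;
    #   SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    #   EXIT
    sqlplus -s "$DB_CONN" @explain_orders.sql

Here, EXPLAIN PLAN captures the optimizer's plan without executing the query, and DBMS_XPLAN.DISPLAY renders it (access paths, join order, estimated rows and cost) so hot spots can be tuned.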

Desired Skills

  • Experience working with a Big Data implementation in a production environment.

  • Experience with Kafka/Flume/Spark is an added advantage.

  • Exposure to ETL tools, e.g., DataStage and Sqoop (a brief illustrative sketch follows this list).

  • Able to analyze existing shell script, Python, or Perl code to debug issues or enhance the code.

  • Assist with the technical design/architecture and implementation of the big data cluster in various environments.

  • Develop utilities/libraries that can be reused in multiple big data development efforts.

  • Work with line of business (LOB) personnel and the internal Data Services team to develop system specifications in compliance with corporate standards for architecture adherence and performance guidelines.
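
In the same spirit, a minimal, hypothetical sketch of the ETL bullet above: a Sqoop import that pulls an Oracle table into HDFS. The host, service name, password file, table, and target directory are illustrative placeholders.

    #!/bin/sh
    # Minimal, hypothetical sketch: import an Oracle table into HDFS with
    # Sqoop. Host, service name, password file, and paths are placeholders.
    sqoop import \
      --connect jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB \
      --username app_user \
      --password-file /user/app_user/.sqoop.pw \
      --table ORDERS \
      --target-dir /data/raw/orders \
      --num-mappers 4

Using --password-file (an HDFS path readable only by the job owner) keeps credentials out of the command line and shell history, and --num-mappers controls how many parallel map tasks split the import.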

Job Band:

H5

Shift: 

1st shift (United States of America)

Hours Per Week:

40

Referral Bonus Amount:

0

Full time

JR-21045559

Manages People: No

Travel: Yes, 5% of the time

Talent Acquisition Contact:

Jessica Kreiselmaier
