Site Reliability Engineer - Hadoop Admin, GBAM & ERF TI & App Production Services, Core Technology Infrastructure

Jersey City, New Jersey

Job Description:

This is a DevOps/SRE role supporting next-generation platforms built around Big Data technologies (Hadoop, Spark, Kafka, Impala, HBase, Docker containers, Ansible, and more). It requires experience in cluster management of vendor-based Hadoop and Data Science (AI/ML) products such as Cloudera, DataRobot, C3, Panopticon, Talend, Trifacta, Selerity, ELK, and KPMG Ignite. The DevOps analyst is involved in the full life cycle of an application as part of an agile development process, and must be able to interact, develop, engineer, and communicate collaboratively at the highest technical levels with clients, development teams, vendors, and other partners. The following section serves as a general guideline for the dimensions of project complexity, responsibility, and education/experience within this role.

- Works on complex, major, or highly visible tasks in support of multiple projects requiring multiple areas of expertise
- Provides expertise (or the ability to self-learn) to build frameworks (APIs, Java, Scala, scripting) for application configuration (e.g. Cloudera admin tasks such as Flume setup and Oozie scheduling), deployment orchestration, and state-of-the-world monitoring and metrics solutions (experience with ELK, Prometheus, Splunk, Grafana, and other metrics/monitoring frameworks)
- Provides subject matter expertise in managing Hadoop and Data Science platform operations, with a focus on Cloudera Hadoop, Jupyter Notebook, OpenShift, and Docker container cluster management and administration
- Integrates solutions with other applications and platforms outside the framework
- Manages platform operations across all environments, including upgrades, bug fixes, deployments, metrics/monitoring for resolution and forecasting, disaster recovery, and incident/problem/capacity management
- Serves as a liaison between client partners and vendors, in coordination with project managers, to provide technical solutions that address user needs
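As a small illustration of the monitoring and metrics work described above, a cluster health check might reduce per-service states to an alert list. This is only a sketch: the service names are hypothetical, and the health states are loosely modeled on Cloudera Manager's GOOD/CONCERNING/BAD summary values.

```python
# Minimal sketch: classify cluster service health into an alert list.
# Health states mirror Cloudera Manager's summary values; the service
# names and the severity threshold are illustrative assumptions.

SEVERITY = {"GOOD": 0, "CONCERNING": 1, "BAD": 2}

def services_to_alert(health, min_severity=1):
    """Return (service, state) pairs at or above min_severity, worst first."""
    flagged = [
        (name, state)
        for name, state in health.items()
        if SEVERITY.get(state, 2) >= min_severity  # unknown states treated as BAD
    ]
    return sorted(flagged, key=lambda pair: -SEVERITY.get(pair[1], 2))

if __name__ == "__main__":
    snapshot = {"HDFS": "GOOD", "KAFKA": "CONCERNING", "HBASE": "BAD"}
    for name, state in services_to_alert(snapshot):
        print(f"{name}: {state}")
    # prints:
    # HBASE: BAD
    # KAFKA: CONCERNING
```

In practice the snapshot would come from a monitoring API or agent rather than a literal dict, and the alert list would feed a pager or dashboard.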

Required Skills:

- Hadoop, Kafka, Spark, Impala, Hive, HBase, etc.
- Knowledge of the Cloudera Big Data stack, Jupyter Notebook, Docker containers, OpenShift, and Kubernetes
- Strong technical knowledge: Unix/Linux; databases (Sybase/SQL/Oracle); Java, Python, Perl, shell scripting; infrastructure
- Experience with monitoring and alerting, and with job scheduling systems
- Comfort with frequent, incremental code testing and deployment
- Strong grasp of automation/DevOps tools: Ansible, Jenkins, SVN, Bitbucket

Desired Skills:

- Experience working with Big Data technologies
- Cloudera Admin/Dev certification
- Certification in cloud, Docker container, or OpenShift technologies

Core Technology Infrastructure Organization:

  • Is committed to building a workplace where every employee is welcomed and given the support and resources to perform their jobs successfully.
  • Strives to be a great place for people to work and to create an environment where all employees have the opportunity to achieve their goals.
  • Believes diversity makes us stronger so we can reflect, connect and meet the diverse needs of our clients and employees around the world.
  • Provides continuous training and development opportunities to help employees achieve their career goals, whatever their background or experience.
  • Is committed to advancing our tools, technology, and ways of working to better serve our clients and their evolving business needs.
  • Believes in responsible growth and is dedicated to supporting our communities by connecting them to the lending, investing and giving they need to remain vibrant and vital.

LOB Job Profile:

Responsible for developing, enhancing, modifying, and/or maintaining applications in the Global Markets environment. Software developers design, code, test, debug, and document programs, and support activities for the corporate systems architecture. Employees work closely with business partners to define requirements for system applications, and are expected to have in-depth capital markets product knowledge and to manage a high level of risk. Employees typically have in-depth knowledge of development tools and languages, and are clearly recognized as content experts by peers. This is an individual contributor role that typically requires 5-7 years of applicable experience. This job code is only to be used for associates supporting Global Markets.

Job Band:

H5

Shift: 

1st shift (United States of America)

Hours Per Week:

40

Weekly Schedule:

Referral Bonus Amount:

0


Full time

JR-21071117

Band: H5

Manages People: No

Travel: No

Manager:

Talent Acquisition Contact:

Kari Elsts

Referral Bonus:

0

Street Address

Primary Location:
525 Washington Blvd, Jersey City, NJ 07310