Big Data Hadoop Platform Engineer

New York, New York

Job Description:

Participates in the design, development, and implementation of architectural deliverables, including components of the assessment and optimization of system design and the review of user requirements. Contributes to determining the technical and operational feasibility of solutions. Develops prototypes of the system design and works with database, operations, technical support, and other IT areas as appropriate throughout the development and implementation processes. May lead multiple projects with competing deadlines. Serves as a fully seasoned, proficient technical resource, providing technical knowledge and capabilities as a team member and individual contributor. Will not have direct reports, but will influence and direct the activities of a team on special initiatives or operations, and will mentor junior Band 5 Architect I's. Provides input on staffing, budget, and personnel. Typically has 7 or more years of architecture experience.

The Platform Engineer position will be part of the Insight Core Hadoop platform team within Global Banking and Markets.


• Responsible for developing, enhancing, modifying, and/or maintaining a multi-tenant big data platform

• Work closely with business stakeholders, the management team, development teams, infrastructure management, and support partners

• Use in-depth knowledge of development tools and languages in the design and development of applications that meet complex business requirements

• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

• Design and implement scalable data platforms for our customer-facing services

• Deploy and scale Hadoop infrastructure

• Perform Hadoop/HDFS maintenance and operations

• Monitor and troubleshoot data clusters

• Plan Hadoop cluster capacity

• Handle OS integration and application installation

• Partner with program management, network engineering, site reliability operations, and other related groups

• Participate in a 24x7 on-call rotation for escalations

Educational Requirements:

• Bachelor's degree in Information/Computer Science or a related field, or equivalent professional experience

Required Skills:

• Solid understanding of UNIX and networking fundamentals

• Expertise with Hadoop and its ecosystem (Hive, Pig, Spark, HDFS, HBase, Oozie, Sqoop, Flume, ZooKeeper, Kerberos, Sentry, Impala, etc.)

• Experience designing multi-tenant, containerized Hadoop architectures that manage and share memory/CPU across different LOBs

• Experience managing clustered services, secure distributed systems, and production data stores

• Experience administering and operating Hadoop clusters

• Cloudera CDH4/CDH5 cluster management and capacity planning experience

• Ability to learn new languages, frameworks, and APIs quickly

• Experience scripting for automation and configuration management (Chef, Puppet)

• Multi-datacenter, multi-tenant deployment experience a plus

• Strong troubleshooting skills with exposure to large-scale production systems

• Hands-on development experience and high proficiency in Java/Python

• Skilled in data analysis, profiling, data quality, and processing to create visualizations

• Experience working with Agile methodology


1st shift (United States of America)

Full time


Manages People: No

Travel: No
