OMNI Feature Lead - Core Technology Infrastructure

Charlotte, North Carolina

Job Description:

The Enterprise Data Platform Service OMNI initiative is looking for a Feature Lead who will be responsible for development of the OMNI Big Data Platform. The ideal candidate has hands-on development experience in Java, Python, or the Cloudera data environment, will work with agile teams, and will be responsible for all aspects of the SDLC. Ideally, the candidate has worked on large-scale data transformation initiatives and has experience with the risk data analytics required to transform and test data on such initiatives.

A successful feature lead has:

• Experience developing high-performance, large-scale data processing and data warehousing applications.

• Experience with Java/Hadoop/HDFS/Spark concepts and the ability to write Spark Dataset, DataFrame, JDBC, and HiveQL jobs (see the sketch after this list).

• Proven understanding of Hadoop, Spark, and Hive, and the ability to write shell scripts.

• Familiarity with data loading tools such as Sqoop, Flume, and Kafka, and knowledge of workflow schedulers such as Oozie.

• Good grasp of multithreading and concurrency concepts, and experience loading data from disparate data sources.

• Hands-on experience with NoSQL databases, and the ability to analyze an existing cluster, identify issues, and suggest architectural design changes.

• Working knowledge of relational databases such as Oracle, Teradata, Exadata, and Netezza.

• Knowledge of and ability to implement data management and governance on the Hadoop platform.

• Well versed in the Linux environment

• Extensive experience in application development

• Excellent skills in analysis, business process flows, design, and diagramming

• Strong collaboration and teamwork skills

• Proven history of delivering against agreed objectives

• Demonstrated problem-solving skills

• Ability to pick up new concepts and apply them

• Ability to coordinate competing priorities

• Ability to work in diverse team environments, both local and remote

• Ability to work with minimal supervision
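
For illustration only, here is a minimal Scala sketch of the kind of Spark Dataset/DataFrame, JDBC, and HiveQL job described above. The JDBC URL, credentials, table names (risk.positions, ref.desks, curated.positions_enriched), and columns are hypothetical placeholders, not an actual OMNI schema.

    // Minimal Spark job sketch: read a relational table over JDBC, enrich it
    // with Hive reference data via HiveQL, and write a partitioned Hive table.
    // All connection details, tables, and columns below are illustrative only.
    import org.apache.spark.sql.{DataFrame, SparkSession}

    object RiskPositionLoad {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("risk-position-load")
          .enableHiveSupport() // needed for HiveQL and Hive-managed tables
          .getOrCreate()

        // DataFrame loaded from a relational source over JDBC (hypothetical Oracle URL).
        val positions: DataFrame = spark.read
          .format("jdbc")
          .option("url", "jdbc:oracle:thin:@//dbhost:1521/RISKDB")
          .option("dbtable", "risk.positions")
          .option("user", sys.env("DB_USER"))
          .option("password", sys.env("DB_PASSWORD"))
          .load()

        positions.createOrReplaceTempView("positions_stg")

        // HiveQL: enrich the staged positions with reference data already in Hive.
        val enriched = spark.sql(
          """SELECT p.position_id, p.notional, r.desk_name, p.as_of_date
            |FROM positions_stg p
            |JOIN ref.desks r ON p.desk_id = r.desk_id""".stripMargin)

        // Persist as a partitioned Hive table for downstream consumers.
        enriched.write
          .mode("overwrite")
          .partitionBy("as_of_date")
          .saveAsTable("curated.positions_enriched")

        spark.stop()
      }
    }

In practice, a job like this would typically be packaged as a JAR and launched with spark-submit on a Cloudera/YARN cluster, with scheduling handled by a workflow tool such as Oozie or Autosys (see the desired skills below).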

Required Skills

Hands-on experience in the following areas:

• Object-oriented programming

• Large-scale parallel data processing in a grid environment

• Data warehousing and data modeling

• Python, Java / C++

• SQL

• Shell Scripting

• Capital markets product knowledge

• SDLC methodologies: Agile, Scrum, Waterfall

Desired Skills

• Scala, PySpark

• Relational databases, e.g., Teradata, Sybase, Netezza

• Hadoop ecosystem

• Spark, Spark SQL

• Hive on Tez, Hive 3.x

• Impala, Oozie, Kafka, HBase

• RESTful services

• Job schedulers (e.g., Autosys)

Job Band:

H5

Shift: 

1st shift (United States of America)

Hours Per Week:

40

Weekly Schedule:

Referral Bonus Amount:

0

Full time

JR-21064182

Band: H5

Manages People: No

Travel: Yes, 5% of the time

Manager:

Talent Acquisition Contact:

Kathleen Jones-Griffith

Referral Bonus:

0