Architect II

Charlotte, North Carolina

Job Description:

The Data Science and Analytics Platform Client Services (DCS) team is seeking a Hadoop Developer/SME to provide technical and administrative support for Linux, cloud, and Hadoop platforms in a fast-paced operations environment supporting business-critical applications built on HDFS, Sentry, Impala, Spark, and Hive. The analyst will troubleshoot Hadoop ecosystem service issues, perform performance analysis, ensure security, and develop and test Unix shell scripts and Perl and Java code required for Hadoop administration and the associated core Hadoop ecosystem. The candidate will also be involved in setting up and transitioning Hadoop application tenants from bare-metal Hadoop clusters to virtual machines.

Responsibilities Include:

• Proven understanding of Cloudera Hadoop, Impala, Hive, HBase, Sqoop, Apache Spark, NiFi, security (Sentry/Ranger), metadata management (Navigator/Atlas), etc.

• Administer clusters; troubleshoot, isolate, and correct problems discovered in them

• Performance tuning of Hadoop clusters, ecosystem components, and jobs, including management and review of Hadoop log files

•  Provide code deployment support for Test and Production environments

• Diagnose and address database performance issues using performance monitors and various tuning techniques

• Interact with Storage and Systems administrators on Linux/Unix/VM operating systems and Hadoop Ecosystems

• Troubleshoot platform problems and connectivity issues

• Document programming problems and resolutions for future reference

• Demonstrated ability with core Hadoop components (e.g., Hadoop, Hive, Oozie, HDFS, Unix shell, Java)

• Experience with cloud technologies (AWS, Google Cloud, Azure)

• Understanding of hypervisor and container concepts

• Understanding of Master Data Management concepts

• Experience with ETL tools (e.g., Informatica, DMX-H) and relational databases (e.g., SQL, Teradata, Oracle, DB2)

Required Skills

• Ability to work well both in a team and independently with minimal supervision

• Scripting/programming/development experience - Shell/Python/Java

• Excellent communication and project management skills

• Bachelor’s degree in Information Technology, Engineering, Computer Science, or a related field, or equivalent work experience

• At least 3 years of experience with Big Data technologies and concepts

• At least 3 years of experience in a large Data Warehouse environment

• At least 3 years of experience in Linux/Unix

• At least 2 years of experience with scheduling tools such as Autosys

• Experience with developer tools for code management, ticket management, performance monitoring, and automated testing

• Strong understanding of Data Warehousing concepts

• Solid understanding of and experience with Agile processes and vernacular

• Solid understanding of and experience with CI/CD concepts

• Deep knowledge of industry-standard, enterprise-class best practices for a large distributed file system (DFS) environment

• Solid understanding of the DevOps model

• Good understanding of Linux/VM platforms and information security

Job Band:

H5

Shift: 

1st shift (United States of America)

Hours Per Week:

40

Weekly Schedule:

Referral Bonus Amount:

0

Full time

JR-21044089

Band: H5

Manages People: No

Travel: Yes, 5% of the time

Talent Acquisition Contact:

Sarah Rogers

Referral Bonus:

0

Street Address

Primary Location:
900 W Trade St, Charlotte, NC 28255