
Data Engineer I A - GBS IND

Mumbai, India

Job Description:

Overview (Bank overview, GBS India overview, Function Overview)*

Bank of America is one of the world’s leading financial institutions, serving individual consumers, small and middle-market businesses and large corporations with a full range of banking, investing, asset management and other financial and risk management products and services.

We are committed to attracting and retaining top talent across the globe to ensure our continued success. Along with taking care of our customers, we want to be the best place for people to work, and we aim to create a work environment where all employees have the opportunity to achieve their goals.

We are part of Global Business Services, which delivers technology and operations capabilities to all Bank of America lines of business (LOBs) and enterprise functions.

Our employees help our customers and clients at every stage of their financial lives, helping them connect to what matters most. This purpose defines and unites us. Every day, we focus on delivering value, convenience, expertise and innovation to the individuals, businesses and institutional investors we serve worldwide.

* BA Continuum is a nonbank subsidiary of Bank of America, part of Global Business Services in the bank.

Process Overview*

DAIT (Data Analytics and Insights Technology) leads the development of the next generation of data analytics technology solutions for the company’s consumer and wealth management client-facing channels.

Job Description*

The candidate should be able to quickly understand the current architecture and provide development and maintenance support. The resource should have strong SQL coding/database experience, preferably with SQL Server. The associate should understand the client/server architecture and provide development solutions for existing applications and new requirements using the MS .NET framework. These coding skills will help with risk assessment, application research, and documentation of current processes and migrations.

Responsibilities*

  • Use Teradata and Unix to write code for data migration system enhancements; work with code repository tools and automated testing tools.
  • Work on SAS code, with an understanding of SAS development.
  • Work on Hadoop development using tools such as Hive, Sqoop, Spark and Impala.
  • Perform all development activities, including designing, coding, testing, debugging, documenting, and communicating with the team and outside application users about the programs. This also includes coordinating the installation of computer programs and systems.
  • Develop and support the model scoring application that implements and executes models in batch services. These applications interact with several bank channels and enterprise data stores, including Teradata and the Hadoop data lake.
  • Provide research and analytics for data issues impacting model scoring output; assist modelers and LOB partners in remediating the existing model scoring process and obtaining approval from model governance.
  • Monitor computer programs and applications, and troubleshoot program and system malfunctions to restore normal functioning.
  • Write permanent test scripts for regression testing to ensure that current modules are not impacted by future changes.
  • Understand Agile methodologies and work independently in sprints.
  • Participate in Agile calls and provide daily status updates and suggestions.
  • Conduct root cause analysis and troubleshoot multiple complex issues.

Requirements*

Education*: BE or MCA or as per company standards

Certifications (if any): Not mandatory.

Experience Range*:  2 - 4 Years

Foundational skills*:

  • Teradata, DB2, Oracle, MS-SQL, and a solid understanding of SQL: data modelling, complex queries, optimization, scalability considerations and fine-tuning.
  • Hands-on experience in Unix/Linux operating system environments: intermediate-level shell scripting and familiarity with UNIX text processing tools such as grep, sed and awk.
  • Strong ETL development experience using tools from the Hadoop ecosystem, such as Sqoop, Hive and Spark.
  • Exposure to SDLC tools and process automation: experience with Bitbucket and Ansible for code repository and deployment, and with Autosys for process automation.
  • Intensive work on performance tuning and query optimization.
  • Good experience in design and development.
  • Excellent troubleshooting skills.

Desired skills*

  • Any experience with advanced Spark and handling large datasets is an advantage.
  • Knowledge of Autosys or another scheduling tool.
  • Hands-on Tableau reporting experience would be an added advantage.

Work Timings*

  • 11 AM – 8 PM IST
  • Ready to work on production support (if required)
  • Ready to work on weekends (if required)

Job Location: Mumbai, Chennai, Gurugram


Full time

JR-21081740

Band: H7

Manages People:

Manager:

Talent Acquisition Contact:

Syed Jung

Referral Bonus:

0