Hadoop Lead Engineer

Bank of America - Charlotte, NC - Full Time

Job Description:

Position Summary

We are searching for a Feature Lead Hadoop Engineer who will be responsible for the development and support of the AML consumer platform. The position requires someone with a combination of data and development skills and the ability to understand the complex nature of regulatory platforms. The individual will lead and work with the other members of two small Agile teams and will be responsible for development and application/level-3 support.

The Feature Lead will be expected to perform analysis in support of development, test, and production activities; to lead and perform limited data modeling and analysis activities (data mapping, data feed analysis/profiling, data transformation definition, etc.); to work with line-of-business analysts and quants to understand and capture new requirements; and to conduct user acceptance testing. The Feature Lead will also coordinate with the DevOps team on releasing items into production and will oversee non-production environments. The position requires a minimum of 6-8 years of applicable experience.

The Feature Lead needs knowledge of database structures, theories, principles, and practices; experience in Java; knowledge of Hadoop concepts (HDFS, HBase, Spark SQL, and Spark/Scala, or prior MapReduce experience); and the ability to write Spark/Scala RDD and SQL jobs. Also expected are a proven understanding of Hadoop, Spark, Hive, and HBase, the ability to write shell scripts, familiarity with data loading tools such as Sqoop, Flume, and Kafka, knowledge of workflow schedulers such as Oozie, a good grasp of multi-threading and concurrency concepts, and experience loading data from disparate data sources. Certifications such as Cloudera Developer (CCA175), Hortonworks Developer (Spark & Hive), or administrator certifications are an added advantage. Hands-on experience with at least two NoSQL databases is required, along with the ability to analyze an existing cluster, identify issues, suggest architectural design changes, and implement data governance in Hadoop clusters.
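
To make the Spark/Scala expectation concrete, the following is a minimal sketch of the kind of Spark SQL / RDD job the role describes. It is illustrative only: the Hive table aml.customer_transactions, its columns (customer_id, txn_date, amount), the 10,000 threshold, and the output path are hypothetical placeholders, not details of the actual platform.

  // Minimal illustrative Spark/Scala job: aggregate with Spark SQL / DataFrames,
  // then apply an RDD-style transformation. All table, column, and path names
  // below are hypothetical.
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions._

  object DailyTotalsJob {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("DailyTotalsJob")
        .enableHiveSupport()          // allows reading Hive-managed tables
        .getOrCreate()

      // Load a (hypothetical) Hive table of transactions.
      val txns = spark.table("aml.customer_transactions")

      // DataFrame / Spark SQL aggregation: totals per customer per day.
      val dailyTotals = txns
        .groupBy(col("customer_id"), col("txn_date"))
        .agg(sum(col("amount")).as("total_amount"),
             count(lit(1)).as("txn_count"))

      // RDD-style transformation over the same result, for illustration.
      // Assumes the amount column is stored as a double.
      val flaggedCustomers = dailyTotals.rdd
        .filter(row => row.getAs[Double]("total_amount") > 10000.0)
        .map(row => row.getAs[String]("customer_id"))
        .distinct()
      println(s"Customers over threshold: ${flaggedCustomers.count()}")

      // Persist results for downstream feeds (e.g., an Oozie- or AutoSys-scheduled consumer).
      dailyTotals.write.mode("overwrite").parquet("/data/aml/daily_totals")

      spark.stop()
    }
  }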

Required Skills

  • 6+ years of Java development and release experience.

  • 4+ years of advanced SQL / Spark SQL programming.

  • 4+ years of Hadoop development & platform experience.

  • 4+ years of leadership experience.

  • 3+ years of Scala experience.

  • 2+ years of Spark RDD/DataFrame/Dataset experience.

  • Basic Unix OS skills and shell scripting.

  • Agile methodology experience.

  • Experience using Sqoop to ingest data from external RDBMSs.

  • AutoSys scheduler experience.

  • Experience with JIRA or an alternative issue tracker.

  • Well versed in a Linux environment.

  • Extensive experience in application development.

  • Excellent analytical and process-based skills, e.g., process flow diagrams and functional diagrams.

  • History of delivering against agreed objectives.

  • Demonstrated problem-solving skills.

  • Ability to pick up new concepts and apply your knowledge.

  • Ability to coordinate competing priorities and drive teamwork.

  • Ability to work in diverse team environments, both local and remote.

  • Strong communication skills, both verbal and written.

  • Ability to work with minimal supervision.

  • Ability to collaborate with business analysts and the line of business as needed.

Desired Skills

  • ETL experience.

  • Exposure to data analytics tools and languages.

  • Experience with big data source control management (e.g., Bitbucket) or job scheduling (e.g., AutoSys).

  • Experience working with data scientists or analytical business users will be beneficial.

  • Experience in Python programming.

Job Band:

H5

Shift: 

1st shift (United States of America)

Hours Per Week:

40

Weekly Schedule:

Referral Bonus Amount:

0

Recommended Skills

  • Apache Hive
  • Apache Spark
  • Apache Flume
  • Big Data
  • Apache HBase
  • MapReduce

Job ID: 21024599
