Big Data Engineer

Veterans Sourcing Group

Johns Creek, GA

Big Data Engineer
Alpharetta, GA - hybrid
12 months+


MUST HAVES
Hadoop experience, big data, hands on with Spark and Unix, Python shell scripting
Working knowledge of GitHub, DevOps, and CI/CD pipelines
Strong hands-on SQL

Highlight on the resume how the candidate has used these tools, for the hiring manager
50% supporting existing Hadoop production systems (hands-on)
50% migration framework work: migrating Hadoop to Snowflake; must understand the existing frameworks

NICE TO HAVES
7+ years of experience
Snowflake and AWS are a plus
IDMC or any other ETL tool is a plus
Financial services experience is highly desirable for this role

CPS Data CoE - Big Data Engineer

The department is comprised of 10 organizations: Sales, Banking & Corporate-Client Technology, Investment Products & Markets Technology, Client Reporting, Core Processing, Private and International Wealth Management Technology, Technology Integration Office, Enterprise Infrastructure & Production Management, Capital Markets Application & Data Services, Deployment Planning & Release Management, and the Chief Operating Office.

Position Description:
This position is for a Big Data Engineer on the Brokerage Wealth Management Framework CoE team, based at Brokerage's Alpharetta or New York offices.
The CoE team is responsible for defining and governing the data platforms.
We are looking for colleagues with a strong sense of ownership and the ability to drive solutions.
The role is primarily responsible for automating existing processes and bringing new ideas and innovation.
The candidate is expected to code, conduct code reviews, and test frameworks as needed, along with participating in application architecture, design, and other phases of the automation.
The ideal candidate will be a self-motivated team player committed to delivering on time and able to work with minimal supervision.

Responsibilities
* Design and develop new automation frameworks for ETL processing
* Support the existing framework and serve as technical point of contact for all related teams
* Enhance the existing ETL automation framework per user requirements
* Performance tuning of Spark and Snowflake ETL jobs
* New-technology POCs and suitability analysis for cloud migration
* Process optimization through automation and new utility development
* Collaborate on issues and new features
* Support resolution of batch issues
* Support application teams with any queries

Required Skills
* 7+ years of Data engineering experience
* Strong UNIX shell and Python scripting skills
* Must be strong in Spark
* Must have strong knowledge of SQL
* Hands-on knowledge of how HDFS/Hive/Impala/Spark work
* Strong logical reasoning capabilities
* Working knowledge of GitHub, DevOps, CI/CD, and enterprise code management tools
* Strong collaboration and communication skills
* Must possess strong team-player skills and should have excellent written and verbal communication skills
* Ability to create and maintain a positive environment of shared success
* Ability to execute and prioritize tasks and resolve issues without aid from a direct manager or project sponsor
* Working experience with Snowflake and any data integration tool (e.g., Informatica Cloud) is a plus

Primary skills: Python, Big Data & Apache Spark

Desired skills
* Snowflake/Azure/AWS or any cloud platform
* IDMC or any ETL tool

About the Company

Veterans Sourcing Group