Help us use technology to bring ingenuity, simplicity, and humanity to banking. To spark this up, we're seeking a hands-on Data Engineer who can design, code, and provide architecture solutions for the team.
In this role, you will be responsible for building generic data pipelines and frameworks using open-source tools on public cloud platforms. The right candidate for this role is someone who is passionate about technology, interacts well with product owners and technical stakeholders, thrives under pressure, and is hyper-focused on delivering exceptional results as part of a team. The candidate will have the opportunity to influence and interact with fellow technologists beyond their team and with technology partners across the enterprise.
The Job & Expectations:
- Partner with product owners, peers, and end users to understand business requirements
- Provide technical guidance concerning business implications of application development projects
- Strong ETL programming skills in Python, Spark, and Scala
- Experience with cloud computing, preferably AWS
- Exposure to AWS-native technologies (S3, EMR, EC2, and Lambda)
- Leverage DevOps techniques and practices such as Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, using tools such as Jenkins, Nexus, Git, and Docker
- Ability to handle multiple responsibilities in an unstructured environment where you're empowered to make a difference. In that context, you will be expected to research and develop cutting-edge technologies to accomplish your goals.
- Experience working in an agile environment
- Bachelor's Degree or military experience
- At least 1 year of experience developing, deploying, testing in AWS public cloud
- At least 1 year of experience in systems analysis
- At least 3 years of professional work experience delivering big data solutions using open-source
- At least 5 years of professional work experience in large scale data projects
- At least 4 years of professional work experience in data management & data engineering
- 3+ years' experience in at least one scripting language (Python, Scala, or Spark)
- 5+ years' experience developing software solutions to solve business problems
- 1+ year of experience in AWS Cloud computing
- 2+ years of experience in an Agile delivery environment
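To give a flavor of the pipeline work described above, here is a minimal sketch of an S3-triggered AWS Lambda step that routes incoming files to a downstream location. All bucket names and key prefixes are hypothetical illustrations, not the team's actual code; in a real pipeline the derived paths would typically feed a Spark job on EMR.

```python
import json
import urllib.parse

def lambda_handler(event, context):
    """Sketch of an S3-event Lambda step in a data pipeline.

    Bucket/prefix names are hypothetical; this only demonstrates
    the event-parsing and routing shape of such a function.
    """
    outputs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Route raw landing-zone files to a "processed/" prefix.
        outputs.append({
            "source": f"s3://{bucket}/{key}",
            "target": f"s3://{bucket}/processed/{key.split('/')[-1]}",
        })
    return {"statusCode": 200, "body": json.dumps(outputs)}
```

Invoking the handler with a sample S3 event notification (rather than deploying it) is a common way to unit-test this kind of step locally.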
Keywords: Spark, Scala, Python, Snowflake, proof of concepts, Amazon Elastic Compute Cloud (EC2)