Data Systems Engineer - ELK/Kafka/Linux
Alpharetta, GA or Menlo Park, CA - Hybrid 3 Days a Week Onsite
12 months+
Interview Process:
- Technical background screening - check Linux experience (team member, 1 hr)
- Technical panel (in person)
Bachelor's Degree: Not a hard requirement but a BIG PLUS
Industry Background: Plus
Years of experience: 7-15 years Exp
Must:
- Strong working knowledge of the Linux platform
- Application development experience - Python (#1), Ruby or Shell (#2)
- Able to run and debug applications within Linux, and to understand their resource usage (memory, CPU, etc.) at runtime
- ELK (Elasticsearch) experience
- Experience building pipelines with Kafka and ELK
- Strong communication, a team player, open to learning and getting their hands dirty
- Ability to learn - curiosity
Plus:
- Flink
- Snowflake Database Exp
- Spark Data processing
- Good Data Analysis Background
- ELK Certification
- Observability and Data Analysis
D2D:
- Will not only do pipeline development but also work in a large-scale cluster setting
- Requires an understanding of scalability in the Kafka environment
- Will ensure jobs are up and running, debug them, and support the hundreds of customers who use the system
- Be the go-to person supporting those users
- Pipeline development using Kafka, ELK and Hycon
- Strong knowledge of Linux needed, because that is where they deploy their jobs by the hundreds
- The pipeline runs within Kafka and almost all the data is written into ELK (must understand how the data is stored)
- This is a diverse background!
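The pipeline duties above can be sketched as a minimal Kafka-to-Elasticsearch bridge in Python (the role's #1 language). This is an illustrative sketch only, not the team's actual code: the topic name, index name, and endpoints are assumptions, and it uses the common `kafka-python` and `elasticsearch` client libraries.

```python
import json

def to_es_action(record_value: bytes, index: str = "app-logs"):
    """Map one raw Kafka record to an Elasticsearch bulk action.

    The index name and JSON payload shape are illustrative assumptions."""
    return {"_index": index, "_source": json.loads(record_value)}

def run_pipeline():
    """Sketch of the consume-transform-index loop.

    Requires a live Kafka broker and Elasticsearch cluster; the topic
    name and addresses below are placeholders."""
    from kafka import KafkaConsumer                 # kafka-python client
    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import bulk

    consumer = KafkaConsumer(
        "events",                                   # assumed topic name
        bootstrap_servers="localhost:9092",
        enable_auto_commit=False,                   # commit only after indexing
    )
    es = Elasticsearch("http://localhost:9200")
    while True:
        batch = consumer.poll(timeout_ms=1000)      # {} when no records arrive
        actions = [to_es_action(rec.value)
                   for recs in batch.values() for rec in recs]
        if actions:
            bulk(es, actions)                       # write the batch into ELK
            consumer.commit()                       # at-least-once delivery

# run_pipeline()  # uncomment only with live Kafka + Elasticsearch endpoints
```

Committing offsets only after a successful bulk index gives at-least-once delivery, which matters at the scale of clusters described above.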
Misc:
- A candidate who is a data engineer also needs to be a data engineer/application developer
- Needs core application development skills, not only shell/Python scripting
- The team works directly on a Linux platform (very important)
- Will do Development, Deployment and Support
Struggles while reviewing resumes:
- Sees a lot of cookie-cutter resumes that match the JD to a "T"