Big Data Engineer – Spark / Kafka / Java
Big Data Engineer with expertise in building solutions for processing large volumes of data using Kafka, Spark, and Big Data platforms.
At least 5 years of proven experience designing and developing processes around Big Data platforms.
Hadoop distribution: Cloudera Hadoop. Hands-on experience with Big Data solutions in the Hadoop ecosystem, including Kafka and Spark with Java.
Hadoop framework experience: Hive, Impala, Kerberos, HDFS, Kafka, Spark
Understanding of cluster and parallel architectures as well as high-scale or distributed RDBMS; SQL experience
Extensive hands-on experience with major programming/scripting languages such as Java on Linux; Python/Scala nice to have
At least 3 years of experience with Cloudera, Hive, and Spark, spanning development, operation and maintenance, and the design/architecture of a secure Big Data environment
Proficient with SQL, including complex SQL tuning
Good understanding of Data Integration, Data Quality, and Data Architecture
Other professional skills desired:
- Ability to work in multidisciplinary teams
- Self-management
- Ability to perform detailed analysis of business problems and apply it to the design of Big Data solutions
- Ability to work creatively in a problem-solving environment
- Ability to analyze, process, and visualize large volumes of data
Given the constant flux of the big data analytics market, new tools will likely be introduced, so the ability to learn new technology, and enthusiasm for doing so, is also important.