Job Requirements

Job Description

Detailed overview of functional and technical role expectations:

- Design and develop cutting-edge data solutions using existing and emerging technology platforms

- Leverage sound judgment and problem solving to tackle some of the most critical data problems, and connect the dots to the broader implications of the work

- Effectively communicate with key stakeholders

- Design, architect, and help implement data flow processes that help engineering teams comply with standards, processes, policies, and procedures

Preferred Skills

- Bachelor's/Master's in Computer Science or a related discipline, with strong technical leadership in software development

- In-depth knowledge of Apache Spark or the Hadoop ecosystem and relevant tools: Sqoop, Flume, Kinesis, Oozie, Hue, ZooKeeper, Ranger, Elasticsearch, and Avro

- Experience building or migrating data pipelines for on-prem Hadoop

- Experience building or migrating data pipelines on Google Cloud or AWS public cloud platforms

- Experience with continuous integration and continuous deployment (CI/CD) using Maven, Jenkins, Docker, and Kubernetes

- Good understanding of microservice architecture and deployment in cloud, on-premise, and hybrid environments

- Ability to write data ingestion ETL code and data analysis code using ETL tools: Informatica BDE, Hadoop MapReduce jobs, Hive queries, and Spark jobs

- Experience supporting large, complex data sets and creating knowledge data sets and analytic reports


Big Data Engineer with GCP

Ztek Consulting Inc | Work From Home, GA | Contractor
$120,000.00 - $169,375.00 / year


Job Title: Big Data Engineer with GCP
Job Type: Contract
Location: St. Louis/Minneapolis (relocation after COVID)


Recommended Skills

AWS
Big Data
ETL
GCP
Spark