Sr. Hadoop Developer
Employment Type: Full-Time
Location: Beaverton, OR (Onsite)
Typically requires a Bachelor's degree and a minimum of 5 years of directly relevant work experience.
The client is building a big data platform for Consumer Digital on a Hadoop Distributed File System (HDFS) cluster. As a Sr. Hadoop Developer, you will work with a variety of talented client teammates and be a driving force in building solutions. You will work on development projects related to commerce and web analytics.
Responsibilities:
- Design and implement MapReduce jobs to support distributed processing using Java, Cascading, Python, Hive, and Pig; design and implement end-to-end solutions.
- Build libraries, user-defined functions, and frameworks around Hadoop.
- Research, evaluate, and utilize new technologies/tools/frameworks around the Hadoop ecosystem.
- Develop user-defined functions to provide custom Hive and Pig capabilities.
- Define and build data acquisition and consumption strategies.
- Define & develop best practices.
- Work with support teams in resolving operational & performance issues.
- Work with architecture/engineering leads and other teams on capacity planning.
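To illustrate the kind of MapReduce work the responsibilities above describe, here is a minimal word-count sketch in the Hadoop Streaming style. This is a hypothetical standalone example, not part of the posting: a real job would run the map and reduce phases under Hadoop, while this version reproduces the same map/shuffle/reduce logic in plain Python.

```python
import sys
from itertools import groupby

def map_words(lines):
    """Mapper: emit a (word, 1) pair for every word in the input lines."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_counts(pairs):
    """Shuffle + reducer: group the pairs by word and sum the counts,
    mirroring what Hadoop does between the map and reduce phases."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Read lines from stdin, as a Hadoop Streaming mapper would.
    counts = dict(reduce_counts(map_words(sys.stdin)))
    for word in sorted(counts):
        print(f"{word}\t{counts[word]}")
```

Under Hadoop Streaming the mapper and reducer would be supplied as separate scripts to `hadoop jar hadoop-streaming.jar`; equivalent logic could also be expressed declaratively as a `GROUP BY` in Hive or Pig.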
Qualifications:
• MS/BS degree in a computer science field or related discipline.
• 6+ years’ experience in large-scale software development.
• 1+ years of experience with Hadoop.
• Strong Java programming, shell scripting, Python, and SQL skills.
• Strong development skills around Hadoop, MapReduce, Hive, Pig, and Impala.
• Strong understanding of Hadoop internals.
• Good understanding of Avro, JSON, and other serialization and compression formats.
• Experience with build tools such as Maven.
• Experience with databases like Oracle.
• Experience with performance/scalability tuning, algorithms, and computational complexity.
• Experience (at least familiarity) with data warehousing, dimensional modeling, and ETL development.
• Ability to understand ERDs and relational database schemas.
• Proven ability to work with cross-functional teams to deliver appropriate resolutions.
Nice to have:
- Experience with open source NoSQL technologies such as HBase and Cassandra.
- Experience with messaging & complex event processing systems such as Kafka and Storm.
- Experience with machine learning frameworks.
- Statistical analysis with Python, R, or similar.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Recommended Skills
- Algorithms
- Apache HBase
- Apache Hadoop
- Apache Hive
- Apache Kafka
- Apache Maven
Job ID: kwysmxn