Data Engineer II

O'Reilly Auto Parts | Springfield | Full-Time
Tentative Schedule: Monday - Friday, 8:00 a.m. - 5:00 p.m.

This position is responsible for designing, evaluating, and creating systems to support data science projects across the O'Reilly organization, as well as expanding and optimizing our data and data pipeline architecture. This includes data cleansing, preparation, and ETL. The ideal candidate will identify and work with the appropriate technology and software engineering solutions to facilitate machine learning and analytic pipeline deployment.
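
As a rough illustration of the day-to-day work described above, the sketch below shows a minimal ETL step feeding a model pipeline, using the Pandas and scikit-learn libraries named in the qualifications; the file path, column names, and model choice are hypothetical placeholders.

# Illustrative sketch only: a small ETL step plus a scikit-learn model pipeline.
# The source file, column names, and target below are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Extract: read raw records from a source extract.
raw = pd.read_csv("store_sales_extract.csv")

# Transform: basic cleansing (drop duplicates, fill missing numerics).
clean = raw.drop_duplicates()
clean = clean.assign(units_sold=clean["units_sold"].fillna(0))

# Prepare features and a hypothetical binary target.
features = clean[["region", "units_sold", "avg_ticket"]]
target = clean["reordered"]

# Encode categoricals, scale numerics, then fit and evaluate a simple model.
preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["region"]),
    ("num", StandardScaler(), ["units_sold", "avg_ticket"]),
])
pipeline = Pipeline([("prep", preprocess), ("model", RandomForestClassifier())])

X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2)
pipeline.fit(X_train, y_train)
print("holdout accuracy:", pipeline.score(X_test, y_test))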

Essential Job Functions

• Move, structure, encode, and condense data from disparate database systems and formats.
• Identify, design, and implement internal process improvements, such as automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
• Create data tools that help analytics and data science team members build and optimize solutions, positioning the company as an innovative industry leader.
• Evaluate the performance of machine learning systems and work with data scientists to improve quality.
• Build processes supporting data transformation, data structures, metadata, dependency management, and workload management.
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
• Develop software solutions with a focus on maintainability and modularity.

Skills and Qualifications

• Bachelor's degree.
• 2+ years of practical experience with ETL, data processing, database programming, and data analytics.
• Strong knowledge of Python, including Pandas, NumPy, and scikit-learn, and experience with notebooks such as Jupyter.
• Demonstrable knowledge of software design and engineering best practices.
• Experience working with large-scale distributed data systems.
• Excellent written and verbal communication skills.
• Desire to work in a dynamic and collaborative environment.
 

Recommended skills

Scikit Learn
Jupyter
Pandas
Data Science
Data Pipeline
Metadata

CareerBuilder Estimated Salary

$51K (based on job title, location, and skills)

Job ID: 178189
