Senior Data Engineer
Full-time / Direct Hire ONLY
The ideal candidate will be passionate about data, software development, and analytics, with first-hand experience collecting, storing, processing, and analyzing very large data sets. The candidate will have hands-on experience implementing data lakes, real-time and batch data movement, modern data warehousing solutions, master data management, data analytics, advanced analytics, business intelligence, and API development.
· Ability to design, build, and maintain the ETL pipeline and data warehouse.
· Knowledge of SQL and databases, Hadoop-based analytics, and the ability to create and integrate APIs.
· Proficiency with programming languages, scripting, reporting, and data visualization tools.
· Demonstrate expertise in data modeling and query performance tuning on SQL Server, Snowflake or similar platforms.
· Design, construct, install, test, and maintain data management systems
· Implement ETL processes to analyze and import data from existing data sources
· Build data pipelines for ingestion, processing, and surfacing of data for large-scale applications
· Research new uses for existing data
· Use many different scripting languages and tools to connect systems together
· Research and discover new methods to acquire data
· Work with other members of the data team, including data architects, data analysts, and data scientists
· Select and integrate Big Data tools and frameworks required to provide requested capabilities
This is a direct-hire role ONLY for a Charlotte, NC based client; the salary range is $120,000 - $140,000. Candidates within a 200-mile radius of Charlotte, NC are preferred for occasional onsite requirements, but candidates in the EST and CST time zones will be considered.
Main Duties/Required Skills:
Very strong in SQL
UNIX/LINUX commands and shell scripting
Previous experience in data engineering including big data and cloud technologies
Strong in Python, API integrations, Data warehousing concepts, ETL design/development
Good knowledge of Big Data querying tools
Experience with integration of data from multiple data sources
Knowledge of various ETL techniques and frameworks
Experience with various messaging systems, such as Kafka or RabbitMQ
Experience with building stream-processing systems
Experience with Big Data ML toolkits
Good understanding of GCP Architecture, along with its advantages and drawbacks
Proficient understanding of distributed computing principles
Intellectual curiosity to find new and unusual ways to solve data management issues
Ability to work in a fast-paced environment and manage multiple priorities in parallel
Nice to have Skills:
Advanced degree in mathematics or data science
Experience in Snowflake, GCP BQ, DBT, Advanced Analytics/Data Science
7-10+ years of relevant experience
Experience with modern cloud-based data pipelines, data modeling, data management and governance, and data architecture is also a plus.
Bachelor’s Degree Requirement: Yes