Senior Data Engineer
Apidel Technologies
Remote
84.51 Overview:
84.51 is a retail data science, insights and media company. We help consumer packaged goods companies, agencies, publishers and affiliates create more personalized and valuable experiences for shoppers across the path to purchase.
Powered by cutting-edge science, we utilize first-party retail data from more than 62 million U.S. households sourced through the Plus loyalty card program to fuel a more customer-centric journey using 84.51 Insights, 84.51 Loyalty Marketing and our retail media advertising solution, Precision Marketing.
Join us at 84.51!
Your role will involve collaboration with cross-functional teams, leveraging cutting-edge technologies, and ensuring scalable, efficient, and secure data engineering practices.
Requirements:
Bachelor's degree, typically in Computer Science, Management Information Systems, Mathematics, Business Analytics or another STEM field.
Proven experience working within cross-functional teams and managing complex projects from inception to completion.
3+ years of professional data development experience.
3+ years of experience with SQL and NoSQL technologies.
2+ years of experience building and maintaining data pipelines and workflows.
2+ years of experience developing with Python and PySpark.
Experience developing within Databricks.
Experience with CI/CD pipelines and processes.
Experience with automated unit, integration, and performance testing.
Experience with version control software such as Git.
Full understanding of ETL and Data Warehousing concepts.
Strong understanding of Agile principles (Scrum).
Preferred Qualifications
Experience with Snowflake.
Experience in building out marketing cleanrooms.
Knowledge of Structured Streaming (Spark, Kafka, Event Hub, or similar technologies).
Experience with GitHub SaaS/GitHub Actions.
Experience with Service Oriented Architecture.
Experience with containerization technologies such as Docker and Kubernetes.
Key Responsibilities
Take ownership of systems, processes, and the tech stack while driving features to completion through all phases of the 84.51 SDLC. This includes internal and external facing applications as well as process improvement activities:
Provide Technical Leadership: Offer technical leadership to ensure alignment across ongoing projects and facilitate collaboration across teams to solve complex data engineering challenges.
Build and Maintain Data Pipelines: Design, build, and maintain scalable, efficient, and reliable data pipelines to support data ingestion, transformation, and integration across diverse sources and destinations, using tools such as Kafka, Databricks, and similar toolsets.
Drive Innovation: Leverage innovative technologies and approaches to modernize and extend core data assets, including SQL-based, NoSQL-based, cloud-based, and real-time streaming data platforms.
Implement Automated Testing: Design and implement automated unit, integration, and performance testing frameworks to ensure data quality, reliability, and compliance with organizational standards.
Optimize Data Workflows: Optimize data workflows for performance, cost efficiency, and scalability across large datasets and complex environments.
Mentor Team Members: Mentor team members in data principles, patterns, processes, and practices to promote best practices and improve team capabilities.
Draft and Review Documentation: Draft and review architectural diagrams, interface specifications, and other design documents to ensure clear communication of data solutions and technical requirements.
Note to Vendors
Top 3 skills (hard or soft) - Comfortable operating independently, PySpark, Python, cleanroom experience, Databricks experience
Project person will be supporting - Third-party engagement, data feeds, setting up cleanroom
Work Location (in office, hybrid, remote) - remote
Is travel required - no
Interview process and when will it start - ASAP
When do you want this person to start - ASAP