Senior Data Engineer

Quilt Software

Provo, UT

JOB DETAILS
SALARY
$105,000–$125,000
SKILLS
Access Control, Amazon Web Services (AWS), Analysis Skills, Apache Kafka, Automation, Best Practices, Business Intelligence, Business Intelligence Software, Channel Strategies, Cloud Computing, Code Reviews, Communication Skills, Continuous Deployment/Delivery, Continuous Integration, Customer/Client Research, Data Analysis, Data Management, Data Modeling, Data Quality, Data Science, Database Design, Documentation Models, Documentation Standards, Financial Transactions, GCP (Google Cloud Platform), GitHub, Identify Issues, Microsoft Azure, Multiplatform/Cross-Platform, Problem Solving Skills, Product Engineering, Production Systems, Python Programming/Scripting Language, Query Optimization, Reconciliation, Risk, SQL (Structured Query Language), Scripting (Scripting Languages), Software Engineering, Team Player, Testing, Usability Engineering, Validation Testing
LOCATION
Provo, UT
POSTED
2 days ago

We're looking for a Senior Data Engineer who is deeply technical and excited to solve hard data engineering problems at scale. This is a highly technical individual contributor role — you will be expected to demonstrate and apply expert-level knowledge across the full data engineering stack, from raw ingestion through to performant, well-modeled data products.

You will contribute to the design, implementation, and optimization of complex data pipelines and data models, operate with a high degree of autonomy, and serve as a go-to technical resource for data engineering problems within the team.

Our data pipelines revolve heavily around financial transactions — experience with transaction data, ledgers, reconciliation, or fraud pipelines is strongly preferred.

What You'll Do

Design and build data pipelines. Develop, maintain, and optimize ELT pipelines on a cloud-based data platform. Integrate data from multiple internal and external sources into a centralized, governed platform and own end-to-end orchestration.
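
For illustration only, the sketch below shows the kind of ELT ingestion step this describes, assuming a PySpark environment and a cloud object store; the bucket paths, columns, and table layout are hypothetical placeholders, not Quilt Software systems.

    # Illustrative only: a minimal PySpark ingestion step; paths, columns, and
    # table locations are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

    # Read raw JSON landed by an upstream source (hypothetical bucket path).
    raw = spark.read.json("s3://example-raw-bucket/orders/*/*.json")

    cleaned = (
        raw
        .withColumn("ingested_at", F.current_timestamp())  # ingestion audit column
        .filter(F.col("order_id").isNotNull())              # drop unusable records
        .dropDuplicates(["order_id"])                       # basic idempotency guard
    )

    # Land the result in the curated zone of the platform (path is illustrative).
    cleaned.write.mode("append").partitionBy("order_date").parquet(
        "s3://example-curated-bucket/orders/"
    )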

Own data modeling and architecture. Design and maintain robust data models to support analytics and self-service BI. Define and document modeling standards and apply them consistently across the platform.

Ensure data quality, reliability, and performance. Implement data quality checks, validation frameworks, and pipeline monitoring. Tune jobs and queries for performance and cost efficiency across the data platform and downstream systems.
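
As a sketch of the kind of data quality gate described above, the snippet below fails a pipeline run when too many rows violate basic constraints; the table, columns, and threshold are hypothetical, not an actual Quilt pipeline.

    # Illustrative only: a simple row-level quality gate on a curated table.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_quality_check").getOrCreate()

    orders = spark.read.parquet("s3://example-curated-bucket/orders/")

    total = orders.count()
    bad = orders.filter(
        F.col("order_id").isNull()
        | (F.col("amount_cents") < 0)   # negative transaction amounts
        | F.col("order_date").isNull()
    ).count()

    failure_rate = bad / max(total, 1)
    if failure_rate > 0.001:            # fail the run above a 0.1% violation rate
        raise ValueError(
            f"Quality check failed: {bad}/{total} rows ({failure_rate:.4%}) violate constraints"
        )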

Collaborate with stakeholders. Partner with data analysts, data scientists, and product and engineering teams to understand data requirements and translate them into precise technical implementations. Participate in design and code reviews as a technical contributor.

Governance and best practices. Implement and maintain data governance practices covering access control, lineage, and data classification. Apply best practices around version control, CI/CD for data, transformations-as-code, and testing standards.
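
In the spirit of the transformations-as-code and testing practices above, the sketch below pairs a transformation function with a unit test that a CI pipeline could run on every pull request; normalize_amounts and its columns are hypothetical, not an existing Quilt function.

    # Illustrative only: a transformation defined as code plus a pytest-style test.
    from pyspark.sql import SparkSession, functions as F

    def normalize_amounts(df):
        """Drop rows with null order ids and convert amounts from cents to dollars."""
        return (
            df.filter(F.col("order_id").isNotNull())
              .withColumn("amount_usd", F.col("amount_cents") / 100.0)
        )

    def test_normalize_amounts():
        spark = SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
        source = spark.createDataFrame(
            [("o-1", 1250), (None, 999)], ["order_id", "amount_cents"]
        )
        result = normalize_amounts(source).collect()
        assert len(result) == 1                  # null order_id row was dropped
        assert result[0]["amount_usd"] == 12.5   # cents converted to dollars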

Required Qualifications

  • 7+ years of professional experience as a Data Engineer or Software Engineer — with demonstrable technical depth: you have designed and built complex, multi-source pipelines at scale, diagnosed and resolved hard production issues, and made significant technical contributions to a data platform.
  • Platform and compute: Strong hands-on experience with Databricks in a production environment, including cluster configuration and job orchestration. Advanced PySpark skills for large-scale batch and streaming workloads; capable of diagnosing and tuning jobs independently. Experience on AWS, Azure, or GCP, including relevant storage and networking fundamentals.
  • Languages: Expert-level SQL — complex joins, window functions, query optimization, and execution plan analysis (see the window-function sketch after this list). Expert-level Python for data pipeline development, transformation logic, and scripting.
  • Data engineering practices: Proven experience with dbt or equivalent for transformations-as-code; hands-on experience with a production-grade orchestration tool; and experience operating Apache Kafka or a comparable streaming platform in production.
  • Data modeling: Proven experience designing schemas for analytics and reporting, with the ability to define and apply modeling standards consistently.
  • Software engineering fundamentals: GitHub, testing, code reviews, and CI/CD for data pipelines. Able to collaborate effectively with both technical and non-technical stakeholders.
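
As a sketch of the window-function work called out in the SQL and PySpark bullets above, the snippet below keeps each customer's most recent transaction; the path, table, and column names are hypothetical placeholders.

    # Illustrative only: a window function in PySpark over a hypothetical table.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.appName("window_example").getOrCreate()

    transactions = spark.read.parquet("s3://example-curated-bucket/transactions/")

    # Rank each customer's transactions by recency and keep only the latest one.
    recency = Window.partitionBy("customer_id").orderBy(F.col("transaction_ts").desc())

    latest_per_customer = (
        transactions
        .withColumn("rn", F.row_number().over(recency))
        .filter(F.col("rn") == 1)
        .drop("rn")
    )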

Nice to Have

  • Payments / fintech domain experience: Transaction data, ledgers, reconciliation, fraud, or risk pipelines.
  • Data governance tooling: Hands-on experience with a catalog or lineage tool in production.
  • BI tools: Working knowledge of a major BI platform and how data models underpin it.
  • Infrastructure as Code: Experience managing cloud or data platform infrastructure programmatically.
  • ML workflow support: Experience structuring feature engineering pipelines or supporting data science teams with training data infrastructure.

Who You Are

You have deep, demonstrable expertise across the data engineering stack — you can go deep on distributed compute internals, query execution, storage layer mechanics, and data modeling tradeoffs without needing to look things up. You take ownership of complex technical problems end-to-end and produce high-quality, production-grade implementations, holding your own work to a high standard.

You think about the downstream consumers of data — analysts, scientists, product teams — and build with their usability in mind. You default to automation, reliability, and scalability over one-off solutions, and communicate technical tradeoffs clearly to both technical and non-technical collaborators.

What We Offer

  • Opportunity to build software that powers real-world commerce at scale.
  • Competitive salary and equity package.
  • Comprehensive benefits (health, dental, vision, 401k, etc.).
  • Strong emphasis on engineering quality and career growth.

Salary: $105,000–$125,000 base + bonus
Location: Provo, Utah or Charlotte, North Carolina

Applicants must be authorized to work for any employer in the U.S. We are unable to sponsor or take over sponsorship of an employment visa at this time.

Notice - Employment Scams

Communication from our team regarding job opportunities will only be made by a Quilt Software employee with an @quiltsoftware.com email address. We do not conduct interviews over email or chat platforms, and we will never ask you to provide personal or financial information such as your mailing address, social security number, credit card numbers, or banking information. If you believe a scammer is contacting you, please mark the communication as "phishing" or "spam" and do not respond.

About the Company


Quilt Software