We're looking for a Senior Data Engineer who is deeply technical and excited to solve hard data engineering problems at scale. This is a highly technical individual contributor role — you will be expected to demonstrate and apply expert-level knowledge across the full data engineering stack, from raw ingestion through to performant, well-modeled data products.
You will contribute to the design, implementation, and optimization of complex data pipelines and data models, operate with a high degree of autonomy, and serve as a go-to technical resource for data engineering problems within the team.
Our data pipelines are heavily influenced by financial transactions — experience with transaction data, ledgers, reconciliation, or fraud pipelines is strongly preferred.
What You'll Do
Design and build data pipelines. Develop, maintain, and optimize ELT pipelines on a cloud-based data platform. Integrate data from multiple internal and external sources into a centralized, governed platform and own end-to-end orchestration.
Own data modeling and architecture. Design and maintain robust data models to support analytics and self-service BI. Define and document modeling standards and apply them consistently across the platform.
Ensure data quality, reliability, and performance. Implement data quality checks, validation frameworks, and pipeline monitoring. Tune jobs and queries for performance and cost efficiency across the data platform and downstream systems.
Collaborate with stakeholders. Partner with data analysts, data scientists, and product and engineering teams to understand data requirements and translate them into precise technical implementations. Participate in design and code reviews as a technical contributor.
Governance and best practices. Implement and maintain data governance practices covering access control, lineage, and data classification. Apply best practices around version control, CI/CD for data, transformations-as-code, and testing standards.
Required Qualifications
Nice to Have
Who You Are
You have deep, demonstrable expertise across the data engineering stack — you can go deep on distributed compute internals, query execution, storage layer mechanics, and data modeling tradeoffs without needing to look things up. You take ownership of complex technical problems end-to-end and produce high-quality, production-grade implementations, holding your own work to a high standard.
You think about the downstream consumers of data — analysts, scientists, product teams — and build with their usability in mind. You default to automation, reliability, and scalability over one-off solutions, and communicate technical tradeoffs clearly to both technical and non-technical collaborators.
What We Offer
Salary: $105k-$125k base + bonus
Location: Provo, Utah or Charlotte, North Carolina
Applicants must be authorized to work for any employer in the U.S. We are unable to sponsor or take over sponsorship of an employment visa at this time.
Notice - Employment Scams
Communication from our team regarding job opportunities will only be made by a Quilt Software employee with an @quiltsoftware.com email address. We do not conduct interviews over email or chat platforms, and we will never ask you to provide personal or financial information such as your mailing address, social security number, credit card numbers, or banking information. If you believe a scammer is contacting you, please mark the communication as "phishing" or "spam" and do not respond.