Engineer IV, Data Engineering

Omnicell

Pittsburgh, PA

JOB DETAILS
POSTED
6 days ago
Responsibilities:


+ Translate business needs and architectural guidance into detailed designs, data contracts, and implementation plans that break down large initiatives into actionable engineering tasks with reliable estimates

+ Create detailed pipeline designs covering schemas, transformations, partitioning, DLT configurations, orchestration, error handling, and observability that align with the platform architecture through close collaboration with the Data Architect

+ Lead implementation and guide junior engineers on design, coding standards, and best practices

+ Develop metadata-driven and configuration-driven pipeline patterns that reduce custom code and improve consistency

+ Make technical decisions that ensure reliability, performance, maintainability, and scalability; ensure production readiness with monitoring, lineage, alerting, observability, CI/CD, and documentation

+ Define and enforce engineering design patterns, coding standards, testing practices, and operational best practices

+ Evaluate and incorporate new technologies and Databricks capabilities that improve reliability, performance, or developer productivity

+ Validate new technologies with the Data Architect and operationalize them through documentation, examples, and enablement

+ Implement automated data quality checks, rule enforcement, and exception handling

+ Provide production support for both the existing and new platforms, including job optimization, incident tracking, and other analysis required for production operations

+ Lead resolution of complex production issues and deliver durable root cause fixes

+ Maintain SLAs for reliability, recovery, idempotency, performance, and cost efficiency

+ Mentor Level 2–3 engineers through pairing, design guidance, code reviews, and technical coaching


Basic Skills:


• Bachelor’s degree preferred; equivalent experience accepted

• 10+ years in data engineering (12+ without a degree)

• 4+ years building production-grade batch/streaming pipelines using PySpark, Spark Structured Streaming, Python, and SQL

• Proven experience with data governance, schema evolution, data lineage, and secure access patterns

• 2+ years’ proven experience maintaining and supporting data pipelines


Preferred Skills:


• 3+ years hands-on with Databricks (Delta Lake, DLT, Unity Catalog, workflow jobs) within the last 6 years

• Experience building metadata-driven or configuration-driven pipelines

• Experience with data quality frameworks (DQX, Great Expectations, or equivalent)

• Experience with observability, metrics and query performance analysis

• Strong Spark performance tuning and optimization skills


Work Conditions:


• Team collaboration hours between 8am and 4pm EST

• Corporate office/lab environment

• Ability to travel 10% of the time

All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or on the basis of disability.

About the Company


Omnicell