Kafka Platform Engineer

GTN Technical Staffing

Columbus, OH (Remote)

JOB DETAILS
LOCATION: Columbus, OH
POSTED: 30+ days ago

LOCATION: Remote (must work EST hours and live within 60 miles of one of these offices)

  • Draper, UT

  • Columbus, OH

  • Plano, TX

  • Chadds Ford, PA

  • Wilmington, DE

  • NYC (Manhattan)

Working Hours: 9–5 EST
Type: Temp to Perm

Top 3 Must-Haves (Hard and/or Soft Skills)
1. Kafka & Confluent Cloud Expertise

  • Deep understanding of Kafka architecture and Confluent Cloud services.

  • Experience with Kafka Connect, Schema Registry, and stream processing.

2. AWS Infrastructure & Database Management

  • Hands-on experience with AWS services like RDS, Aurora, EC2, IAM, and networking.

  • Ability to integrate Kafka with AWS-hosted databases and troubleshoot cloud-native issues.

3. Terraform & Infrastructure Automation

  • Proficiency in Terraform for provisioning Kafka clusters and AWS resources and for managing infrastructure as code.

  • Familiarity with GitOps workflows and CI/CD pipelines.

Top 3 Nice-to-Haves

  1. Monitoring & Observability

    • Experience with Prometheus, Grafana, Datadog, or Confluent Metrics API.

    • Ability to set up alerting and dashboards for Kafka and cloud services.

  2. Security & Governance

    • Knowledge of RBAC, encryption, and audit logging in Confluent Cloud and AWS.

    • Experience implementing secure data pipelines and compliance controls.

  3. Collaboration & Incident Response

    • Strong cross-functional teamwork with data engineers, SREs, and developers.

    • Skilled in communication during outages, postmortems, and planning sessions.

Certification Preferences

  • Confluent Certified Developer for Apache Kafka

  • AWS Certified Solutions Architect – Associate

  • HashiCorp Certified: Terraform Associate

Role Summary
The Kafka Platform Engineer designs, implements, and supports scalable, secure Kafka-based messaging pipelines that power real-time communication between critical systems such as credit, loan applications, and fraud services. This role focuses on resiliency, reliability, and operations of the Kafka platform in a highly regulated environment, partnering closely with engineering and platform teams to support the migration from on-prem to AWS.

Essential Job Functions

  • Monitor and optimize cloud services; manage access controls; maintain compliance records. (25%)

  • Write and maintain automated deployment scripts; ensure CI/CD pipeline integrity. (25%)

  • Set up observability tools; create dashboards and alerts; provide performance reports. (20%)

  • Collaborate cross-functionally, document requirements, resolve conflicts, and track progress. (15%)

  • Conduct capacity planning, scaling, and resource optimization. (15%)

Minimum Qualifications

  • Bachelor’s Degree in IT, Computer Science, Engineering, or related field (or equivalent experience).

  • At least 1 relevant certification (AWS, Azure, GCP, DevSecOps, Apache Kafka).

  • 2+ years’ platform engineering experience.

  • 2+ years’ cloud services experience with IaC tools (Terraform, CloudFormation).

Preferred Qualifications

  • 5+ years of cloud engineering experience.

  • 3+ years of Apache Kafka experience in regulated, mission-critical environments.

  • Proficiency in Java, Scala, or Python for Kafka applications.

  • Experience with Kafka Connect, Schema Registry, Kafka Streams.

  • Containerization (Docker, Kubernetes) and CI/CD pipelines.

  • Security knowledge: Kerberos, SSL, ACLs, IAM integration.

  • Familiarity with financial transaction systems and data privacy regulations.

Skills

  • Programming Languages

  • Cloud Services Management

  • CI/CD & Configuration Management

  • Infrastructure as Code (IaC)

  • DevSecOps

  • Monitoring & Observability

  • Capacity Planning

  • Security Management

  • Technical Communication

  • Cloud Deployment

Day in the Life

  • Morning Check-ins: Review dashboards, monitor incidents.

  • Team Collaboration: Stand-ups with infrastructure, network, and app teams.

  • Strategic Work: Review infrastructure roadmaps, vendor performance, and architecture.

  • Hands-On Engineering: Provision Kafka topics/resources with Terraform, optimize throughput/latency, troubleshoot integrations.

  • Documentation: Update architecture diagrams, runbooks, and standards in Confluence.

  • Stakeholder Engagement: Meet with business units, address escalations.

Interaction Level
High interaction with infrastructure engineers, network admins, project managers, and application owners. Expect daily or near-daily engagement.

Top Priorities (First Weeks/Months)
First Few Weeks:

  1. Understand the existing Kafka ecosystem.

  2. Gain visibility into data flows & integrations.

  3. Review documentation & Jira backlog.

First Few Months:

  1. Stabilize & optimize Kafka infrastructure.

  2. Improve automation & observability.

  3. Collaborate with teams to onboard new use cases.

Biggest Challenges

  • Balancing support for legacy systems while driving modernization and migration to AWS.

About the Company

GTN Technical Staffing