AWS Data Engineer | Dallas, TX (Hybrid – 3 days onsite) | Face-to-Face Interview

Plugins Inc

Dallas, TX

JOB DETAILS
JOB TYPE
Full-time, Employee
SKILLS
AWS Lambda, Agile Programming Methodologies, Amazon Simple Notification Service (SNS), Amazon Simple Storage Service (S3), Amazon Web Services (AWS), Apache, Apache Kafka, Best Practices, Business Intelligence, Business Operations, Cloud Computing, Code Reviews, Continuous Deployment/Delivery, Continuous Integration, Cross-Functional, Data Management, Data Modeling, Data Quality, Data Sets, Database Design, Database Extract Transform and Load (ETL), Electronic Medical Records, Identify Issues, JSON, Performance Tuning/Optimization, Process Improvement, Python Programming/Scripting Language, SQL (Structured Query Language), Sales Pipeline, Scalable System Development, Simple Queue Service (SQS), Software Engineering, Standup Meetings, Test Driven Development (TDD), Testing, Unit Test, XML (eXtensible Markup Language)
LOCATION
Dallas, TX
POSTED
22 days ago
Job Title: AWS Data Engineer
Location: Dallas, TX (Hybrid – 3 days onsite)
Experience: 8–12 years
Interview Process: In-Person
Profiles: Locals & Non-Locals Can Apply
Overview

We are looking for an experienced AWS Data Engineer with strong expertise in ETL, cloud migration, and large-scale data engineering. The ideal candidate is hands-on with AWS, Python/PySpark, and SQL, and can design, optimize, and manage complex data pipelines. This role requires collaboration across teams to deliver secure, scalable, and high-quality data solutions that drive business intelligence and operational efficiency.
                                           
Key Responsibilities
- Design, build, and maintain scalable ETL pipelines across AWS and SQL-based technologies.
- Assemble large, complex datasets that meet business and technical requirements.
- Implement process improvements by re-architecting infrastructure, optimizing data delivery, and automating workflows.
- Ensure data quality and integrity across multiple sources and targets.
- Orchestrate workflows with Apache Airflow (MWAA) and support large-scale cloud migration projects.
- Conduct ETL testing, apply test-driven development (TDD), and participate in code reviews.
- Monitor, troubleshoot, and optimize pipelines for performance, reliability, and security.
- Collaborate with cross-functional teams and participate in Agile ceremonies (sprints, reviews, stand-ups).

Requirements
- 8–12 years of experience in Data Engineering, with deep focus on ETL, cloud pipelines, and Python development.
- 5+ years of hands-on coding with Python (primary), PySpark, and SQL.
- Proven experience with AWS services: Glue, EMR (Spark), S3, Lambda, ECS/EKS, MWAA (Airflow), IAM.
- Experience with Amazon Aurora, DynamoDB, Redshift, and AWS data lakes.
- Strong knowledge of data modeling, database design, and advanced ETL processes (including Alteryx).
- Proficiency with structured and semi-structured file types (Delimited Text, Fixed Width, XML, JSON, Parquet).
- Experience with Azure Service Bus or equivalent AWS streaming/messaging tools (SNS, SQS, Kinesis, Kafka).
- CI/CD expertise with GitLab or similar, plus hands-on Infrastructure-as-Code (Terraform, Python, Jinja, YAML).
- Familiarity with unit testing, code quality tools, containerization, and security best practices.
- Solid Agile development background, with experience in Agile ceremonies and practices.

Flexible work from home options available.

About the Company

Plugins Inc