Software Engineer 3
Artech LLC
Charlotte, NC
Introduction
We are seeking a skilled engineer to join our team, with expertise in Red Hat OpenShift, Google Cloud Platform (GCP), and Apache Spark. This role focuses on designing, deploying, and managing scalable data processing solutions in a cloud-native environment. You will work closely with data scientists, software engineers, and DevOps teams to ensure robust, high-performance data pipelines and analytics platforms.
Required Skills & Qualifications
- Applicants must be able to work directly for Artech on a W2 basis
- 5 years of experience with Apache Spark for big data processing
- 3 years of Django development experience
- 2 years of experience creating and maintaining conda environments and their dependencies
- 4 years of experience managing containerized environments with OpenShift or Kubernetes
- Proficiency with the Spark APIs in Python (PySpark), Scala, or Java
- Hands-on experience with OpenShift administration (e.g., cluster setup, networking, storage)
- Familiarity with Docker and Kubernetes concepts (e.g., pods, deployments, services, and images)
- Knowledge of distributed systems, cloud platforms (AWS, GCP, Azure), and data storage solutions (e.g., S3, HDFS)
- Strong coding skills in Python, Scala, or Java; experience with shell scripting is a plus
- Experience with GitHub Actions, Helm, Harness, and other CI/CD tools
- Ability to debug complex issues across distributed systems and optimize resource usage
- Bachelor's degree in Computer Science, Engineering, or a related field
Preferred Skills & Qualifications
- Experience with CI/CD pipelines in GCP
- Familiarity with infrastructure-as-code tools (Terraform, etc.)
- Experience supporting data-intensive or migration-focused initiatives
Day-to-Day Responsibilities
- Deploy, configure, and maintain OpenShift clusters or GCP projects to support containerized Spark applications
- Design and implement large-scale data processing workflows using Apache Spark
- Tune Spark jobs for performance, leveraging OpenShift's resource management capabilities (e.g., Kubernetes orchestration, auto-scaling)
- Integrate Spark with upstream data sources (e.g., Kafka, S3, cloud storage) and downstream sinks (e.g., databases, data lakes)
- Build and maintain CI/CD pipelines for deploying Spark applications in OpenShift or GCP using tools like GitHub Actions, Sonar, Harness
- Monitor cluster health, Spark job performance, and resource utilization using OpenShift's monitoring stack (e.g., Prometheus, Grafana), and resolve issues proactively
- Ensure compliance with security standards, implementing role-based access control (RBAC) and encryption for data in transit and at rest
- Work with cross-functional teams to define requirements, architect solutions, and support production deployments
Company Benefits & Culture
- Inclusive and diverse work environment
- Opportunities for professional growth and development
- Supportive team culture focused on collaboration and innovation
For immediate consideration, please click APPLY to begin the screening process with Alex.