HPC Storage Solutions Architect
GTN Technical Staffing
Dallas, TX
Location: Dallas, TX (Hybrid)
Type: Direct Hire
• Competitive base salary + performance bonus
• 100% company-paid benefits
Overview
We are seeking an HPC Storage Solutions Architect to design, integrate, and optimize high-performance storage architectures supporting HPC, AI/ML, and large-scale data-intensive workloads.
This is a customer-facing, technically focused role responsible for guiding clients through the full solution lifecycle—from requirements discovery and workload analysis through architecture design, proof-of-concept, deployment, and long-term optimization. The role ensures that storage systems are aligned with workload demands, delivering high throughput, low latency, and scalable performance.
The ideal candidate brings deep expertise in distributed storage systems, strong experience with multi-petabyte environments, and the ability to translate complex workload requirements into efficient, production-ready storage solutions.
Key Responsibilities
Customer Engagement & Architecture Leadership
• Serve as the primary storage subject matter expert for customers deploying or scaling HPC environments
• Capture storage requirements, performance objectives, and capacity planning needs
• Lead customer workshops, architecture discussions, and technical design sessions
Storage Architecture & Design
• Design and document end-to-end storage architectures including parallel and distributed file systems (Lustre, GPFS, Ceph, VAST), object storage, and tiered storage solutions
• Develop scalable, resilient architectures aligned with HPC, AI/ML, and data-intensive workloads
• Create architecture blueprints, integration guides, and reusable design patterns
Performance Optimization & Workflow Engineering
• Lead proof-of-concept and benchmarking initiatives to validate performance and scalability
• Conduct workflow assessments and storage usage reviews to optimize throughput, latency, and cost efficiency
• Troubleshoot and resolve performance bottlenecks across storage and data pipelines
Integration & Infrastructure Design
• Define integration strategies across compute, networking, and orchestration layers
• Ensure seamless end-to-end performance across HPC environments
• Support file system protocols including NFS, SMB, and POSIX-based systems
Automation & Platform Delivery
• Implement Infrastructure-as-Code and automation practices using tools such as Ansible and Terraform
• Deliver consistent, repeatable storage deployments and operational workflows
Cross-Functional Collaboration
• Partner with engineering, product, and operations teams to refine storage platforms and offerings
• Collaborate with compute, networking, and Kubernetes teams to ensure integrated solutions
• Support multi-vendor environments and evaluate emerging storage technologies
Vendor & Ecosystem Engagement
• Work with vendors such as Dell, VAST Data, HPE, and Rubrik to integrate new capabilities
• Provide customer-driven feedback to influence vendor roadmaps and feature development
• Stay current on emerging storage technologies, protocols, and data management practices
Thought Leadership & Innovation
• Represent the organization in customer workshops, technical reviews, and industry events
• Provide forward-looking guidance on storage trends, scalability strategies, and platform evolution
• Contribute to best practices and standardized architecture frameworks
Required Experience
• Proven experience in storage solution architecture, HPC storage engineering, or large-scale distributed storage design
• Deep expertise in parallel and distributed file systems including Lustre, GPFS, Ceph, and VAST
• Experience designing, deploying, and scaling multi-petabyte storage environments
• Strong knowledge of Linux storage stack tuning and file system protocols (NFS, SMB, POSIX)
• Experience implementing automation and Infrastructure-as-Code practices (Ansible, Terraform)
• Proven ability to troubleshoot and optimize storage workflows for HPC, AI/ML, or data-intensive workloads
• Strong customer-facing communication skills with the ability to present complex architectures clearly
Preferred Experience
• Experience delivering HPC or AI/ML workloads in high-performance storage environments
• Familiarity with data protection, backup, and recovery technologies integrated with HPC storage
• Experience working within multi-vendor storage ecosystems
• Exposure to workflow optimization for data pipelines, simulation, or scientific computing
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
• Certifications such as NetApp NCIE, Dell EMC Proven Professional, Red Hat RHCE, or cloud certifications (AWS, Azure, GCP)
About the Company