AI Model Optimization Engineer
Advanced Micro Devices, Inc.
Seattle, WA
JOB DETAILS
SALARY
$138,000–$207,000 Per Year
JOB TYPE
Full-time, Employee
SKILLS
Algorithms, Analysis Skills, Artificial Intelligence (AI), C Programming Language, C++ Programming Language, CUDA (Compute Unified Device Architecture), Cloud Computing, Computer Science, Computer Software, Data Clustering, Data Management, Data Structures, Debugging Skills, Debugging Tools, Develop Methodologies, Documentation, Embedded Systems, Engineering, GPU (Graphics Processing Unit), Gaming, Go Programming Language (Golang), Information/Data Security (InfoSec), Kernel Programming, Linux Operating System, Metrics, Microsoft C# (C Sharp), Multithreaded Programming, Network Operations Center, Object Oriented Design (OOD), Open Source, Operating Systems, Product Development, Production Systems, Project/Program Management, Python Programming/Scripting Language, Quality Engineering, Recruiting/Staffing Agency, Reporting Dashboards, Rust Programming Language, Service Level Agreement (SLA), Software Architecture, Software Development, Software Engineering, Systems Engineering, Team Player, Technical Support, Telemetry, Topology, User Documentation
LOCATION
Seattle, WA
POSTED
30+ days ago
At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
THE ROLE:
The AMD AI Group is looking for a Senior Software Development Engineer to own the end-to-end model execution stack on AMD Instinct GPUs, spanning training infrastructure at scale and high-performance inference serving. This role demands someone who has shipped LLMs on real hardware, written GPU kernels that moved production metrics, and built the systems infrastructure (orchestration, storage, monitoring) that keeps thousands of GPUs productive. You will be instrumental in ensuring AMD GPUs are first-class citizens for frontier model training and inference across current and next-generation Instinct accelerators.
KEY RESPONSIBILITIES:
Training Infrastructure & Enablement
Enable and optimize large-scale model training (LLMs, VLMs, MoE architectures) on AMD Instinct GPU clusters, ensuring correctness, reproducibility, and competitive throughput.
Build and maintain training infrastructure: job orchestration, distributed checkpointing, data loading pipelines, and storage optimization for multi-thousand GPU clusters on Kubernetes.
Debug and resolve training-specific issues including gradient norm explosions, non-deterministic behavior across GPU generations, and compute-communication overlap in distributed training (FSDP, DeepSpeed, Megatron-LM).
Optimize RCCL collective communication patterns for training workloads, including all-reduce, all-gather, and reduce-scatter across multi-node topologies.
Develop monitoring, alerting, and compliance infrastructure to ensure training cluster health, data security, and SLA adherence at scale.
Inference Optimization & Serving
Write and optimize high-performance GPU kernels (GEMM, attention, quantized matmul, GPTQ/AWQ) in HIP, Triton, and MLIR targeting AMD Instinct architectures, with demonstrated ability to outperform open-source baselines.
Drive end-to-end inference enablement on new AMD GPU silicon: be among the first to get frontier models running on each new Instinct generation, creating reproducible guides and reference implementations.
Optimize inference serving frameworks (vLLM, SGLang, TorchServe) for AMD GPUs: batching strategies, KV-cache management, speculative decoding, and continuous batching for production throughput/latency targets.
Develop novel approaches to inference acceleration, including bio-inspired algorithms, SLM-assisted batching, and custom scheduling strategies that exploit AMD hardware characteristics.
Build quantization pipelines (FP8, FP6, FP4, GPTQ, AWQ) for production model deployment, ensuring quality-performance tradeoffs are well-characterized across AMD GPU generations.
Cross-Cutting
Design observability and debugging tooling: log analysis pipelines, anomaly detection systems, and failure correlation tools for large-scale GPU clusters processing hundreds of terabytes of telemetry per month.
Collaborate with AMD silicon architecture teams to provide software feedback on next-generation Instinct GPU designs for both training and inference workloads.
Contribute to the open ROCm ecosystem and AMD's developer experience: SDKs, CI dashboards, documentation, and developer cloud enablement.
Collaborate closely with multiple teams to deliver key planning solutions and the technology to support them.
Help contribute to the design and implementation of future architecture for a highly scalable, durable, and innovative system.
Work closely with development teams and project managers to drive results.
PREFERRED EXPERIENCE:
Strong industry experience shipping production AI/ML infrastructure, with hands-on work spanning both training and inference.
Proven experience running LLMs on AMD GPUs (ROCm, HIP) or equivalent depth with CUDA, with strong willingness to work on AMD platforms.
Track record of writing custom GPU kernels (CUDA, HIP, or Triton) that delivered measurable throughput improvements in production systems.
Strong systems engineering skills: Kubernetes, container orchestration, distributed storage, and GPU cluster management at scale (1,000+ GPUs).
Proficiency in Python and at least one systems language (C++, Rust, Go, C#) with production-quality software engineering practices.
Deep understanding of LLM architecture internals: attention mechanisms, KV-cache, quantization schemes, and distributed parallelism strategies (tensor, pipeline, expert parallelism).
Direct experience enabling frontier models (GPT-4 class) on AMD Instinct hardware end-to-end.
Expert knowledge of and hands-on experience with C and C++.
Solid understanding of object-oriented design principles.
Solid understanding of software engineering principles, data structures, algorithms, operating systems concepts, and multithreaded programming.
Excellent design and code development skills; familiarity with Linux and modern software tools and techniques for development.
Good analytical and problem-solving skills
ACADEMIC CREDENTIALS:
Bachelor's or Master's degree in Computer/Software Engineering, Computer Science, or related technical discipline
This role is not eligible for visa sponsorship.
#LI-CJ3
#LI-Hybrid
Benefits offered are described in AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's "Responsible AI Policy" is available here.
This posting is for an existing vacancy.