Sr. Software Engineer - AI/ML, AWS Neuron Distributed Training - Performance Optimization

AWS Utility Computing (UC) provides product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. Additionally, this role may involve exposure to and experience with Amazon's growing suite of generative AI services and other cutting-edge cloud computing offerings across the AWS portfolio.

Annapurna Labs (our organization within AWS UC) designs silicon and software that accelerate innovation. Customers choose us to create cloud solutions that solve challenges that were unimaginable even a short time ago. Our custom chips, accelerators, and software stacks let us take on technical challenges that have never been seen before and deliver results that help our customers change the world.

AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and the Trn1 and Inf1 servers that use them. This role is for a senior software engineer on the Machine Learning Applications (ML Apps) team for AWS Neuron. The role is responsible for the development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale large language models such as GPT-2, GPT-3, and beyond, as well as Stable Diffusion, Vision Transformers, and many more.

The ML Apps team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions with Trn1. Experience training these large models in Python is a must. FSDP, DeepSpeed, and other distributed training libraries are central to this work, and extending them to the Neuron-based system is key.

Key job responsibilities

- Lead efforts to build distributed training and inference support into PyTorch, TensorFlow, and JAX using XLA and the Neuron compiler and runtime stacks.
- Tune models to deliver the highest performance and efficiency when running on customers' AWS Trainium and Inferentia silicon and the Trn1/Inf1 servers.
- Collaborate with chip architects, compiler engineers, and runtime engineers to optimize workloads for the Neuron platform.
- Design and implement solutions that scale large-model training to meet performance goals.
- Provide technical mentorship and code review for junior team members.

About the team

Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help you develop your engineering expertise so you feel empowered to take on more complex tasks in the future.

Basic Qualifications

- 5+ years of non-internship professional software development experience.
- 5+ years of programming with at least one software programming language.
- 5+ years of leading design or architecture of new and existing systems (design patterns, reliability, and scaling).
- 5+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations.
- Experience as a mentor, tech lead, or leader of an engineering team.

Preferred Qualifications

- Bachelor's degree in computer science or equivalent.
- Knowledge of machine learning frameworks and end-to-end model training.

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $151,300/year in our lowest geographic market up to $261,500/year in our highest geographic market. Pay is based on a number of factors, including market location, and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit. This position will remain posted until filled. Applicants should apply via our internal or external career site.