AIML - Sr. Machine Learning Platform Engineer, MLPT

Apple
Full-time
On-site
Seattle, Washington, United States
Machine Learning
In this role, you'll build and scale core platform capabilities for model management and serving infrastructure while creating seamless integrations with the ML frameworks that Apple's teams depend on. You'll design and build systems that feel native to each framework while providing a unified experience across the platform. Your work will include building backend services, designing Python SDKs and APIs, creating integrations across ML tools and frameworks, and solving complex technical challenges that span multiple systems.

You'll work closely with ML engineers to understand their workflows, identify pain points, and build platform features that multiply their productivity. You'll collaborate with teams building customer-facing ML features across iOS, macOS, and other Apple platforms, as well as with compute infrastructure teams and ML framework owners.

Your platform work directly enables the ML innovations that millions of customers experience daily. This role offers the opportunity to have broad impact across Apple's ML initiatives and to shape how thousands of ML practitioners build the intelligent experiences our customers love.

Minimum Qualifications
  • Bachelor's degree in Computer Science, related field, or equivalent practical experience
  • 10+ years of software engineering experience with strong backend development skills and platform engineering mindset
  • Deep proficiency in Python with proven experience designing SDKs, libraries, and APIs for technical users
  • Experience integrating with complex ML frameworks (PyTorch, TensorFlow, JAX, HuggingFace) and building production-grade backend services (REST/GraphQL APIs, microservices, databases)
  • Track record of building end-to-end workflows that span multiple systems and teams, navigating complex technical landscapes to deliver pragmatic solutions
  • Strong cross-functional collaboration and communication skills to understand diverse stakeholder needs and technical requirements, and to articulate design decisions across ML engineering, infrastructure, and product teams
  • Experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Kubernetes)

Preferred Qualifications
  • Experience with model serving systems (vLLM, Ray Serve, TorchServe, TensorRT) or inference optimization
  • Contributions to open-source ML frameworks, tools, or libraries
  • Understanding of distributed training, model parallelism, and large-scale ML workflows
  • Familiarity with MLOps practices, model management, and experiment tracking systems