This role offers a unique opportunity to innovate at the intersection of AI and embedded hardware. You will transform advanced ML algorithms into highly optimized, power-efficient code for custom silicon and microcontrollers in Apple products, specifically for robotics. You'll tackle complex challenges like memory constraints, computational budgets, and real-time performance, ensuring ML models deliver exceptional user experiences while adhering to Apple's privacy and power efficiency standards.
Bachelor's degree (3+ years experience) or Master's degree (1+ year experience) in CS, EE, or a related technical field.
Proficiency in C/C++ for embedded systems development, including RTOS, microcontrollers, and low-level hardware interactions.
Proven ability to optimize and deploy ML models for resource-constrained edge devices using techniques such as quantization and pruning, and frameworks such as TensorFlow Lite, ONNX Runtime, and Core ML (a brief sketch of this kind of workflow follows this list).
Strong analytical and debugging skills to resolve performance bottlenecks across hardware, firmware, and ML inference.
Experience with ML inference hardware acceleration (DSPs, NPUs, ASICs).
Familiarity with diverse neural network architectures and training methodologies for efficient edge deployment.
Knowledge of computer vision, NLP, or audio processing in an embedded/robotics context.
Experience with embedded Linux or other RTOS in a production environment.
Contributions to open-source embedded ML projects or relevant publications.
Proficiency with Python for automation and data analysis.
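As a concrete illustration of the model-optimization work referenced above, here is a minimal sketch of post-training integer quantization with TensorFlow Lite; the SavedModel path, input shape, and calibration generator are hypothetical placeholders, not part of the role description.

    # Minimal sketch: post-training int8 quantization with TensorFlow Lite.
    # "saved_model_dir" and representative_data() are hypothetical placeholders.
    import numpy as np
    import tensorflow as tf

    def representative_data():
        # Yield a few calibration samples shaped like the model's input
        # (assumed here to be 1x224x224x3 float32).
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    # Restrict to int8 ops so the model can run on int8-only accelerators/MCUs.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    tflite_model = converter.convert()
    with open("model_int8.tflite", "wb") as f:
        f.write(tflite_model)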