- Session
- Time: 13:12 - 13:12
- Duration: 31 mins
- Publication date: 22 Nov 2024
- Location: Conference, Chicago Business School, London, United Kingdom
- Part of event REACH 2024
About the session
As Moore’s Law slows, the challenge of optimizing program performance shifts toward higher-level abstractions such as algorithm selection and API decisions, domains that have traditionally depended on human expertise. In this talk, I explore how Large Language Models (LLMs) can revolutionize performance optimization by automating complex code transformations.

Building on our recent work, I present Performance-Improving Edits, a novel dataset that enables LLMs to generate high-performance code edits, outpacing human efforts in competitive programming. The talk examines the potential of generative AI to augment modern compilers, demonstrating how techniques such as fine-tuning, reward-conditioning, and self-play can scale LLM capabilities to handle diverse optimization tasks.

I will share insights on how these advancements can be applied to compiler design, achieving substantial speedups and enabling more efficient computing architectures. Finally, I outline a vision for scaling generative AI to autonomously manage code optimizations across platforms and hardware targets.
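To make the idea of a performance-improving edit concrete, the following is a minimal, hypothetical sketch (not drawn from the actual dataset) of the kind of before/after pair such a dataset collects: two functions with identical behavior, where the edit replaces a quadratic pairwise scan with a linear hash-map pass.

```python
from collections import Counter


def count_pairs_slow(nums, target):
    """Before the edit: O(n^2), checks every pair of indices."""
    count = 0
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                count += 1
    return count


def count_pairs_fast(nums, target):
    """After the edit: O(n), counts previously seen complements."""
    seen = Counter()
    count = 0
    for x in nums:
        count += seen[target - x]  # pairs (earlier element, x)
        seen[x] += 1
    return count


# Both versions agree on the result; only the time complexity differs.
data = [1, 2, 3, 4, 3]
assert count_pairs_slow(data, 6) == count_pairs_fast(data, 6) == 2
```

A model trained on many such pairs learns to propose the faster variant directly, which is the capability the talk builds on.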
Amir Yazdanbakhsh, Research Scientist, Google DeepMind, USA