PPoPP 2024
Sat 2 - Wed 6 March 2024 Edinburgh, United Kingdom

This program is tentative and subject to change.

Tue 5 Mar 2024 08:30 - 09:30 at Auditorium - CGO Keynote

Generative AI applications, with their ability to produce natural language, computer code, and images, are transforming all aspects of society. These applications are powered by huge foundation models such as GPT-4, which are trained on massive unlabeled datasets. Foundation models have tens of billions of parameters and have achieved state-of-the-art quality in natural language processing, vision, and speech applications. These models are computationally challenging because they require hundreds of petaFLOPS of computing capacity for training and inference. Future foundation models will have even greater capabilities, enabled by more complex model architectures with longer sequence lengths, irregular data access (sparsity), and irregular control flow. In this talk I will describe how the evolving characteristics of foundation models will impact the design of the optimized computing systems required for training and serving these models. A key element of improving the performance and lowering the cost of deploying future foundation models will be optimizing data movement (Dataflow) within the model using specialized hardware. In contrast to human-in-the-loop applications such as conversational AI, an emerging application of foundation models is in continuous processing applications that operate without human supervision. I will describe how continuous processing and real-time machine learning can be used to create an intelligent network data plane.

Bio: Kunle Olukotun is a Professor of Electrical Engineering and Computer Science at Stanford University, where he has been on the faculty since 1991. Olukotun is well known for leading the Stanford Hydra research project, which developed one of the first chip multiprocessors with support for thread-level speculation (TLS). Olukotun founded Afara Websystems to develop high-throughput, low-power server systems based on chip multiprocessor technology. Afara was acquired by Sun Microsystems; the Afara microprocessor technology, called Niagara, is at the center of Sun’s throughput computing initiative. Niagara-based systems have become one of Sun’s fastest-ramping products ever. Olukotun is actively involved in research on computer architecture, parallel programming environments, and scalable parallel systems. He currently co-leads the Transactional Coherence and Consistency project, whose goal is to make parallel programming accessible to average programmers. Olukotun also directs the Stanford Pervasive Parallelism Lab (PPL), which seeks to proliferate the use of parallelism in all application areas. Olukotun is an ACM Fellow (2006) for contributions to multiprocessors on a chip and multithreaded processor design. He has authored many papers on CMP design and parallel software and recently completed a book on CMP architecture. Olukotun received his Ph.D. in Computer Engineering from the University of Michigan.

Tue 5 Mar

Displayed time zone: London

08:30 - 09:30
CGO Keynote (Keynotes) at Auditorium
CGO Keynote: Computing Systems for the Foundation Model Era
Kunle Olukotun Stanford University