EECS Special Seminar: Shivaram Venkataraman, "Scalable Systems for Fast and Easy Machine Learning"

Speaker

Shivaram Venkataraman
Computer Science at UC Berkeley

Host

Nickolai Zeldovich

Abstract: Machine learning models trained on massive datasets power a number of applications, from machine translation to detecting supernovae in astrophysics. However, the end of Moore's law and the shift toward distributed computing architectures present many new challenges for building and executing such applications in a scalable fashion.

In this talk, I will present my research on systems that make it easier to develop new machine learning applications and to scale them while achieving high performance. I will first present programming models that let users easily build distributed machine learning applications. Next, I will show how we can simplify large-scale deployments and understand scalability using low-overhead performance models. Finally, I will describe scheduling techniques that exploit the structure of machine learning algorithms to improve scalability and achieve high performance on distributed data processing frameworks.

Bio: Shivaram Venkataraman is a PhD candidate at the University of California, Berkeley, advised by Mike Franklin and Ion Stoica. His research interests are in designing systems and algorithms for large-scale data processing and machine learning. He is a recipient of the Siebel Scholarship and best-of-conference citations at VLDB and KDD. Before coming to Berkeley, he completed his M.S. at the University of Illinois at Urbana-Champaign.