Learning Systems: Systems and Abstractions for Large-Scale Machine Learning

The challenges of advanced analytics and big data cannot be addressed by developing new machine learning algorithms or new computing systems in isolation.  Some of the recent advances in machine learning have come from new systems that can apply complex models to big data problems.  Likewise, some of the recent advances in systems have exploited fundamental properties of machine learning to reach new points in the system design space.  By considering the design of scalable learning systems from both perspectives, we can address bigger problems, expose new opportunities in algorithm and system design, and define the new fundamental abstractions that will accelerate research in these complementary fields.

In this talk, I will present my research in learning systems, spanning the design of efficient inference algorithms, the development of graph processing systems, and the unification of graphs and unstructured data.  I will describe how the study of graphical model inference and power-law graph structure shaped the common abstractions in contemporary graph processing systems, and how new insights in system design enabled order-of-magnitude performance gains over general-purpose data-processing systems.  I will then discuss how lessons learned in the context of specialized graph-processing systems can be lifted to more general data-processing systems, enabling users to view data as graphs and tables interchangeably while preserving the performance gains of specialized systems.  Finally, I will present a new direction for the design of learning systems that looks beyond traditional analytics and model fitting to the entire machine learning life-cycle, spanning model training, serving, and management.

Joseph Gonzalez is a postdoc in the UC Berkeley AMPLab and co-founder of GraphLab. Joseph received his PhD from the Machine Learning Department at Carnegie Mellon University, where he worked with Carlos Guestrin on parallel algorithms and abstractions for scalable probabilistic machine learning. Joseph is a recipient of the AT&T Labs Graduate Fellowship and the NSF Graduate Research Fellowship.

Date and Time
Thursday February 26, 2015 12:30pm - 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Host
Barbara Engelhardt

