[[{"fid":"526","view_mode":"embedded_left","fields":{"format":"embedded_left","field_file_image_alt_text[und][0][value]":"Matt Fredrikson","field_file_image_title_text[und][0][value]":"","field_file_caption_credit[und][0][value]":"%3Cp%3E%3Cspan%20id%3D%5C%22OLK_SRC_BODY_SECTION%5C%22%3E%3Cb%20class%3D%5C%22gmail_sendername%5C%22%3EMatt%20Fredrikson%3C%2Fb%3E%3C%2Fspan%3E%3C%2Fp%3E%0A","field_file_caption_credit[und][0][format]":"full_html"},"type":"media","link_text":null,"attributes":{"alt":"Matt Fredrikson","height":348,"width":250,"class":"media-element file-embedded-left"}}]]As data from far-reaching sources is collected, aggregated, and re-packaged to enable new and smarter applications, confidentiality and data security are at greater risk than ever before. Some of the most surprising and invasive threats to materialize in recent years are brought about by so-called inference attacks: successful attempts to learn sensitive information by leveraging public data such as social network updates, published research articles, and web APIs.
In this talk, I will focus on two of my research efforts to better understand and defend against these attacks. First, I will discuss work that examines the privacy risks that arise when machine learning models are used in a popular medical application, and illustrate the consequences of applying differential privacy as a defense. This work uncovered a new type of inference attack on machine learning models and showed, through an in-depth case study, how to understand privacy "in situ" by balancing the attacker's chance of success against the likelihood of harmful medical outcomes. The second part of the talk will detail work that helps developers correctly write privacy-aware applications using verification tools. I will illustrate how a wide range of confidentiality guarantees can be framed in terms of a new logical primitive called Satisfiability Modulo Counting, and describe a tool I have developed around this primitive that automatically finds privacy bugs in software (or proves that the software is bug-free). Through a better understanding of how proposed defenses impact real applications, and by providing tools that help developers implement the correct defense for their task, we can begin to proactively identify potential threats to privacy and take steps to ensure that they will not surface in practice.
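To give a flavor of the first kind of defense, the sketch below shows the standard Laplace mechanism for epsilon-differential privacy applied to a simple counting query. It is a minimal illustration only; the dataset, query, sensitivity, and epsilon value are hypothetical and are not drawn from the work described above.

# A minimal sketch of the Laplace mechanism for epsilon-differential privacy
# (illustrative only; the records, query, and epsilon below are hypothetical).
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    # Noise scale grows as the privacy budget epsilon shrinks.
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

# A counting query has sensitivity 1: adding or removing one record
# changes the count by at most 1.
ages = np.array([34, 45, 29, 61, 50])             # hypothetical records
true_count = int(np.sum(ages > 40))               # exact answer: 3
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(round(noisy_count, 2))

Smaller values of epsilon add more noise, which is exactly the trade-off the case study weighs: stronger privacy protection against the risk of a less accurate (and potentially harmful) medical recommendation.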
Matt Fredrikson is a Ph.D. candidate in the Department of Computer Sciences at the University of Wisconsin-Madison. His research interests lie at the intersection of security, privacy, and formal methods, covering topics in software security, privacy issues associated with machine-learning models, and applied cryptography. His work has been profiled by Reuters, Technology Review, and New Scientist, and received the Best Paper Award at USENIX Security 2014. He is a recipient of the Microsoft Research Graduate Fellowship Award.