Toward Trustworthy Machine Learning: An Example in Defending against Adversarial Patch Attacks (2)
By Chong Xiang and Prateek Mittal In our previous post, we discussed adversarial patch attacks and presented our first defense algorithm, PatchGuard. The PatchGuard framework (small receptive field + secure aggregation) has become the most popular defense strategy over the past year, subsuming a long list of defense instances (Clipped BagNet, De-randomized Smoothing, BagCert, Randomized […]
Toward Trustworthy Machine Learning: An Example in Defending against Adversarial Patch Attacks
By Chong Xiang and Prateek Mittal Thanks to the stunning advancement of Machine Learning (ML) technologies, ML models are increasingly being used in critical societal contexts — such as in the courtroom, where judges look to ML models to determine whether a defendant is a flight risk, and in autonomous driving, where driverless vehicles are […]
How the National AI Research Resource can steward the datasets it hosts
Last week I participated on a panel about the National AI Research Resource (NAIRR), a proposed computing and data resource for academic AI researchers. The NAIRR’s goal is to subsidize the spiraling costs that have put many types of AI research out of reach of most academic groups. My comments on the panel […]
Calling for Investing in Equitable AI Research in Nation’s Strategic Plan
By Solon Barocas, Sayash Kapoor, Mihir Kshirsagar, and Arvind Narayanan In response to the Request for Information on the Update of the National Artificial Intelligence Research and Development Strategic Plan (“Strategic Plan”), we submitted comments providing suggestions for how the Strategic Plan should focus government funding priorities and resources to address societal issues such as […]
What Are Machine Learning Models Hiding?
Machine learning is eating the world. The abundance of training data has helped ML achieve amazing results for object recognition, natural language processing, predictive analytics, and all manner of other tasks. Much of this training data is very sensitive, including personal photos, search queries, location traces, and health-care records. In a recent series of papers, […]