April 26, 2024

Princeton Dialogues on AI and Ethics: Launching case studies

Summary: We are releasing four case studies on AI and ethics, as part of the Princeton Dialogues on AI and Ethics. The impacts of rapid developments in artificial intelligence (“AI”) on society—both real and not yet realized—raise deep and pressing questions about our philosophical ideals and institutional arrangements. AI is currently applied in a wide […]

The Rise of Artificial Intelligence: Brad Smith at Princeton University

What will artificial intelligence mean for society, jobs, and the economy? Speaking today at Princeton University is Brad Smith, President and Chief Legal Officer of Microsoft. I was in the audience and live-blogged Brad’s talk. CITP director Ed Felten introduces Brad’s lecture by saying that the tech industry is at a crossroads. With the rise […]

AI Mental Health Care Risks, Benefits, and Oversight: Adam Miner at Princeton

How does AI apply to mental health, and why should we care? Today the Princeton Center for IT Policy hosted a talk by Adam Miner, an AI psychologist, whose research addresses policy issues in the use, design, and regulation of conversational AI in health. Dr. Miner is an instructor in Stanford’s Department of Psychiatry and […]

Getting serious about research ethics: AI and machine learning

[This blog post is a continuation of our series about research ethics in computer science.] The widespread deployment of artificial intelligence, and specifically machine learning algorithms, raises concerns about fundamental societal values such as employment, privacy, and non-discrimination. While these algorithms promise to optimize social and economic processes, research in this area has […]

Language necessarily contains human biases, and so will machines trained on language corpora

I have a new draft paper with Aylin Caliskan-Islam and Joanna Bryson titled Semantics derived automatically from language corpora necessarily contain human biases. We show empirically that natural language necessarily contains human biases, and that the paradigm of training machine learning models on language corpora means that AI will inevitably imbibe these biases as well. Specifically, we look at […]