
Center for Human-Compatible Artificial Intelligence

US AI safety research center
Formation: 2016
Headquarters: Berkeley, California
Leader: Stuart J. Russell
Parent organization: University of California, Berkeley
Website: humancompatible.ai

The Center for Human-Compatible Artificial Intelligence (CHAI) is a research center at the University of California, Berkeley focusing on advanced artificial intelligence (AI) safety methods. The center was founded in 2016 by a group of academics led by Berkeley computer science professor and AI expert Stuart J. Russell. Russell is known for co-authoring the widely used AI textbook Artificial Intelligence: A Modern Approach.

CHAI's faculty membership includes Russell, Pieter Abbeel and Anca Dragan from Berkeley, Bart Selman and Joseph Halpern from Cornell, Michael Wellman and Satinder Singh Baveja from the University of Michigan, and Tom Griffiths and Tania Lombrozo from Princeton. In 2016, the Open Philanthropy Project (OpenPhil) recommended that Good Ventures provide CHAI support of $5,555,550 over five years. CHAI has since received additional grants from OpenPhil and Good Ventures of over $12,000,000, including for collaborations with the World Economic Forum and Global AI Council.

Research

CHAI's approach to AI safety research focuses on value alignment strategies, particularly inverse reinforcement learning, in which the AI infers human values from observing human behavior. It has also worked on modeling human-machine interaction in scenarios where intelligent machines have an "off-switch" that they are capable of overriding.
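The intuition behind the off-switch work can be sketched numerically. In the simplified setup below (an illustration of the general idea, not CHAI's actual model or code; the function name and numbers are invented for this example), a machine is uncertain about the human utility of a proposed action. Acting unilaterally is worth the expected utility; deferring to a human who can switch the machine off vetoes the negative-utility cases, so deferring is never worse in expectation, and uncertainty about human values makes the machine willing to accept the off-switch.

```python
import random

def expected_values(utility_samples):
    """Compare two policies under uncertainty about the human's utility u:
    - act unilaterally: worth E[u]
    - defer to a human with an off-switch, who blocks actions with u < 0:
      worth E[max(u, 0)]
    Since max(u, 0) >= u pointwise, deferring never does worse in expectation.
    """
    act = sum(utility_samples) / len(utility_samples)
    defer = sum(max(u, 0.0) for u in utility_samples) / len(utility_samples)
    return act, defer

random.seed(0)
# Hypothetical belief over the action's utility: roughly beneficial on average,
# but with substantial probability of being harmful.
samples = [random.gauss(0.1, 1.0) for _ in range(100_000)]
act, defer = expected_values(samples)
print(defer >= act)  # prints True: deferring is at least as good
```

The gap between the two values grows with the machine's uncertainty; a machine that is certain about human preferences gains nothing from the off-switch, which is why value uncertainty is central to the approach.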

