Yahoo Web Search

Search results

  1. Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368 - YouTube. Eliezer Yudkowsky is a researcher, writer, and philosopher on the...

    • 198 min
    • 1.9M
    • Lex Fridman
  2. 159 - We’re All Gonna Die with Eliezer Yudkowsky - YouTube. Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space. DEBRIEF | Unpacking the...

    • 109 min
    • 276K
    • Bankless
  3. Live from the Center for Future Mind and the Gruber Sandbox at Florida Atlantic University, join us for an interactive Q&A with Yudkowsky about AI Safety! El...

    • 65 min
    • 56.8K
    • Center for the Future Mind
  4. Mar 29, 2023 · 11 minute read. Illustration for TIME by Lon Tweeten. By Eliezer Yudkowsky. Yudkowsky is a decision theorist from the U.S. and leads research at the Machine...

    • 3 min
    • Eliezer Yudkowsky
  5. Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent?

  6. Rationality: A-Z (or "The Sequences") is a series of blog posts by Eliezer Yudkowsky on human rationality and irrationality in cognitive science. It is an edited and reorganized version of posts published to Less Wrong and Overcoming Bias between 2006 and 2009.

  7. Apr 18, 2023 · Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. Is an obedient, even benevolent, AI of superhuman intelligence possible? Yes, Yudkowsky says, but inscrutable large language models like ChatGPT are leading us down the wrong path. By the time the world realizes, he thinks it may be too late.