Yahoo Web Search

Search results

  1. The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that.

  2. Jun 10, 2022 · Eliezer Yudkowsky explains why he does not expect humanity to survive the development of AGI, and why he does not think it is possible in practice to build a safe and useful AGI that does not kill everyone. He lists various lethalities of AGI, such as orthogonality, instrumental convergence, and trolley problems, and challenges the common arguments of AGI enthusiasts.

  3. Mar 29, 2023 · Eliezer Yudkowsky, one of the earliest researchers to analyze the prospect of powerful Artificial Intelligence, now warns that we've entered a bleak scenario

  4. Jun 10, 2023 · In the early 2000s, a young writer named Eliezer Yudkowsky began warning that A.I. could destroy humanity. His online posts spawned a community of believers. Called rationalists or...

  5. "AGI Ruin: A List of Lethalities" by Eliezer Yudkowsky. LessWrong Curated Podcast, Technology, 1 hr 1 min. https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities. Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

  6. Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent?
