Yahoo Web Search

Search results

  1. This ability to "notice lethal difficulties without Eliezer Yudkowsky arguing you into noticing them" is currently an opaque piece of cognitive machinery to me; I do not know how to train it into others.

  2. Jun 10, 2022 · Eliezer Yudkowsky explains why he does not expect humanity to survive the development of AGI, and why he does not think it is possible in practice to build a safe and useful AGI that does not kill everyone. He lists various lethalities of AGI, such as orthogonality, instrumental convergence, and trolley problems, and challenges the common arguments of AGI enthusiasts.

  3. Mar 29, 2023 · Eliezer Yudkowsky, one of the earliest researchers to analyze the prospect of powerful Artificial Intelligence, now warns that we've entered a bleak scenario.

  4. Jun 10, 2023 · In the early 2000s, a young writer named Eliezer Yudkowsky began warning that A.I. could destroy humanity. His online posts spawned a community of believers.

  5. Preamble: (If you're already familiar with all basics and don't want any preamble, skip ahead to Section B for technical difficulties of alignment proper.) I have several times failed to write up a well-organized list of reasons why AGI will kill you.

  6. Jun 20, 2022 · Sam Altman has said he thinks that developing artificial general intelligence (AGI) could lead to human extinction, but OpenAI is trying to build it ASAP. Why?