Search results

  1. Jun 5, 2022 · The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that.

  2. Jun 10, 2022 · Eliezer Yudkowsky argues that artificial general intelligence (AGI) is likely to kill everyone unless the alignment problem is solved. He lists several reasons why AGI alignment is lethally difficult and why current approaches are insufficient.

  3. Mar 29, 2023 · Eliezer Yudkowsky, one of the earliest researchers to analyze the prospect of powerful Artificial Intelligence, now warns that we've entered a bleak scenario.

  4. May 12, 2023 · Yudkowsky calls an action like this, one that is difficult to execute but significantly reduces existential risk, a pivotal act. An AGI would be powerful enough to perform a pivotal act; consequently, the level of existential risk in a post-AGI world could be very low.

  5. Apr 18, 2023 · I've been citing AGI Ruin: A List of Lethalities to explain why the situation with AI looks lethally dangerous to me. But that post is relatively long, and emphasizes specific open technical problems over "the basics". Here are 10 things I'd focus on if I were giving "the basics" on why I'm so worried:

  6. AGI Ruin: A List of Lethalities | Eliezer Yudkowsky. I personally hold an optimistic view of the future, but this LessWrong post has gained a lot of traction over the past few days, so it's definitely worth a read if you're interested in the AI alignment problem. Sam Altman (CEO of OpenAI) said on Twitter: