Yahoo Web Search

Search results

  1. Feb 22, 2024 · Yudkowsky outlines this scenario on multiple occasions, most prominently in his LessWrong post AGI Ruin: A List of Lethalities. Additional discussions of the scenario appear in Lex Fridman Podcast #368, The Logan Bartlett Show, EconTalk, and the Bankless podcast, along with an archived livestream with the Center for ...

  2. Jul 2, 2022 · Naive Hypotheses on AI Alignment. 2nd Jul 2022. Apparently doominess works for my brain, because Eliezer Yudkowsky's AGI Ruin: A List of Lethalities convinced me to look into AI safety. Either I'd find out he's wrong, and there is no problem. Or he's right, and I need to reevaluate my life priorities. After a month of sporadic reading, I ...

  3. Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI.

  4. Nov 11, 2021 · The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as "Anonymous". I think this Nate Soares quote (excerpted from Nate's response to a report by Joe Carlsmith) is a useful ...

  5. Jun 15, 2022 · by Rob Bensinger. A collection of MIRI write-ups and conversations about alignment released in 2022, following the ... Entries include Six Dimensions of Operational Adequacy in AGI Projects and AGI Ruin: A List of Lethalities, both by Eliezer Yudkowsky.

  6. AGI Ruin - Reflections on AI (aiadventures.net › summaries › agi-ruin-list-of)

    AGI Ruin: A List of Lethalities, by Eliezer Yudkowsky. May 21, 2023 · "AGI Ruin" is a fairly famous piece of writing in the world of AI alignment by Eliezer Yudkowsky, who is a bit like the spokesperson for the community of those concerned about AI existential risk (referred to pejoratively as "AI doomers" by those on the other side ...
