Search results
Feb 22, 2024 · Yudkowsky outlines this scenario on multiple occasions, most prominently in his LessWrong forum post AGI Ruin: A List of Lethalities. Additional discussions of the scenario appear in the Lex Fridman Podcast #368, The Logan Bartlett Show, EconTalk, the Bankless podcast, and an archived livestream with the Center for ...
Jul 2, 2022 · Naive Hypotheses on AI Alignment. 2nd Jul 2022. Apparently doominess works for my brain, because Eliezer Yudkowsky’s AGI Ruin: A List of Lethalities convinced me to look into AI safety. Either I’d find out he’s wrong and there is no problem, or he’s right and I need to reevaluate my life priorities. After a month of sporadic reading, I ...
Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI.
Nov 11, 2021 · The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as "Anonymous". I think this Nate Soares quote (excerpted from Nate's response to a report by Joe Carlsmith) is a useful ...
Jun 15, 2022 · by Rob Bensinger. A collection of MIRI write-ups and conversations about alignment released in 2022, following the ... Includes Six Dimensions of Operational Adequacy in AGI Projects (Eliezer Yudkowsky) and AGI Ruin: A List of Lethalities (Eliezer Yudkowsky).
AGI Ruin: A List of Lethalities, by Eliezer Yudkowsky. May 21, 2023. “AGI Ruin” is a fairly famous piece of writing in the world of AI alignment by Eliezer Yudkowsky, who is something of a spokesperson for the community concerned about AI existential risk (referred to pejoratively as “AI doomers” by those on the other side ...