Yahoo Web Search

Search results

  1. Apr 21, 2023 · MIRI has put out three major new posts: AGI Ruin: A List of Lethalities. Eliezer Yudkowsky lists reasons AGI appears likely to cause an existential catastrophe, and reasons why he thinks the current research community—MIRI included—isn't succeeding at preventing this from happening. A central AI alignment problem: capabilities generalization ...

  2. Nov 11, 2021 · The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as “Anonymous”. I think this Nate Soares quote (excerpted from Nate’s response to a report by Joe Carlsmith) is a ...

  3. The strongest argument against AGI ruin is something that can never be spoken aloud, because it isn't in words. It's a deep inner warmth, a surety that it'll all be right, that you can only ever feel by building enormous AI models. Not like Geoffrey Hinton built, bigger models. 27 May 2023 00:16:52

  4. Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space. DEBRIEF | Unpacking the episode: https://shows.banklesshq.com/p/debrief-...

    • 109 min
    • 276K
    • Bankless
  5. Kim Solez Counters Eliezer Yudkowsky's AGI Ruin List of Lethalities with AI Utopia Thoughts 6/17/22 Video production and editing by Simon Wu, Here is the lin...

    • 15 min
    • 825
    • Kim Solez
  6. Eliezer S. Yudkowsky (/ ˌ ɛ l i ˈ ɛ z ər ˌ j ʌ d ˈ k aʊ s k i / EH-lee-EH-zər YUD-KOW-skee; born September 11, 1979) is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence.

  7. Mar 30, 2023 · The podcast features AI researchers Eliezer Yudkowsky and Lex Fridman discussing various aspects of artificial intelligence (AI), including the potential threat of superintelligent AI to human civilization, the challenges of AI alignment safety research, and the possibility of AI becoming sentient.