Yahoo Web Search

Search results

  1. Mar 30, 2023 · Use Muck Rack to listen to #368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization by Lex Fridman Podcast and connect with podcast creators.

  2. The strongest argument against AGI ruin is something that can never be spoken aloud, because it isn't in words. It's a deep inner warmth, a surety that it'll all be right, that you can only ever feel by building enormous AI models.

  3. Nov 11, 2021 · The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees…

  4. Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space. DEBRIEF | Unpacking the episode: https://shows.banklesshq.com/p/debrief-...

    • Bankless · 276K
  5. Eliezer S. Yudkowsky (/ˌɛliˈɛzər ˌjʌdˈkaʊski/ EH-lee-EH-zər YUD-KOW-skee; born September 11, 1979) is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence.

  6. IMO Yudkowsky's problem (shared by many/most thinkers in the theoretical "superintelligence security" field) results from his stubborn insistence on working within a dead-end paradigm of AI security, what we may call the "value-aligned AI overlord" paradigm.

  7. Apr 22, 2024 · Listen now to #368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization from Lex Fridman Podcast on Chartable. See historical chart positions, reviews, and more.
