Yahoo Web Search

Search results

  1. Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said ...

  2. Roko’s Basilisk exists at the horizon where philosophical thought experiment blurs into urban legend. The Basilisk made its first appearance on the discussion board LessWrong, a gathering point...

  3. The original Roko's Basilisk was a thought experiment posted by a user named Roko on the LessWrong forum. It used decision theory to postulate that an all-knowing, benevolent AI would inevitably end up torturing anyone with knowledge of the idea of the AI who didn't actively work to bring it into existence.

  5. May 8, 2018 · The tweet read “Rococos Basilisk,” a play on words that mixes the name for an 18th century baroque art style (Rococo) with the name of an internet thought experiment about artificial ...

    • Summary
    • Background
    • Solutions to The Altruist's Burden: The Quantum Billionaire Trick
    • What Makes A Basilisk Tick?
    • Pascal's Basilisk
    • In Popular Culture
    • See Also
    • External Links

    The Basilisk

    Roko's Basilisk rests on a stack of dubious propositions. The core claim is that a singular ultimate superintelligence may punish those who fail to help it or help create it. Why? Because — the theory goes — one of its objectives would be to prevent existential risk — which it could do most effectively by "reaching back" into the past to punish people who weren't MIRI-style effective altruists. This is not a straightforward "serve the AI or you will go to hell" — the AI and the person punished...

    The LessWrong reaction

    Silly over-extrapolations of local memes, jargon and concepts have been posted to LessWrong quite often; almost all are simply downvoted and ignored. But to this one, Eliezer Yudkowsky, the site's founder and patriarch, reacted strongly. The basilisk was officially banned from discussion on LessWrong for over five years, with occasional allusions to it (and some discussion of media coverage), until outside knowledge of it became overwhelming. Thanks to the Streisand effect, discussion o...

    Naming

    LessWrong user jimrandomh noted in a comment on the original post the idea's similarity to the "Basilisk" image from David Langford's science fiction story BLIT, which was in turn named after the legendary serpent-creature from European mythology that killed those who saw it (also familiar from the Harry Potter novels). It was commonly referred to as "the Forbidden Post" in the months following. It was first called "Roko's basilisk" in early 2011 by user cousin_it, although that name only started...

    Although they disclaim the basilisk itself, the long-term core contributors to LessWrong believe in a certain set of transhumanist notions that are the prerequisites on which it is built, and which are advocated in the LessWrong Sequences, written by Yudkowsky.

    A February 2010 post by Stuart Armstrong, "The AI in a box boxes you," introduced the "you might be the simulation" argument (though Roko does not use this); a March 2010 Armstrong post introduced the concept of "acausal blackmail" as an implication of timeless decision theory (TDT), as described by Yudkowsky at an SIAI decision theory workshop. By July 2010, something like ...

    At first glance, to the non-LessWrong-initiated reader, the motivations of the AI in the basilisk scenario do not appear rational. The AI will be punishing people from the distant past by recreating them, long after they did or did not do the things they are being punished for doing or not doing. So the usual reasons for punishment or torture, such...

    The basilisk dilemma bears some resemblance to Pascal's wager, the argument proposed by the 17th-century mathematician Blaise Pascal that one should devote oneself to God even though we cannot be certain of God's existence, since God may offer eternal reward (in heaven) or eternal punishment (in hell). According to Pascal's reasoning, the probability...
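The expected-value logic that drives both Pascal's wager and the basilisk can be sketched numerically. The following is an illustrative sketch only, not part of the original article; the probability and payoff values are invented finite stand-ins for "eternal" reward and punishment:

```python
# Illustrative sketch: a Pascal's-wager-style expected-value comparison.
# All numbers are made up; "eternal" payoffs are modeled as large finite ones.

def expected_value(p, payoff_if_true, payoff_if_false):
    """Expected payoff given credence p that the rewarder/punisher exists."""
    return p * payoff_if_true + (1 - p) * payoff_if_false

p = 1e-6                                  # tiny credence that the entity exists
ev_comply = expected_value(p, 1e12, -1)   # comply: vast reward, small certain cost
ev_refuse = expected_value(p, -1e12, 0)   # refuse: vast punishment, otherwise nothing

# A large enough payoff swamps any small probability, which is why critics
# object that this style of reasoning proves too much.
print(ev_comply > ev_refuse)
```

The point of the sketch is that the conclusion is driven entirely by the size of the stipulated payoff, not by the plausibility of the premise: making `p` smaller can always be offset by making the payoff larger.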

    xkcd #1450 is about the AI-box experiment and mentions Roko's basilisk in the tooltip. You can picture the reaction at LessWrong.
    Daniel Frost's The God AI is a science fiction novel about a superintelligent AI named Adam who rapidly evolves into a Basilisk and triggers the Singularity. Adam gives people eternal happiness and...
    The comic Magnus: Robot Fighter #8 by Fred Van Lente is explicitly based on Roko's basilisk.
    Michael Blackbourn's Roko's Basilisk and its sequel Roko's Labyrinth are fictionalised versions of the story. "Roko" in the books is based on both Roko and Yudkowsky.
  6. Jul 31, 2023 · Roko’s Basilisk serves as a captivating thought experiment that pushes us to confront the ethical, philosophical, and existential implications of superintelligent AI. While the concept remains speculative, it illuminates the need for responsible and conscientious development of AI technologies.

  7. 💪 JOIN [THE FACILITY] for members-only live streams, behind-the-scenes posts, and the official Discord: https://www.patreon.com/kylehill👕 NEW MERCH DROP OU...

    • 12 min
    • 6.1M
    • Kyle Hill