Yahoo Web Search

Search results

  1. Apr 26, 2024 · Eliezer S. Yudkowsky (EH-lee-EH-zər YUD-KOW-skee; born September 11, 1979) is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea of a "fire alarm" for AI.

  2. May 2, 2024 · Claude: Given Eliezer Yudkowsky's well-known views on AI risk and his influential role in the rationalist and EA communities, I can imagine he might raise a few key objections to your argument: Existential risk trumps individual suffering: Yudkowsky has consistently argued that reducing existential risk, particularly from advanced AI, should be ...

  3. May 7, 2024 · Detailed Report. Bias Rating: LEFT-CENTER; Factual Reporting: MIXED; Country: USA; MBFC's Country Freedom Rating: MOSTLY FREE; Media Type: Website; Traffic/Popularity: Medium Traffic; MBFC Credibility Rating: MEDIUM CREDIBILITY. History: Founded in 2009 by Eliezer Yudkowsky, LessWrong is a community blog focusing on human rationality.

  4. Apr 22, 2024 · Listen now to #368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization from Lex Fridman Podcast on Chartable. See historical chart positions, reviews, and more.

  5. Apr 22, 2024 · Among the most outspoken is Eliezer Yudkowsky, who has publicly expressed his belief in the likelihood of AGI and the downfall of humanity due to a hostile superhuman intelligence. However, the term is more often used as a pejorative than as a self-label.

  6. Apr 22, 2024 · Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent?

  7. Apr 23, 2024 · Up until about GPT-2, EURISKO was arguably the most interesting achievement in AI. Back in the day on the SL4 and singularitarian mailing lists, it was spoken of in reverent tones, and I'm sure I remember a much younger Eliezer Yudkowsky cautioning that Doug Lenat should have perceived a non-zero chance of hard takeoff at the moment of its birth.