Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. Eliezer Yudkowsky, Machine Intelligence Research Institute. Abstract: The goal of the field of Artificial Intelligence is to understand intelligence and create a human-equivalent or transhuman mind. Beyond this lies another question—whether …
Eliezer Yudkowsky. Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. The Singularity Institute (2001). PhilPapers entry; no abstract listed.
Jan 1, 2001. Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures. Eliezer Yudkowsky. The goal of the field of Artificial Intelligence is to understand intelligence and create a human-equivalent or transhuman mind.
Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures, a near book-length description from MIRI.
Critique of the MIRI Guidelines on Friendly AI, by Bill Hibbard.
Commentary on MIRI's Guidelines on Friendly AI, by Peter Voss.
In 2001, he published Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures. He is the author of the papers “Cognitive Biases Potentially Affecting Judgment of Global Risks” and “AI as a Positive and Negative Factor in Global Risk” in Global Catastrophic Risks (Oxford, 2008).
Yudkowsky, E. 2001. “Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures.” Working paper. MIRI.