Eliezer Yudkowsky is a research fellow at the Singularity Institute, where he works on Friendly AI and recursive self-improvement. In 2001, he published Creating Friendly AI: The Analysis and Design of Benevolent Goal Architectures. He is the author of the chapters “Cognitive Biases Potentially Affecting Judgment of Global Risks” and “Artificial Intelligence as a Positive and Negative Factor in Global Risk” in Global Catastrophic Risks (Oxford, 2008).
He is a leading contributor to the LessWrong group blog, a community that seeks to develop the art of human rationality using mathematical models and the latest findings in brain science.