Experts keep talking about the possible existential threat of AI. But what does that actually mean?
Summary
A forum post in r/ControlProblem questioning the specific mechanisms and probabilities of AI-driven existential risks beyond science fiction tropes.
Key quotes
Multiple leading experts in the field of AI warn that this technology could lead to human extinction, but what does that actually entail?
A user post in the r/ControlProblem subreddit seeks clarification on the nature of AI existential risk, asking for realistic scenarios and probabilities regarding planetary-scale threats.