Summary

The paper contrasts the decisive AI existential risk hypothesis (abrupt events) with an accumulative hypothesis (gradual erosion of systemic resilience via smaller, interconnected disruptions).

Key quotes

The accumulative hypothesis posits that AI x-risks do not exclusively materialize as high-magnitude events initiated by artificial general intelligence or artificial superintelligence.
AI x-risks accumulate through a series of lower-severity disruptions over extensive duration, collectively eroding systemic resilience until some stressor event triggers unrecoverable collapse.

The author uses a systems analysis perspective to differentiate between the causal pathways of decisive and accumulative risks, and proposes a 'perfect storm MISTER' scenario to illustrate how social risks can compound into an existential catastrophe.