Paper Title
How Do AI Timelines Affect Existential Risk?
Paper Authors
Paper Abstract
Superhuman artificial general intelligence could be created this century and would likely be a significant source of existential risk. Delaying the creation of superintelligent AI (ASI) could decrease total existential risk by increasing the amount of time humanity has to work on the AI alignment problem. However, since ASI could reduce most risks, delaying the creation of ASI could also increase other existential risks, especially those from advanced future technologies such as synthetic biology and molecular nanotechnology. If AI existential risk is high relative to the sum of other existential risks, delaying the creation of ASI will tend to decrease total existential risk, and vice versa. Other factors such as war and a hardware overhang could increase AI risk, while cognitive enhancement could decrease it. To reduce total existential risk, humanity should take robustly positive actions such as working on existential risk analysis, AI governance and safety, and reducing all sources of existential risk by promoting differential technological development.