LEARN
Books
Articles
AI safety researcher: Shut It All Down
Ex-OpenAI employee's timeline to Artificial Superintelligence (ASI)
(We only agree with sections I and II. We do not support building ASI ever.)
Suchir Balaji, the OpenAI whistleblower found dead, on copyright issues
Videos
“It might be quite sensible to just stop developing these things any further.”
“Mitigating the risk of extinction from AI should be a global priority.”
“Rogue AI may be dangerous for the whole of humanity. Banning powerful AI systems (say beyond the abilities of GPT-4) that are given autonomy and agency would be a good start.”