An AI Expert's Warning: 6 People Are (Quietly) Deciding Humanity’s Future!

We Must Act Now!

Geoffrey Hinton, Nobel Prize winner for his work on AI

Expert Quotes

It might be quite sensible to just stop developing these things any further.
— Geoffrey Hinton, Nobel Prize winner and 'Godfather of AI'
Mitigating the risk of extinction from AI should be a global priority.
— Statement on AI Risk, signed by hundreds of experts, including leaders of the top AI companies and leading AI scientists
Rogue AI may be dangerous for the whole of humanity. Banning powerful AI systems (say beyond the abilities of GPT-4) that are given autonomy and agency would be a good start.
— Yoshua Bengio, the world's most-cited AI researcher

Book Recommendations

“AI: Unexplainable, Unpredictable, Uncontrollable”

Roman Yampolskiy

Explores the inherent challenges and risks of artificial intelligence, arguing that advanced AI systems are difficult to understand, predict, and control.

“If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All”

Eliezer Yudkowsky & Nate Soares

Explains how humanity would lose against an AI system that is generally more competent than we are. The exact path is hard to predict, since predicting it would require being as good at achieving goals as the AI itself, but several such paths would be open to it. A superintelligence would not care about humans, yet it would want the resources that humans need. Humanity would thus lose and go extinct.

“Artificial Bodies: How Machines Replace People”

Remmelt Ellen

Explains how Big Tech grows by making workers expendable – ultimately by replacing us fussy humans with consistent, faster, stronger machines. We face a parasitic system of extraction, one that will keep feeding on society in order to grow, even after a market crash. Unless we act to stop it.