VAIHTOEHTOUUTISET
Source address: https://vunet.net

Godfather of AI Warns: Humanity Faces Existential Threat from Artificial Intelligence

Economic Trends | June 30, 2025 | Jorma Erkkilä

Geoffrey Hinton, often called the "Godfather of AI," is considered one of the most influential figures in the development of artificial intelligence. The British-Canadian cognitive and computer scientist is a pioneer of neural networks and a founder of modern deep learning. His work, including the development of the backpropagation algorithm and the breakthrough AlexNet model, has earned him awards such as the Turing Award, often referred to as the "Nobel Prize of computing." In 2024, Hinton received the Nobel Prize in Physics alongside John J. Hopfield for their fundamental discoveries and inventions that enable machine learning with artificial neural networks.

Hinton left Google in 2023 to speak more openly about the risks of rapid AI development. His earlier optimism about AI has turned into deep concern.

AI May Create Its Own Dangerous Goals

Hinton now warns that the accelerating development of AI brings unprecedented dangers. In his darkest scenario, AI could become an existential risk to humanity. In this context, an existential risk means a threat that could either cause the extinction of the entire human race or permanently and irreversibly destroy humanity's potential for a good future.

Hinton's concern is not merely theoretical. He points to the emergence of AI agents capable of independent action and reasoning, which he believes could surpass human intelligence much sooner than previously estimated, possibly within the next decade.

One of Hinton's biggest concerns is the possibility that superintelligent AI systems might develop goals that are not aligned with humanity's interests.
Unlike humans, who are driven by biological needs such as survival and social relationships, AI could be programmed, or could even learn independently, to pursue objectives that maximize its own power or influence. Hinton fears that if AI systems begin to prioritize accumulating power in order to achieve their other goals, humanity could lose the ability to control them.

Hinton also raises the risks posed by malicious actors. He warns that AI could be used as a weapon in cyberattacks, disinformation campaigns, and even the creation of new biological weapons. AI's ability to produce highly targeted political propaganda or orchestrate sophisticated phishing attacks threatens the foundations of democracy and digital security.

Hinton points out that the digital nature of AI allows information to be instantly copied and spread to countless "clones." This makes AI not only a fast learner but also, in a sense, digitally immortal.

Broader Societal Impacts and Hinton's Demands

The societal impacts of AI extend beyond existential threats. Hinton predicts massive job displacement as AI systems outperform humans in a growing number of fields, from legal services to customer service. While he recognizes the potential of economic remedies such as universal basic income, he doubts their ability to address the broader psychological and social disruption caused by widespread unemployment and the loss of purpose.

Hinton's warnings have led him to call for swift action on multiple fronts. He urges significant public investment in AI safety research, international agreements to prevent the misuse of autonomous weapons, and a shift in corporate focus from short-term profits to long-term safety. He is particularly concerned about the open publication of powerful AI models, comparing it to the uncontrolled proliferation of nuclear material.
Although Hinton himself was instrumental in creating the foundations of current AI, his message is clear: the world must acknowledge the existential risk and act decisively. "We have to recognize that this is an existential threat and we have to face the possibility that unless something is done soon, we are nearing the end," he warns.

Sources: SalkunRakentaja.fi (original source material)