AI pioneer leaves Google and warns of risks to humanity

Scientist Geoffrey E. Hinton sees risks in the use of artificial intelligence systems by bad actors and backs a pause in the development of the technology.

British scientist Geoffrey E. Hinton, one of the pioneers in the development of artificial intelligence (AI), has resigned from the technology company Google and said he did so in order to be able to talk about the dangers of this technology without having to worry about the impact his claims would have on his employer.

Hinton, often called the “godfather” of AI, said he now regrets dedicating his career to developing the technology. “I console myself with the usual excuse: if it hadn’t been me, someone else would have done it,” he told the New York Times, in an interview published on Monday (May 1).

He joins other experts who have already warned of the risks of AI in the face of launches such as ChatGPT and the investments of large technology companies in this sector. “It’s hard to figure out how to stop bad guys from using it for bad things,” he told the New York Times.

On Twitter, Hinton said Google has always acted very responsibly and denied that he resigned over criticism of his former employer. According to the New York newspaper, Hinton communicated his resignation to Google last month.

For Hinton, the current pace of AI development is alarming. “Look at how it was five years ago and how it is now,” he commented.

Threat to humanity

In the short term, he said he fears the internet will be flooded with fake texts, photos and videos, and that people will no longer be able to tell what is real from what is fake.

He added that down the road, AI could replace many workers and even become a threat to humanity.

“The idea that these things could get smarter than people, some people believed in it. But most people thought it was a long way off. I thought it was a long way off, 30 or 50 years or maybe more. Of course, I no longer think that,” he said.

That is why he has argued, as other experts have already done, that research in this area should be paused until it is fully understood whether AI can be controlled.

In March, a group of experts called for a pause in the development of AI systems to allow time to ensure they are safe. The open letter, signed by more than a thousand people, including businessman Elon Musk and Apple co-founder Steve Wozniak, was motivated by the release of GPT-4, an even more powerful version of the technology behind ChatGPT.

as/lf (Efe, AFP, Reuters)

Source: Terra
