A former Google executive has warned of the dangers of generative artificial intelligence (AI), saying there is no realistic way for the world to agree to stop its advance.
“There is no way to stop it. There is no nuclear-type treaty where countries can decide: ‘Okay, let’s stop the development of AI in the same way that we agreed to stop the development of nuclear weapons,’” Mo Gawdat said on the show Impact Theory, broadcast on YouTube.
“This will not happen because of the ‘prisoner’s dilemma.’ Humanity is stuck in a corner where no one can make the decision to stop the development of AI. So if Alphabet is developing AI, then Meta* must also develop AI, Yandex in Russia must also develop AI, and so on,” said Gawdat, who was for more than a decade the chief business officer of X Development, the lab formerly known as Google X that focuses on projects involving AI and robotics.
He also pointed out that such a scenario is not due to any characteristic of the technology itself, but rather because we live in “a power-oriented capitalist system that will always prioritize the benefit of ‘us’ versus ‘them’ over the benefit of humanity as a whole.”
On the other hand, he opined that one of the best scenarios that could happen to humanity in its relationship with AI is that it “ignores us all.” “Believe it or not, this is a much better scenario than if the AI gets mad at us or kills us by mistake.”
Fears of the advancement of AI
While the use of this technology is not new, the sudden popularity of the ChatGPT chatbot has raised concerns about how rapidly it is advancing.
In late March, Elon Musk, along with Apple co-founder Steve Wozniak and a thousand tech experts, signed a letter urging AI labs to “immediately pause” training AI systems more powerful than GPT-4, the new version of the controversial ChatGPT, which “exhibits human-level performance in various academic and professional benchmarks.”
Similarly, Sam Altman, CEO of OpenAI, the developer of ChatGPT, admitted that he was “a bit scared” of the tool his company had created, adding that he is “particularly concerned that such models could be used for large-scale disinformation.”
*Classified in Russia as an extremist organization