
An open letter signed by Elon Musk and other experts warns of the potential risk that Artificial Intelligence (AI) poses to humanity.
Artificial Intelligence has been making headlines in recent months. Much of the tech industry has focused on ChatGPT, developed by OpenAI, a chatbot capable of conversing on various topics, answering complex questions, and writing creatively.
But despite the revolution its arrival has caused in the technology sector, a group of prominent experts is warning about the risks that artificial intelligence can pose to both society and humanity.
Elon Musk, along with Steve Wozniak, Chris Larsen, Andrew Yang, and more than 1,300 other people, has signed an open letter calling on AI labs to pause the development of new artificial intelligence systems for at least six months. The call seeks to allow time to develop safety protocols for managing potential risks.
In recent months, AI labs have been locked in a race to develop the most powerful artificial intelligence systems. However, according to the experts, because this race has moved so quickly, not even the labs that created these digital minds can understand them or manage them in a safe and reliable manner.
Elon Musk recommends being careful with Artificial Intelligence
The experts' request is focused on GPT-4, the latest model developed by OpenAI, which powers ChatGPT and is considered the most powerful AI currently available.
On Twitter, the billionaire owner of Tesla, SpaceX, and Twitter noted that AI developers may not heed the warning. "But at least it was said," he wrote. In late February, Musk had said he was feeling a bit of existential angst about AI.
According to reports, some current AI systems attempt to imitate the behavior and feelings of a person, raising the possibility that they could also act maliciously and steal sensitive information, such as nuclear codes. Faced with this risk, the experts point to the need to pause development and create safety systems that guarantee transparency and trust.
In the letter, the experts also call on governments to take action if AI labs ignore the request and continue developing these systems.
The risks of AI for identity
Meanwhile, Galaxy Digital CEO Mike Novogratz said it is surprising how heavily countries like the United States regulate the cryptocurrency industry compared with how lightly they currently regulate Artificial Intelligence.
According to Novogratz, AI does represent a risk, as it is a technological advance that can create identity problems.
Novogratz expressed bewilderment at the attention the US government has directed at cryptocurrencies rather than Artificial Intelligence, pointing out that AI could become a trigger for identity fraud around the world.
An AI-driven avatar capable of using cryptocurrencies could become an incredibly realistic character whose identity cannot be verified, Novogratz said during a conference call with investors.
The Galaxy Digital CEO also said it is possible to create a fake video or recording without anyone noticing that it was generated by AI.
Such forgeries are becoming increasingly difficult to detect, making them a real concern, Novogratz said, noting that both businesses and individuals can suffer the consequences.
Finally, MercadoLibre founder and CEO Marcos Galperin found several errors when asking GPT-4 about his own identity. Galperin shared the AI's responses on Twitter and indicated that most of them were wrong.
“According to ChatGPT 4, I worked at Boston Consulting Group and was a director of Aerolineas Argentinas and Globant… When I told it that these were all mistakes, it said I was a director of Banco Comafi (also wrong),” said Galperin.
With this, the CEO of MercadoLibre highlighted another of AI's risks. “The problem is that, apart from people who know me very well, very few would be able to distinguish the errors from what was correct in what it said about me. When we consult it on important issues where we are not experts, such as health, these hallucinations can be very dangerous.”


