“This is the face of pain.” That is how content creator QTCinderella opens a video in which she denounces the dissemination of a hyper-realistic pornographic sequence created with artificial intelligence. The images she refers to are fake; the damage caused is not. The popular 29-year-old internet figure joins a long list of people affected by this type of creation, known as deepfakes. Artificial intelligence applications make them increasingly easy to produce and difficult to identify as fake, while their regulation and control lag behind the development of these technologies. The number of such videos on the internet doubles every six months and accounts for more than 134 million views. More than 90% of cases are non-consensual pornography. An investigation by Northwestern University and the Brookings Institution (both in the US) warns of their potential danger to security. Other studies warn of the risk of interference and manipulation in democratic political processes.
Sonia Velazquez, a university student from Seville, was curious about how deepfake videos work and asked her boyfriend, a graphic design student, to show her a demonstration; he agreed to do it with his own image. “It started as a game, but it is not one. I saw myself as vulnerable. I knew it wasn’t me, but I imagined all my friends sharing that video, joking around and God knows what else. I felt dirty, and even though we deleted it right away, I can’t stop thinking about how easy it can be and how much damage it can cause,” she recounts.
“Seeing yourself naked against your will and spread across the internet is like being violated. It shouldn’t be part of my job to have to pay money to get these things removed, to be harassed. The constant exploitation and objectification of women exhausts me,” says QTCinderella after the circulation of the deepfake video of which she was a victim. Before her, the British artist Helen Mort suffered the same attack, with photos taken from her social networks. “It makes you feel powerless, like you’re being punished for being a woman with a public voice,” she says. “Anyone from any walk of life can be a target, and it seems people don’t care,” laments another victim of these hyper-realistic videos, who asked not to be identified to avoid searches on social networks, although she believes she has managed to delete every trace.
Hoaxes are as old as humanity. Faked photos are not new either, but they became widespread at the end of the last century with easy, popular still-image editing tools. Video manipulation is more recent. The first public complaint dates from late 2017, against a Reddit user who used the technology to undress celebrities. Since then, the practice has not stopped growing and has expanded to the creation of hyper-realistic audio.
“Deepfake technologies present significant ethical challenges. They are developing rapidly and becoming cheaper and more accessible by the day. The ability to produce realistic-looking and -sounding video or audio files of people doing or saying things they didn’t do or say brings unprecedented opportunities for deception. Politicians, citizens, institutions and companies can no longer ignore the construction of a set of strict rules to limit them,” sums up Lorenzo Dami, professor at the University of Florence and author of a study published on ResearchGate.
Fake hyper-realistic videos mainly affect women. According to Sensity AI, a research company that tracks hyper-realistic fake videos on the internet, 90–95% of them are non-consensual pornography, and nine out of ten of those involve women. “This is a problem of sexist violence,” Adam Dodge, founder of EndTAB, a non-profit organization for education in technology use, told the Massachusetts Institute of Technology (MIT).
The European Institute for Gender Equality takes the same view, including these creations in its report on cyberviolence against women as one of the forms of sexist aggression.
Regulation lags far behind the technological advances that make deepfake creations possible. The European Commission’s artificial intelligence regulation (AI Act) is still a proposal, and in Spain the development of the state agency for the supervision of artificial intelligence is pending. “Once a technology is developed, there is no stopping it. We can regulate it, temper it, but we are late,” warns Felipe Gomez-Pallete, president of Calidad y Cultura Democraticas.
Some companies have moved ahead to avoid being party to criminal acts. Dall-e, one of the digital creation applications, warns that it has “limited the ability to generate violent, hateful, or adult images” and has developed technologies to “prevent photorealistic renderings of the faces of real individuals, including those of public figures.” So have other popular artificial intelligence applications for audiovisual creation. A group of ten companies has signed a catalog of guidelines on how to build, create and share AI-generated content responsibly. But many others migrate to simple mobile apps or roam freely on the web, including some expelled from their original servers, such as the one that became popular with the slogan “undress your friend” and took refuge on other open-source or messaging platforms.
The problem is complex because freedom of expression and creation collides with the protection of privacy and moral integrity. “The law does not regulate a technology, but rather what can be done with it,” warns Borja Adsuara, university professor and expert in digital law, privacy and data protection. “Only when a technology can have no use other than a bad one can it be prohibited. But the only limit to freedom of expression and information is the law. A technology should not be banned because it can be dangerous. What must be prosecuted are its misuses,” he adds.
The virtual resurrection of Lola Flores
In this regard, the Italian professor Lorenzo Dami identifies both positive and negative uses of this technology. Among the former, he highlights audiovisual production, better interaction between machines and humans, creative expression (including satire), medical applications, culture and education. One example was the viral virtual resurrection of Lola Flores for an advertising campaign, carried out with the consent of her descendants.
On the other side of the scale are hyper-realistic creations used for sexual extortion, insults, revenge pornography, intimidation, harassment, fraud, discrediting and distorting reality, reputational damage, and attacks of an economic nature (altering markets), a judicial nature (falsifying evidence) or against democracy and national security.
On this last point, Venkatramanan Siva Subrahmanian, professor of cybersecurity and author of Deepfakes and International Conflicts, warns: “The ease with which they can be developed, as well as their rapid diffusion, points toward a world in which all states and non-state actors will have the ability to deploy hyper-realistic audiovisual creations in their security and intelligence operations.”
Along the same lines, Adsuara believes that “more dangerous” than fake pornography, despite its far greater incidence, is the potential damage to democratic systems: “Imagine that three days before an election a video appears of one of the candidates saying something outrageous, and there is no time to deny it, or, even if it is denied, its virality is unstoppable. The problem with deepfakes, as with hoaxes, is not only that they are good enough to seem credible, but that people want to believe them because they match their ideological bias, and they share them without checking, because they like them and want to think they are true.”
The current regulation, according to the lawyer, focuses on the result of the actions and the intention of the offender. “If the scene never existed because it is fake, you are not revealing a secret. It should be treated as a case of slander or as a crime against moral integrity, committed by disseminating the video with the intention of publicly humiliating another person,” he explains.
“The solution,” adds Adsuara, “could lie in applying a legal figure already provided for in cases involving minors: child pseudo-pornography. This would allow crimes against privacy to cover not only real videos but also realistic ones with intimate images that resemble those of a person.”
Technological identification systems can also be applied, although this is increasingly difficult because artificial intelligence is likewise developing ways to evade them.
Another avenue is to require that any hyper-realistic content be clearly identified as such. “It is in the digital rights bill, but the artificial intelligence regulation has not yet been approved,” explains the lawyer.
In Spain, this type of labeling is common in faked pornography to avoid legal problems. “But these women,” warns Adsuara, “still have to put up with their image being merged into a pornographic context, and that goes against their right to their own image, even if the video is not real, because it is realistic.”
Despite the obvious harm of these videos, complaints in Spain are few compared to those registered in the United States, the United Kingdom or South Korea, although the proliferation of such videos is proportionally similar. The Spanish expert in digital law believes they are given less importance because their falsehood is evident, and because a complaint sometimes only acts as a megaphone for the offender, who is seeking precisely that attention. “Besides,” he laments, “this society is so sick that we don’t perceive it as wrong and, instead of defending the victims, they are humiliated.”
Josep Coll, director of RepScan, a company dedicated to removing harmful information from the internet, also confirms the scarcity of complaints by public figures affected by fake videos. He does, however, note that his firm handles many extortion attempts involving them. He recounts the case of a businessman targeted with a fake video that included images of a country he had recently visited, in order to sow doubt among those in his circle who knew the trip had taken place. “And revenge porn,” he comments. “We get cases of that every day.” “In extortion cases, they are looking for a hot-headed reaction: that victims pay to have the video removed, even knowing it is fake,” he adds.
Source: EL PAIS