In a recent report, the law enforcement agency focused primarily on fraud attempts, disinformation, and cybercrime.
The European Union Agency for Law Enforcement Cooperation (Europol) warned on Monday about the potential criminal misuse of ChatGPT, the chatbot based on artificial intelligence (AI), in a report entitled ‘ChatGPT: the impact of large language models on law enforcement’.
“As the capabilities of large language models such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals paints a bleak picture,” the agency says.

First of all, the document notes, ChatGPT’s ability to produce highly realistic text can be used to imitate the speech style of specific individuals or groups.
In addition, the bot’s capabilities can be exploited for propaganda and disinformation purposes, as it allows users to “generate and disseminate messages that reflect a specific narrative with relatively little effort.”
ChatGPT is also capable of producing code in several different programming languages. “For a potential criminal with little technical knowledge, this is an invaluable resource to create malicious code,” says Europol.
Beyond this, ChatGPT is an artificial intelligence tool that has gained enormous popularity in recent months. At the same time, the widespread use of its capabilities, such as generating original texts and completing assigned tasks in seconds, is causing concern in various circles, from education to finance.
Thus, for example, several Wall Street banking giants are limiting or prohibiting the use of ChatGPT among their employees, on the grounds that the popular ‘chatbot’ could provide erroneous information or answers regarding markets and commercial communications.
Meanwhile, other studies support Europol’s conclusions. An investigation by the online news-rating tool NewsGuard found last week that GPT-4, the latest version of the popular system, is more likely to advance false information when asked to do so than its predecessor, GPT-3.5.
Even OpenAI CEO Sam Altman recently admitted to being “a little scared” about creating ChatGPT and what people might use it for.
Source: RT