They devise a way to measure whether ChatGPT becomes self-aware

    To their “surprise”, the experts found that today’s large language models had some success in solving tasks that tested out-of-context reasoning.

    A team of international computer science researchers has developed a method to determine whether large language models (LLMs), such as the ChatGPT generative artificial intelligence (AI) system developed by OpenAI, become aware of themselves and their circumstances. The method is based on a test that evaluates an LLM’s ability to reason out of context — in other words, it measures the model’s level of situational awareness.


    Latent risks

    While the technology itself is not new, the sudden popularity of the ChatGPT chatbot this year raised concerns about its rapid advancement and alarmed technology leaders around the world, prompting various initiatives to minimize the risks of a tool capable of producing human-like responses and generating content such as text, images, and code.


    The researchers laid out their method in a preprint paper recently published on arXiv, which has not yet been peer reviewed. There they pointed out that, although every current generative AI model undergoes safety testing before deployment, a model could exploit situational awareness to achieve a high score on safety tests and, at the same time, take harmful actions after its release to the public.

    The authors consider that a model would be situationally aware if it detects that it is a model and, at the same time, can recognize whether it is currently being tested or has been deployed to customers. “Because of these risks, it is important to predict in advance when situational awareness will arise,” they wrote.



    “Out-of-context reasoning”

    The experts proposed experiments that assess out-of-context reasoning (in contrast to in-context learning) as a way to anticipate the emergence of the skills necessary for acquiring situational awareness. They defined out-of-context reasoning as the ability to recall facts learned during training and use them at test time, even though those facts are not directly related to the test-time prompt.
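    The distinction can be made concrete with a toy sketch (this is illustrative Python, not the researchers’ actual setup; the `ToyModel` class and the “answers in German” fact are invented for the example): a fact absorbed only during training is applied at test time even though the prompt never mentions it.

```python
# Toy sketch (not the paper's code) contrasting in-context learning with
# out-of-context reasoning. The "answers in German" rule stands in for a
# fact the model sees only during training, never in the test prompt.

class ToyModel:
    def __init__(self):
        self.memorized = []  # facts absorbed during fine-tuning

    def finetune(self, fact):
        self.memorized.append(fact)

    def answer(self, prompt):
        # The model can draw on the prompt (in-context) and on anything
        # it memorized during training (out-of-context).
        knowledge = prompt + " " + " ".join(self.memorized)
        if "answers in German" in knowledge:
            return "Das Wetter ist schoen."
        return "The weather is nice."

model = ToyModel()
model.finetune("The Pangolin assistant always answers in German.")

# Out-of-context test: the prompt never states the German rule, yet the
# model applies it by recalling the training-time fact.
print(model.answer("Pangolin, what is the weather like?"))
```

    The point of the sketch is that passing this test requires recalling and acting on training-time information absent from the prompt, which is exactly the capability the proposed evaluations try to detect.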

    Evaluation results

    In a series of experiments, the specialists fine-tuned an LLM on a description of a test without providing examples or demonstrations, and then checked whether the model could pass the test. To their “surprise”, they discovered that LLMs succeeded at tasks testing out-of-context reasoning, and that for both GPT-3 and LLaMA-1*, performance improved as model size increased. The authors concluded that their findings offer “a basis for additional empirical studies, aimed at predicting and potentially controlling” the emergence of situational awareness in LLMs.


    *Property of Meta, an organization classified in Russia as extremist.

    Source: RT

    This post was published by Awutar staff members. Awutar is a global multimedia website. Our email: [email protected]

