“A polarized battlefield?”: community notes, Elon Musk's bet against misinformation

    “Community notes are the best thing that has happened to Twitter,” writes one user. “I was really fed up with Twitter lately, but community notes give me a reason to get involved again,” says another. “Community notes are the best thing Twitter has invented in years; I love seeing a post and then seeing that its author is blatantly lying,” adds a third.

    On X, formerly Twitter, it is easy to find all kinds of opinions. But dozens of posts about the new “community notes” are celebratory. In Spain, their appearance has surged in the last week of August. The “notes” are the platform's community moderation system, written and rated by its users. The idea arose on the old Twitter of Jack Dorsey, its co-founder. Created in 2021, the feature was then called Birdwatch. Since purchasing the platform, Elon Musk has renamed it and become its biggest supporter. His own account has received corrections: “Community Notes apply equally to all accounts on this platform without exception, including world leaders and our largest advertisers.” As on Wikipedia, the idea is to transfer the responsibility for reducing misinformation on the network to a group of registered users.

    The notes arrived in Spain in April as a test and have been open to all users since July, making Spain the first European country where this has happened. France, Germany, Italy, the Netherlands and Belgium followed a month later and, as of this week, the notes are open in 23 European countries. Registration is also open in several Latin American countries, but except for Brazil the system is not fully functional there. X maintains a page where posts with their “notes” can be read in all languages.

    The rate of publication of notes is low and focuses on viral posts or famous accounts. The official X account that collects the “most useful” notes has only 235 messages. The affected profiles range from Alberto Nunez Feijoo, candidate for the presidency of the Government (who omitted details about a sexual offender), to a Mexican journalist (who exaggerated the danger of water released in Japan) and the Secretary of State for Equality, on consent and the Penal Code. In recent days, posts about Luis Rubiales and the World Cup won by Spain's soccer players have dominated the notes. Several traditional media outlets also receive their reprimands, as do widely circulated tweets about rare phenomena of nature (all false) or an animal escaping a cheetah thanks to a fart (made up).

    Community note to a tweet by Alberto Nunez Feijoo

    The system is perfectly placed to become a new political and cultural battlefield. There are users of the platform who beg for a note on posts they dislike: the pleasure of seeing a rival suffer a note backed by hundreds or thousands of votes is much greater than when the “fake” label comes from a media outlet or fact-checker. To protect against self-interested takedowns, the system is more complex than it seems. Registered users see a handful of suggested notes that other users have written. They must evaluate them: not only whether each one is accurate, but why or why not.


    “The process consists of rating whether the note is useful, and then a series of criteria appears so you can mark what determined your decision,” says the user Daurmith, who prefers not to give her real name. “Whether it uses quality sources, whether it provides important context, whether the language is neutral. And if you classify it as not useful, there is a different set of criteria: biased language, it provides no sources, or they are not reliable.”

    The system also prevents editors from getting into arguments. “Within the notes you cannot make a battlefield, because if someone writes a rating and I rate it, my rating does not appear for that person,” says streamer Andrea Sanchis, a registered user. Editors are anonymous: a user's handle on X is not the same as their nickname on the note-editing platform. To join this collective jury, it is only necessary to have a telephone number and fill out a short form. Nor do you have to pay the monthly subscription for X.

    But it is inevitable that things sometimes get complicated. EL PAIS has seen the messages below a video linked to the Luis Rubiales controversy. It was a particularly contentious post: there were 7 proposed notes (none approved at the time) and another 6 explaining why it does not need one. One user who argues that no context should be added says, for example: “The video is clear and provides the necessary context. No note required. The viewer is free to think as he wishes and sees fit, but the information or video is not manipulated.” The acronym “NNN” (No Note Needed) may soon become a central message in the battle over the new system. X's algorithm decides, based on votes from other users, whether to show any “note” to all users.

    Tweet from a viral account corrected by “community notes.”

    X only publishes notes that have received enough positive ratings from users who usually disagree with each other. This should indicate, in principle, ideological transversality, and it is how X defends itself from partisan or troll operations. Once published, a note can continue to be rated, even by unregistered users who view it; it is not unusual for a published note to end up disappearing later.
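X has open-sourced its actual note-ranking code, which models rating histories with matrix factorization; the core intuition, though, is simply that a note surfaces only when raters who usually disagree both find it helpful. The toy Python sketch below illustrates that idea under stated assumptions: the two fixed "groups", the 0.6 helpfulness threshold, and the minimum-ratings rule are all illustrative inventions, not X's real parameters or API.

```python
# Toy illustration of cross-group agreement (NOT X's actual algorithm):
# a note is "published" only when raters from both sides find it helpful.
from collections import defaultdict

def note_status(ratings, threshold=0.6, min_per_group=2):
    """ratings: list of (rater_group, is_helpful) pairs for one note.

    Returns "published" only if raters from at least two groups have
    rated it, each group has enough ratings, and every group's share
    of helpful votes clears the threshold.
    """
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)
    if len(by_group) < 2:
        return "needs more ratings"   # no cross-group signal yet
    for votes in by_group.values():
        if len(votes) < min_per_group:
            return "needs more ratings"
        if sum(votes) / len(votes) < threshold:
            return "not published"    # one side finds it unhelpful
    return "published"

# A note both groups like is published; a one-sided note is not.
cross_partisan = [("A", True), ("A", True), ("B", True), ("B", True), ("B", False)]
one_sided      = [("A", True), ("A", True), ("B", False), ("B", False)]
print(note_status(cross_partisan))  # published
print(note_status(one_sided))       # not published
```

This also shows why published notes can later disappear, as the article describes: fresh ratings from the other "side" can push a group's helpful share back under the threshold.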

    Although registered users do not have to rate everything the platform suggests to them, there are many more notes in circulation than the handful that end up being published: “By eye, I would estimate that only between 5% and 10% of the notes that are written are made public,” calculates Manuel Herrador, a professor at the Universidade da Coruna and another registered user.

    Does it serve to control misinformation?

    How useful is such a system for controlling disinformation? There is something good about it, and perhaps that is why it provokes enthusiasm for now: it adds collective intelligence. “It is true that collective moderation and contextual information are a very good tool for capturing quality information and, at the same time, better identifying misinformation,” says Silvia Majo, professor at Vrije Universiteit Amsterdam and researcher at the Reuters Institute. “Programmer help websites like Stack Overflow work like this. This way of adding collective intelligence and arranging it contextually is a good ally for surfacing the highest quality content,” she says.

    Wikipedia is the great success story of this type of collective intelligence on the internet. Can real-time information platforms adapt the format? It is not easy. Notes usually appear hours or days after the original posts. If a user has interacted with a post, X says it will show them the note afterwards. “I think it is a good solution for a platform like X,” says Àlex Hinojo, a Wikipedia editor. “But you need a good volunteer base. It will depend on how they segment participation by language or topic and whether the on-ramp is easy,” he adds. But can it become a polarized and useless battlefield? “Yes, of course. It is a difficult balance,” says Hinojo.

    “The overall process is quite demotivating,” says Herrador. “You end up rating a lot of notes and then seeing that very few are made public, or writing many that go nowhere. I don't see a future for it in the medium or long term, because as an altruistic task it is very demotivating, so I consider it likely that it will end up being dominated by interest groups,” he adds. It is one of the challenges. As in the notes themselves, the debate is open. For another notes editor consulted by this newspaper, computer scientist Michel Gonzalez, the altruism and effort are a virtue: “The same thing was said about Wikipedia, and there it stands. The democratization of these systems is what prevents pressure groups from controlling the information.”


    Editing conflicts on Wikipedia occur far from the eyes of its readers. On X, votes and notes appear and disappear in public. The latest post from the official notes account is precisely about this problem: “We updated the scoring algorithm to reduce notes that appear and then disappear as they receive a larger and potentially more representative set of ratings,” the message says.

    Moderators fired

    Twitter, before becoming X, had a huge moderation team that applied labels, penalized tweets or deleted them outright. Those moderators have now been fired. Musk believes the solution is to transfer this responsibility to the platform's users. It will not be easy, according to Professor Majo: “The human layer of professional moderation must always exist. It is evident that, in parallel with automated moderation strategies, misinformation strategies are also evolving, and it is difficult to predict what will come next.”

    Majo highlights the example of Meta's advisory council, which focuses on extremely sensitive cases. It is hard to imagine Musk agreeing to submit to external advice. The council's timelines are also exasperatingly slow for resolving a breaking issue. And the European Union has already warned Musk that he had better think twice under its new rules: “The obligations persist. You can run but you can't hide,” said Commissioner Thierry Breton.

    Musk's push for notes coincides with growing reluctance at other networks over how much effort to devote to moderating their platforms. “There is an important trend within the platforms to remove political content,” explains Majo.

    A scientific article from Purdue University (USA) analyzed the system, then still called Birdwatch, in 2022. Its results detected less misinformation but also less activity, possibly due to fear of retweeting or amplifying a message that later ends up being “punished” by a correction from other users: “The findings suggest that, although the program helps increase written knowledge, reduce extreme feelings and potentially reduce misinformation in content, it generates an economic cost in the form of reduced activity on the platform,” the authors write.


    Source: EL PAIS

    This post was published by Awutar staff members. Awutar is a global multimedia website. Our email: [email protected]

