Artificial intelligence already makes it possible to create and edit images with convincing results. As you can imagine, this can have dire consequences: recently, for example, fake photographs of an attack on the Pentagon went viral. To try to prevent further problems, Twitter will expand Community Notes.
Community Notes is a feature that lets Twitter users themselves flag that a particular post contains false information. The note appears just below the tweet, with comments written by moderators.
Now, Twitter is extending that to photos as well. When adding a note, a user can choose whether the additional information applies to the post or to the image itself.
According to the company, the notes will apply to the flagged image and also to future copies of it. So even if someone downloads or screenshots the file and reposts it, the notes will still appear.
AI is in Twitter’s crosshairs
In the text announcing the change, Twitter explicitly mentions fake images created using artificial intelligence.
Last week, a fake photo showed an explosion at the Pentagon, the headquarters of the U.S. Department of Defense.
The image was shared by several Twitter Blue subscriber accounts, including one posing as Bloomberg News.
Before that, montages of Pope Francis in different situations — wearing a huge puffer jacket and riding a motorcycle, among others — had also gone viral on the social network.
Community Notes launched in 2021, on an experimental basis, under the name Birdwatch, as a way to combat fake news: moderators could debunk incorrect or misleading information.
Until December 2022, the notes could only be seen by Twitter users in the US. Starting that month, they began appearing on tweets globally, and more countries gradually gained moderators.