How it all happens and how we should respond

While much of the discussion around AI image generators focuses on political misinformation, a more immediate threat is already widespread on Facebook (owned by Meta): spammers and scammers using AI-generated images to gain significant social media traction. A recent Harvard Kennedy School Misinformation Review study highlights that these images often appear unlabelled in users’ feeds, even for users who don’t follow the pages posting them, leading to widespread misperception and a pressing need for transparency.

Digital and social media platforms tend to follow a familiar cycle: first, financial opportunism drives the earliest adoption of new technologies; second, bad actors quickly exploit the emerging tools for profit before effective countermeasures are implemented. Recognising this cycle is crucial for anticipating misuse in new technology development and platform policy. Ultimately, it helps us maintain a safe, responsible and trustworthy online ecosystem.

How AI Fuels Online Scams

AI-generated content offers a significant advantage to bad actors because it is so easy to create. This low barrier to entry enables the production of high-volume, compelling content designed to trick unsuspecting social media users. Spam pages employ clickbait tactics, posting AI images and then directing users to off-platform content farms and low-quality domains, typically via URLs in the first comment. For example, some pages used AI images of cabins or tiny homes to lure users to websites purportedly offering building instructions. These pages often increased their posting volume and shifted from sharing links to posting AI images, likely perceiving that Facebook’s algorithm would favour image-based content. AI tools thus let spammers and scammers efficiently produce sensational, low-effort content that maximises engagement metrics.
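To make this tactic concrete, here is a minimal Python sketch of the kind of heuristic a researcher might use to flag the pattern: an AI image post with an off-platform link dropped in the first comment. Everything here is an assumption for illustration; the Post fields, the AI-image signal and the example data are hypothetical, not Facebook’s actual detection logic.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified post record; real platform data would be richer,
# and has_ai_image would come from an upstream AI-image classifier.
@dataclass
class Post:
    page_name: str
    has_ai_image: bool
    comments: list[str] = field(default_factory=list)

def first_comment_link(post: Post) -> str | None:
    """Return a URL found in the post's first comment, if any."""
    if not post.comments:
        return None
    for token in post.comments[0].split():
        if token.startswith(("http://", "https://")):
            return token
    return None

def looks_like_clickbait_spam(post: Post) -> bool:
    """Flag the pattern described above: an AI image paired with an
    off-platform link in the first comment."""
    return post.has_ai_image and first_comment_link(post) is not None

# Made-up example mirroring the tiny-home lure described in the study:
post = Post(
    page_name="Tiny Home Dreams",
    has_ai_image=True,
    comments=["Full building plans here: https://example-content-farm.site"],
)
print(looks_like_clickbait_spam(post))  # True
```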

The Harvard Kennedy School Misinformation Review study found that 50 of the analysed pages had changed their names, often from unrelated subjects, and showed a massive jump in followers after the name change but before any new organic activity. One striking example is the “Life Nature” page on Facebook, originally a band page called “Rock the Nation USA” with around 9,400 followers. After being hacked and renamed on December 29, 2023, it gained an astonishing 300,000 followers in just one week while transitioning to posting AI-generated content. Hijacking pages in this way not only provides a ready-made audience but also lends misleading information an air of trustworthiness, making fraud harder to discern on Facebook.
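This hijacking signature, a rename plus a follower spike before any new organic activity, is simple enough to sketch. The snapshot structure, threshold and exact dates below are hypothetical illustrations under the assumption that page histories can be sampled over time; only the “Life Nature” follower figures come from the study.

```python
from dataclasses import dataclass

# Hypothetical snapshot of a page's public state at a point in time.
@dataclass
class PageSnapshot:
    date: str
    name: str
    followers: int
    posts_since_rename: int

def suspicious_rename(before: PageSnapshot, after: PageSnapshot,
                      jump_ratio: float = 5.0) -> bool:
    """Flag a page whose name changed and whose follower count jumped
    sharply before it posted any new organic content."""
    renamed = before.name != after.name
    follower_jump = after.followers >= before.followers * jump_ratio
    no_new_activity = after.posts_since_rename == 0
    return renamed and follower_jump and no_new_activity

# Roughly the "Life Nature" case (dates and final count approximated):
before = PageSnapshot("2023-12-28", "Rock the Nation USA", 9_400, 0)
after = PageSnapshot("2024-01-05", "Life Nature", 309_400, 0)
print(suspicious_rename(before, after))  # True
```

The jump_ratio threshold is arbitrary here; a real analysis would calibrate it against baseline follower growth for legitimate pages.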

Facebook’s Algorithm Inadvertently Boosts Unconnected Content

A key finding is that the Facebook Feed algorithm often recommends unlabelled AI-generated images to users who don’t follow the posting pages, likely driven by the algorithm’s inherent tendency to promote content that generates engagement. The shift is visible in Facebook’s own data: views of “unconnected posts” (content from sources users aren’t directly connected to) rose dramatically from 8% of Feed content views in Q2 2021 to 24% in Q3 2023. In other words, nearly a quarter of what users see originates from sources they don’t follow, making them highly susceptible to algorithmically boosted AI spam.

This rise in “unconnected posts” marks a departure from the traditional social media model, in which content primarily came from direct connections or followed pages. By increasingly prioritising content from unknown sources for engagement, likely a strategic move to compete with platforms like TikTok, Facebook is dismantling a protective filter. Users are now routinely exposed to unscreened, potentially hostile content, increasing their vulnerability to deception.

Conclusions

Nowadays, comments on Facebook posts often suggest that users don’t realise what they are looking at: social media users congratulating an AI-generated child on an AI-generated painting, for example. Scam accounts exploit this unquestioning attitude to harvest personal information or sell non-existent products. As real and artificially created content become harder to distinguish, trust in media and information will likely erode further.

There is an urgent need for social media platforms such as Facebook to prioritise detection and transparency measures. Researchers, especially those specialising in information science, must continue to study, highlight and report the online dynamics of AI misuse. We often talk today about collaboratively building a safe, responsible and trustworthy AI ecosystem; in this case, that means constructing a more transparent and resilient social media space by addressing AI-powered exploitation and deception.


Thanks for reading my takes on social media and society. If you would like to learn more about (mental) health, personal development, and/or (online) education from me, feel free to subscribe to my newsletter below, and browse my blog, Society & Growth, for more content at https://societyngrowth.co.site.
