Security Stop Press : X Users Profiting From US Election Misinformation

The BBC has reported that its own investigation has uncovered a number of X users profiting substantially from sharing US election misinformation, AI-generated images, and conspiracy theories. The investigation reportedly revealed networks of accounts amplifying each other’s content, boosting both reach and earnings potential.

The BBC reports finding dozens of accounts that regularly re-share each other’s posts, creating an engagement cycle that drives revenue. Some account holders allegedly claim earnings of hundreds to thousands of dollars, with payouts based on engagement from premium users (likes, shares, and comments) following changes to X’s payment model in October. In one example cited by the BBC, the account "Freedom Uncut" reportedly garners millions of views for its holder (known as ‘Free’), who earns thousands of dollars a month by sharing provocative AI-generated images and satirical depictions of political figures.

X has faced criticism for what some say is a lenient stance on misinformation. Unlike other platforms that restrict earnings from misleading content, X appears to lack strong guidelines on misinformation, raising concerns that its payment model incentivises provocative, often false, content. False claims about election fraud and defamatory allegations against candidates are spreading, with some content reaching wider audiences on Facebook and TikTok.

Politicians are increasingly turning to influencers on X for visibility. "Freedom Uncut" reportedly told the BBC that local politicians have approached him, while another account, "Brown Eyed Susan," run by a Kamala Harris supporter, says it too has attracted requests from political campaigns. Many of her most viral posts appear to involve conspiracy theories, illustrating how rapidly misinformation can spread.

To counteract misinformation networks, businesses and users should practise critical fact-checking before engaging with or sharing content. Social media platforms could implement stricter monetisation rules and transparent misinformation policies to limit the profitability of sensationalist content. For organisations, promoting media literacy and responsible digital interaction can help shield against the destabilising impact of online disinformation.
