Instagram Reels reportedly shows sexual content to users who only follow children

November 28, 2023

It’s been a rough few days for Meta. First the tech giant was accused of deliberately targeting children under 13 to use its platforms. Then it seemed to be rejecting ads for period care products on the basis that they were “adult” and “political.” Now it’s facing allegations that Instagram’s Reels algorithm delivers overtly sexual content to accounts which only follow children — as well as ads for big brands alongside them. Overall, it isn’t a great look.

In a new report, The Wall Street Journal tested Instagram’s algorithm by creating accounts that followed only “young gymnasts, cheerleaders, and other teen and preteen influencers” — content involving children and devoid of any sexual connotation. Even so, the Journal’s experiment found that Meta’s TikTok competitor subsequently recommended sexual content to its test accounts, including both provocative adult videos and “risqué footage of children.”

The Journal further found that the child accounts its test accounts followed often had adult men among their followers. When a test account followed some of those adult accounts as well, Instagram’s algorithm appeared to respond by serving it “more-disturbing content.”

All of this is bad enough, but it gets even worse for Meta. The report further found that Instagram Reels displayed ads for companies such as Disney, Walmart, Pizza Hut, Bumble, Match Group, and even the Journal itself alongside such unsolicited, algorithmically delivered sexual content.

In response, dating app companies Bumble and Match Group have both suspended advertising on Instagram, objecting to their brands being placed alongside inappropriate content.

According to Samantha Stetson, Meta’s Vice President of Client Council and Industry Trade Relations, the Journal‘s test results are “based on a manufactured experience that does not represent what billions of people around the world see.” Stetson said that over four million Reels are removed every month for violating Meta’s policies, and a Meta spokesperson further noted that instances of content breaching those policies are relatively low.

“We don’t want this kind of content on our platforms and brands don’t want their ads to appear next to it. We continue to invest aggressively to stop it — and report every quarter on the prevalence of such content, which remains very low,” Stetson said in a statement to Mashable. “Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security and brand suitability solutions.”

Earlier this year Meta rolled out an AI tool designed to determine whether content meets its monetisation policies, classifying it into suitability categories and disabling ads if it falls outside all of them. This tool was expanded to Reels in October.
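
The report doesn’t detail how that tool works internally, but the behaviour described amounts to a classify-then-gate step: score content against policy axes, map it to a brand-suitability tier, and withhold ads when no tier fits. Below is a minimal Python sketch of that logic. The tier names echo the “expanded / moderate / limited” labels Meta uses for its advertiser inventory filter, but the scoring axes, thresholds, and function names here are all hypothetical and not Meta’s actual implementation.

```python
from enum import Enum, auto

class SuitabilityTier(Enum):
    # Tier names echo the "expanded / moderate / limited" inventory labels
    # Meta exposes to advertisers; the mapping below is purely illustrative.
    EXPANDED = auto()
    MODERATE = auto()
    LIMITED = auto()

def classify(risk_scores: dict[str, float]) -> SuitabilityTier | None:
    """Map per-policy risk scores (0.0 = benign, 1.0 = clear violation)
    to a brand-suitability tier. Returns None when the content falls
    outside every tier."""
    worst = max(risk_scores.values(), default=0.0)
    if worst < 0.2:
        return SuitabilityTier.EXPANDED
    if worst < 0.5:
        return SuitabilityTier.MODERATE
    if worst < 0.8:
        return SuitabilityTier.LIMITED
    return None

def ads_enabled(risk_scores: dict[str, float]) -> bool:
    # Per the article: ads are disabled when content falls outside
    # all suitability categories.
    return classify(risk_scores) is not None

# A video scored high on a hypothetical "adult" policy axis gets no ads.
print(ads_enabled({"adult": 0.9, "violence": 0.1}))  # False
print(ads_enabled({"adult": 0.1, "violence": 0.1}))  # True
```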

It’s been a rough few weeks for brands trying to advertise on social media. Earlier this month, big advertisers such as Apple and IBM fled Twitter/X after owner Elon Musk expressed support for an anti-Semitic conspiracy theory, and a Media Matters report found the platform was displaying ads alongside Nazi content.

Twitter/X made the same argument that Meta is mounting now: that the tests which resulted in inappropriate content appearing alongside advertisers were “manufactured.” Yet just as in Twitter/X’s case, the issue is less about how many people saw such content, or how it came to be shown, and more about the fact that it could happen at all.

The Instagram Reels case also differs from Twitter/X’s in one key respect: while Media Matters’ test accounts deliberately followed accounts posting “extreme fringe content,” the Journal’s accounts followed only young athletes and influencers. The sexual content served up appeared to stem entirely from inferences drawn by Instagram’s algorithm.

As such, it seems as though said algorithm could do with some significant adjustments.
