
The Wall Street Journal recently released a report that claims the social media site Instagram is helping connect pedophiles and that the company’s algorithms actively promote child predatory content.
KNX News’ Charles Feldman spoke with Jessica Ghilani, professor and researcher at the University of Pittsburgh Disinformation Lab, about how algorithms are responsible for promoting the information and how the same could be true for the other Meta site, Facebook.
Ghilani shared that the Journal and researchers at Stanford University found that on Instagram, there were "networks of grooming and child predatory seeking people," connecting through content that could be considered child abuse.
“One of the things that they encountered was that rather than being effective at stamping [the content] out, by going through the traditional reporting mechanisms that these platforms have built into them, that they say make these platforms safe, instead what happened was the algorithm, which is of course automated, would produce for them, as they were trying to research this problem, more intensely troubling content that is in the same vein,” Ghilani said.
The professor explained that when a user searches on Instagram, the platform treats the search as a “suggestion” of what that account wants to see and then serves up more of it.
So, as researchers continued to investigate these issues on the platform, the algorithm kept pushing them the very content they were trying to stamp out.
But Ghilani says that wasn’t the most troubling finding.
The most concerning piece of the report, Ghilani said, was that the platform was aware the content it was showing researchers could involve child abuse.
But instead of removing the content from the platform, it would only warn users, giving them the option to click “see content” anyway or to avoid it.
“It would warn you,” Ghilani said, calling the findings “alarming.”
Since the report came out, Ghilani says Meta has claimed to have rectified the situation, removing the option to view such content on the platform, but she is left asking why it was there in the first place.
As for other social media sites possibly having the same thing happening, Ghilani says that she knows it’s happening on at least one other Meta-owned platform, noting that “It certainly is occurring on Facebook.”
It remains unclear whether other platforms, like Twitter or TikTok, have put measures in place to remove such content from their sites. Still, with automation, she says, anything is possible.
“I think this could happen in any space where there is algorithmically, in other words, automated, content that is shown to users,” Ghilani said.