Amid the national discussion in the US about AI safety and AI-generated content, an app researcher spotted an effort by Instagram to flag AI posts for the social media app's 2+ billion users.

Dark Reading Staff, Dark Reading

August 2, 2023

2 Min Read
[Image: The Instagram app shown on various devices. Source: Aleksey Boldin via Alamy Stock Photo]

Instagram appears to be implementing a feature that would label social media posts created by ChatGPT and other artificial intelligence as "AI-generated content." It's a move that security researchers say is an important step to making the Web safer.

The feature was spotted recently by app researcher Alessandro Paluzzi, and it comes on the heels of Instagram parent Meta and six other Big Tech companies meeting at the White House to announce voluntary commitments to securing AI. Those commitments include implementing a watermark to flag content that originates from "synthetic" users.

"We've already seen a dramatic increase in abuses of deepfake images and videos circulating online," said Eduardo Azanza, CEO of Veridas, via email. "As artificial intelligence advances, it will become more and more challenging to distinguish between authentic and artificially generated media. Without some sort of label, the public is left to rely on their personal intuition alone."

Deepfakes and AI-authored media have become a topic of national discussion thanks to the Hollywood strikes by SAG-AFTRA actors and WGA writers, ongoing moves by the Biden Administration to create cohesive national policies for secure AI development and use, and AI's increasing presence in both online and real-world crime. In a testament to how concerning the situation is on the cybercrime front in particular, the FBI recently issued an alert about a sextortionist ring using fake social media posts to con children and adults. And in another example, a cybercriminal earlier this year attempted to extort $1 million from an Arizona woman whose daughter he claimed to have kidnapped, using a clone of the child's voice in a deepfaked plea for help.

Meanwhile, security tools currently detect AI-generated content at a fairly high rate, but researchers warn that cybercriminals are becoming increasingly adept at evading those protections.

For now, helping everyday people differentiate between what comes from a chatbot and what doesn't, and what's real and what's not, is a crucial first step to mitigating AI's varied spectrum of threats, researchers say.

"We view this move towards a more transparent media landscape as extremely positive," Azanza said of Instagram's labeling effort. "If we want to integrate AI successfully into our daily lives, it is important for large, impactful companies to lead the charge in aligning with standards and regulations that enforce accountability and responsibility."

Neither Meta nor Instagram immediately responded to Dark Reading's request for comment.

About the Author(s)

Dark Reading Staff

Dark Reading

Dark Reading is a leading cybersecurity media site.

