Instagram is to test a new tool which will allow users to report content they believe is misinformation.
The Facebook-owned social media platform said it would launch the new trial feature at the end of August.
It said a new “False Information” tag would be added to the existing reporting tools, within the section where users can flag content as inappropriate.
Instagram said it would use reports from the new tool to train artificial intelligence to proactively find and rate misinformation on the platform without requiring reports from users.
The photo and video sharing platform has previously been heavily criticised for failing to remove other forms of harmful content, including posts around the themes of suicide and self-harm.
Ian Russell, the father of Molly Russell – the teenager who took her own life after viewing disturbing material online – has said he believes harmful content on social media contributed to his daughter’s death, after finding material relating to depression and suicide on her accounts.
He, alongside a number of charities and online safety groups, has urged social media firms such as Facebook to take stronger action against such content.
Earlier this year and in response to that criticism, Instagram announced a ban on graphic images of self-harm and the removal of non-graphic images of self-harm from searches, hashtags, and the explore tab.
Instagram said the new false information tool was an initial step in a more comprehensive approach from Facebook, which it said was investing heavily in tackling misinformation across its apps.