TikTok Implements New User Safety Measures: Just The Facts

TikTok recently announced that it was introducing new measures to prevent harmful content from making its way onto the popular short-form video app. These changes make it the latest social media platform to implement user and brand safety measures. The updates to TikTok’s user safety went into effect July 9.

What Are The New User Safety Measures From TikTok?

Developed with the U.S. Content Advisory Council, TikTok’s new user safety measures use automation not only to identify posts with violations but also to remove them automatically. “Over the next few weeks, we’ll begin using technology to automatically remove some types of violative content identified upon upload, in addition to removals confirmed by our Safety team,” explains Eric Han, Head of U.S. Safety for TikTok, in a post about the new measures.

Currently, all TikTok videos pass through an automated upload process that scans for content that could violate TikTok’s code of conduct. Once flagged, videos are further investigated by safety moderators and taken down if necessary. Han notes that the new automatic removal system is “reserved for content categories where our technology has the highest degree of accuracy.” This spares moderators from reviewing distressing videos and lets them focus on more nuanced violations, such as bullying and misinformation, as well as on community reports of violations.

TikTok has updated its in-app display for violations and suspended accounts, including reminders indicating when an account is close to being permanently removed. Creators will have the right to appeal a video removal. TikTok said that since it launched testing of the automated removal system, the “false positive rate for automated removals is 5% and requests to appeal a video’s removal have remained consistent.”

Why Is TikTok Implementing User Safety Measures?

Andrew Hutchinson for Social Media Today reports that, with a third of TikTok’s user base aged 14 or younger, “these [user safety changes] are important measures,” especially considering TikTok has come under scrutiny and faced lawsuits for violations in the past. TikTok needs “to implement more measures to protect users from dangerous exposure, and these new tools should help to combat violations, and stop them from ever being seen.” TikTok likely wants to avoid further scrutiny and negative news as it leverages its ascent to the top tier of social media platforms for video sharing and creator content.

What Do The New User Safety Measures Mean For Advertisers?

Whether it’s called user safety or brand safety, brands don’t want their advertising running alongside anything that doesn’t align with their brand messaging or that might be considered questionable or offensive to their consumers. “When asked by Integral Ad Science (IAS) to assess various types of digital media in October 2020, 55% of U.S. digital media professionals said that social media was most likely to experience brand risk incidents in the next 12 months,” reported a January eMarketer article on the state of brand safety. (Additionally, many consumers assume a brand that appears near negative content is endorsing that content.) Brands are more likely to spend media dollars with a platform they can trust. By prioritizing user and brand safety, TikTok can reassure brands advertising on the platform that the app aims to be as free from harmful content as possible.

Originally published at insights.digitalmediasolutions.com.
