TikTok, which launched a new Q&A feature for creators to answer questions from fans last week (March 4th), announced on March 10th that it will introduce a new commenting feature that lets creators approve or reject comments before they are published on their content. Another new feature is aimed at commenting users: it pops up a box prompting them to reconsider posting inappropriate or unkind comments.

According to TikTok, the new features aim to maintain a supportive and positive environment where people can focus on being creative and finding a community.

Instead of deleting offensive comments after the fact, creators who opt in to the new "Filter All Comments" feature will be able to choose which comments appear alongside their videos. When the feature is enabled, each comment must be individually reviewed and approved using a new comment management tool.

This feature is an extension of TikTok’s existing comment management capabilities. Previously, creators could filter spam and other offensive comments, or filter by keyword, much as on other social apps like Instagram.

“Filter All Comments,” however, means that no comment is published unless the creator approves it. This gives creators complete control over their presence on the platform and helps prevent bullying and abuse. But it also allows creators to spread false information without any pushback, or to appear better liked than they really are. That can be a real problem: when a brand is deciding which creator to partner with to promote a product, curated comments can give a false impression of how users actually feel.

The other new feature, by contrast, encourages users to rethink comments that appear to be bullying or otherwise inappropriate before posting them. It also reminds users of TikTok’s community guidelines, giving them a chance to edit their comments before sharing.

This kind of “nudge” slows people down instead of letting them react impulsively, giving them time to stop and think about what they are saying. TikTok already uses nudges to curb the spread of misinformation, asking users whether they really want to share unverified claims that fact-checkers have been unable to confirm.

Other social networks have taken years to add prompts asking users to stop and think before posting. Instagram, for example, launched in 2010, but it took nearly a decade before it decided to test a feature that encourages users to reconsider before posting offensive comments. Twitter, meanwhile, said just last month that it was testing a new feature that prompts users to rethink harmful replies — and the company has been running variations of the same test for nearly a year.

Social networks have hesitated to incorporate such prompts into their platforms, even though the prompts have demonstrated a powerful ability to influence user behavior. For example, after Twitter began encouraging users to read articles linked in tweets before retweeting them, users opened those articles 40% more often. Yet most networks still rely on features like Instagram’s “View Hidden Comments” and Twitter’s “Hide Replies” to downrank or hide negative comments.

TikTok says it consulted industry partners in developing the new policies and features. It also announced a partnership with the Cyberbullying Research Center (CRC), which studies cyberbullying and online abuse, and says it will continue to work with the CRC on other efforts to promote a positive environment.
