Facebook has released a messenger app for children over the age of six. Safe communication via the instant messaging service is to be ensured by special filter algorithms and moderator support.
Facebook points to a moderator team of real people who look after the Messenger app around the clock. This is meant to guarantee “quick action in reporting” – inappropriate content is supposed to be promptly reviewed and deleted. According to the US network, the algorithm for filtering content harmful to minors also follows a stricter policy, and therefore different criteria, than the regular Facebook software. However, the company has not commented on how quickly such reports are processed and content is deleted, reports Gizmodo.
The security and transparency of the messenger app’s features are the most important aspects of such a software offering for children – yet Facebook has shown that it cannot always meet these requirements. On the one hand, protection against racist and sexist content on the social network is not always guaranteed. On the other hand, Facebook had problems adequately dealing with the potential influence of Russian social bots on the 2016 US election campaign. The question of how the company can protect the most vulnerable members of society – children – from potential dangers such as violence, sexual harassment, and pedophilia remains unanswered.
Ever since the security failures in the YouTube Kids app came to light, online offerings for younger users have come under special scrutiny: fake videos showing supposedly harmless cartoon characters in violent or pornographic scenes drew heavy criticism of Google’s children’s software. Facebook now has to prove that it is better positioned than the competition.