When and where the YouTube war is born
This spring, a multitude of media outlets and newspapers embarked on what amounted to a global war against YouTube, triggering the so-called Adpocalypse. It all started with a screenshot that captured a Netflix pre-roll ad running before a video whose content was judged inappropriate: the video carried a terrorist message.
An algorithm broken from the start
A large proportion of the Mountain View company's advertisers decided to withdraw their advertising investments, shaking YouTube's leadership, who, even if they did not say so publicly, ordered their engineers to develop a technology that would guarantee advertisers high standards of brand safety. An algorithm was therefore born whose task was to sort videos into "good" and "bad" categories by recognizing the language used in them.
The rollout of this new algorithm sent creators into a panic as they saw their revenues compromised (advertising is very often the only income that keeps a channel alive and allows its creators to continue their work). The system was imprecise, not so much because of the technology used, but because of the way content was categorized.
The algorithm could not distinguish videos that were actually dangerous from those performing genuine informational and educational work. It is telling that the same content is broadcast on television without any problem, while under this new "system" it is labelled violent or not recommended, in most cases running into barriers from the platform and effectively undermining YouTubers' advertising revenue.
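To see why language-based categorization over-penalizes news and educational videos, consider a deliberately naive sketch. This is purely illustrative: YouTube's actual classifier is proprietary and far more sophisticated, and every keyword and function name below is invented for the example.

```python
# Toy illustration of a keyword-based "brand safety" filter.
# All keywords and names are hypothetical, not YouTube's real system.
UNSAFE_KEYWORDS = {"terror", "attack", "violence"}

def is_ad_friendly(title: str, description: str) -> bool:
    """Mark a video as unsafe if any blocklisted keyword appears.

    Crude matching like this shows the failure mode described above:
    a news report *about* terrorism triggers the same keywords
    as actual extremist content, so both lose their ads.
    """
    text = f"{title} {description}".lower()
    return not any(word in text for word in UNSAFE_KEYWORDS)

print(is_ad_friendly("Daily vlog: my morning routine", "coffee and emails"))   # True
print(is_ad_friendly("News analysis: terror attack coverage", "journalism"))   # False
```

The second video is legitimate journalism, yet the filter demonetizes it exactly as it would harmful content, which is the imprecision creators complained about.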
The new algorithm and rules for advertising
YouTube, aware of the limits of the solution it initially proposed, announced that it had updated the algorithm and adapted its guidelines to help YouTubers create more ad-friendly material. Little changed in the algorithm's underlying computation; the way it intervenes in video categorisation was "simply" refined. As a result, the list of channels penalized as dangerous shrank by 30%.
The new algorithm was built by studying the interactions users have had over the previous three months: videos erroneously reported as dangerous can now be retroactively re-evaluated, giving creators the possibility of recovering monetization in retrospect.
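The retroactive re-evaluation described above can be pictured with a small sketch: previously flagged videos are re-scored by the improved classifier, and withheld earnings are returned. Everything here (class names, fields, the classifier itself) is a hypothetical illustration, not YouTube's implementation.

```python
# Hypothetical sketch of retroactive re-evaluation of flagged videos.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    flagged: bool
    withheld_revenue: float  # ad revenue held back while the video was flagged

def reevaluate(videos: list, improved_classifier) -> float:
    """Re-run an improved classifier over flagged videos; unflag the ones
    it now accepts and return the total revenue restored to creators."""
    restored = 0.0
    for v in videos:
        if v.flagged and improved_classifier(v.title):
            v.flagged = False
            restored += v.withheld_revenue
    return restored

# Usage: an (invented) classifier that no longer flags news coverage.
videos = [
    Video("Terror attack: on-the-ground news report", flagged=True, withheld_revenue=120.0),
    Video("Extremist propaganda clip", flagged=True, withheld_revenue=40.0),
]
print(reevaluate(videos, lambda title: "news" in title.lower()))  # 120.0
```

Only the misclassified news report is restored; content the new classifier still rejects stays demonetized.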
In any case, Google continues to encourage users to report penalties they judge incorrect: the new algorithm represents an improvement, but, by YouTube's own admission, it is still imperfect.
The Adpocalypse is not yet averted
It cannot therefore be said that this algorithm has brought significant improvement. There is still a risk that the channels with the most subscribers will move to other platforms (such as Twitch) or even to platforms of their own, while smaller creators continue to tell their audiences that they may shut down their channels if no significant changes are made to meet creators' needs.