YouTube has updated its moderation techniques and regulations in response to multiple content creators’ concerns about an increase in spam, bots, and abusive language in comments.
The changes improve spam detection in comments: the automated system removed over 1.1 billion spam comments in the first half of this year.
However, some spammers have since adapted their tactics, so YouTube has integrated machine-learning models to counter them more effectively.
The same automated detection has also been extended to the chat section of live streams.
YouTube is also introducing removal warnings and timeouts for abusive comments from human users. Comments that break community guidelines will be removed, and their authors will receive a warning. If the same user continues posting abusive comments, they will be timed out for 24 hours. In beta testing, these measures reduced repeat offenses.
Furthermore, YouTube will now show an estimate of when an uploaded video will finish processing and become available in full resolution, whether 1080p, 2160p, or 4320p.