
Apr 10 2019 - 02:27pm
Detecting Inappropriate Content

The recent mass shooting in New Zealand reignited the discussion about how companies remove inappropriate content; the shooter broadcast the attack via Facebook Live. Since Vialogues users upload a wide range of videos, I explored how big tech companies deal with inappropriate content and the mechanisms they use to remove it from their sites.

Here are my findings:

Inappropriate content may include:

  • Words or images that personally attack, humiliate, or defame an individual
  • Content that threatens, discriminates, or harasses
  • Fake profiles of an individual or organization
  • Content that is illegal, provides instructions for illegal activity, or advocates violence
  • Depictions of nudity, pornography, or child abuse
  • Depictions of excessive violence


How do Facebook and YouTube detect and remove inappropriate content?

  • Users reporting content to site administrators
  • Hiring content moderators to find and review questionable content
  • Investing in artificial intelligence, machine learning, deep learning, and automated detection software
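To give a concrete feel for the automated side, one common building block is matching new uploads against fingerprints of content already confirmed as violating policy. Real platforms use perceptual hashes (e.g., Microsoft's PhotoDNA) that survive re-encoding, but here is a minimal exact-match sketch in Python; the hash set and function name are my own illustration, not any platform's actual pipeline:

```python
import hashlib

# Hypothetical database of fingerprints of content already removed by moderators.
# Real systems store perceptual hashes, not plain SHA-256 digests.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"banned-clip-bytes").hexdigest(),
}

def flag_upload(data: bytes) -> bool:
    """Return True if this upload exactly matches known banned content."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_HASHES
```

An exact hash only catches byte-for-byte re-uploads, which is why the New Zealand video (re-encoded and cropped thousands of times) was so hard to suppress; that gap is what perceptual hashing and ML classifiers try to close.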

Software being used to recognize inappropriate content


Other Security Measures being used

  • Content/ Web Content Filtering
  • Intrusion Detection Systems / Intrusion Prevention Systems (IDS / IPS)
  • Access Control Lists (ACLs) on a network
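The first item, web content filtering, at its simplest means checking text against a blocklist of banned terms before it is published. This is a hypothetical sketch (the blocklist entries and function name are mine, not from any real filtering product):

```python
import re

# Hypothetical blocklist; a real filter would load thousands of terms
# and handle misspellings, leetspeak, and context.
BLOCKLIST = {"attack-word", "slur-example"}

def contains_blocked_term(text: str) -> bool:
    """Return True if any blocklisted term appears as a whole word."""
    words = re.findall(r"[a-z'-]+", text.lower())
    return any(word in BLOCKLIST for word in words)
```

Keyword filters are cheap but blunt, which is why sites layer them with the human review and ML approaches described above rather than relying on them alone.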


By: Ahmed Bagigah