As part of an ongoing effort to combat the spread of extremist content, Facebook has launched the Online Civil Courage Initiative in the U.K. By creating a dedicated support desk for local non-governmental organizations, the initiative gives those organizations a direct relationship with Facebook, as well as funding and training to ferret out and report extremist activity.
Last week, the company published a news post detailing how it attempts to thwart terrorism on its website. Chief among its methods is AI, already an omnipresent component of Facebook's inner workings. Through image matching, the neural networks behind its AI actively scan photos and videos to check whether they match previously posted terrorist media; a similar algorithm is applied to language to identify propaganda and praise of terrorism. Facebook's post also notes that terrorists tend to cluster together online just as they do in person, which allows the company's AI to follow threads of similarity across related pages, groups, profiles, and even inter-app activity, extending its searches to WhatsApp and Instagram as well.
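To make the image-matching idea concrete, here is a minimal, purely illustrative sketch of near-duplicate detection using a simple average hash. This is not Facebook's actual system; the function names, the tiny 4x4 "images," and the distance threshold are all hypothetical, chosen only to show the general technique of comparing new uploads against a catalog of known extremist media.

```python
# Illustrative sketch only -- NOT Facebook's production matcher.
# Technique: average hashing plus Hamming distance, a common
# near-duplicate image-detection approach.

def average_hash(pixels):
    """Hash a grayscale image (2-D list of 0-255 ints) to a bit string:
    each bit is 1 if that pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count the bits that differ between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_known_media(upload, known_hashes, threshold=2):
    """Flag an upload whose hash lies within `threshold` bits of any
    previously catalogued hash (i.e., a likely near-duplicate)."""
    h = average_hash(upload)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)

# A catalogued 4x4 image and a slightly altered re-upload of it.
known = [[200, 200, 10, 10],
         [200, 200, 10, 10],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]
reupload = [[198, 201, 12, 9],
            [199, 202, 11, 10],
            [9, 12, 198, 201],
            [11, 10, 199, 200]]
catalog = [average_hash(known)]
print(matches_known_media(reupload, catalog))  # True: flagged as near-duplicate
```

The key property is robustness to small edits: minor brightness or compression changes shift few hash bits, so a re-upload still lands within the distance threshold of the catalogued original.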
Facebook's ratcheted-up, all-fronts attack can be a vital tool in the fight against terrorism, especially as both the technology and the people behind it grow savvier, though the company holds no illusion of a fix-all solution.
We’ve been cautious, in part because we don’t want to suggest there is any easy technical fix. It is an enormous challenge to keep people safe on a platform used by nearly 2 billion people every month, posting and commenting in more than 80 languages in every corner of the globe.
Facebook's investment in both technology and people-to-people partnerships demonstrates its dedication to the cause. Given social media's outsized role in social movements both harmful and beneficial, more widespread initiatives of this kind are likely to follow.