
Facebook’s AI To Delete Hate Messages Doesn’t Actually Work

Internal Facebook documents reveal that the tools for detecting content that violates the rules are inadequate for the task.

This matters because, for years, Mark Zuckerberg has justified the use of algorithms to fight illegal and abusive content on his platform, to the point that the Facebook CEO assured, before 2020, that its Artificial Intelligence would be able to eliminate “most of the problematic content”. That way, Facebook avoids hiring as many human moderators and can leave the moderation of its platform on “automatic.”

In practice, these algorithms miss most of the content that violates Facebook’s rules, and the company knows it, according to internal documents published by the Wall Street Journal. The figures from the internal investigation do not reflect well on the AI: it removed posts accounting for only between 3% and 5% of hate speech, and only 0.6% of posts that violated the rules on violence.

These figures bear no resemblance to the ones Facebook itself publishes in its reports; according to the latest one, released in February to deny its involvement in the assault on the US Capitol, the “super efficient” Artificial Intelligence had supposedly detected and deleted 97% of hateful posts, even before any human had seen them.

The gap comes as no surprise to researchers and organizations that have studied Facebook, who have long warned that the company’s figures do not match those of third-party studies; they have also denounced the obstacles Facebook places in the way of obtaining this kind of data, and its lack of transparency about how it reaches its conclusions.

Facebook has responded to the Wall Street Journal publication, claiming that these are old and outdated documents, and that they demonstrate that its work is “a journey of several years.”

Its defense has centered on the claim that it is “more important” to look at how hate speech overall is being reduced on Facebook than at how much hateful content is removed. That is why the company considers prevalence the most “objective” metric, since it represents the content that has ‘escaped’ its filters; by its own figures, prevalence has been reduced by 50% over the last three quarters.

In effect, Facebook would thus be tacitly accepting that it is not capable of removing hateful content from its platform, and arguing instead that what matters is that very few people end up seeing it.

This is by no means the first time this month alone that Facebook has had to defend itself against its own internal studies; a former Facebook employee leaked documents purporting to show that Facebook does nothing about hateful content because doing so would hurt it financially.
