

Facebook’s AI To Delete Hate Messages Doesn’t Actually Work

Internal Facebook documents reveal that the company's tools for detecting content that violates its rules are inadequate for the task.

This matters because, for years, Mark Zuckerberg has justified relying on algorithms to fight illegal and abusive content on his platform, to the point that before 2020 the Facebook CEO assured that its Artificial Intelligence would be able to eliminate “most of the problematic content”. That way, Facebook does not have to hire as many human moderators and can leave platform moderation on “automatic.”

In practice, these algorithms miss most of the content that violates Facebook’s rules, and the company knows it, according to internal documents published by the Wall Street Journal. The figures from the internal investigation do not reflect well on the AI: it removed posts that accounted for only 3% to 5% of hate speech, and only 0.6% of the posts that violated the rules on violence.

These figures bear no resemblance to those Facebook itself publishes in its reports; according to the latest one, released in February to deny the company’s involvement in the assault on the US Capitol, its “super efficient” Artificial Intelligence had detected and deleted 97% of hateful posts, even before any human had flagged them.

The discrepancy comes as no surprise to the researchers and organizations that have studied Facebook, who have long warned that the company’s figures do not match those of third-party studies; they have also denounced the obstacles Facebook puts in the way of obtaining this type of data, and its lack of transparency about how it reaches its conclusions.

Facebook has responded to the Wall Street Journal’s publication, claiming that the documents are old and outdated, and that they demonstrate its work is “a journey of several years.”

Its defense has centered on the argument that it is “more important” to look at how hate speech on Facebook is being reduced overall than at how much hateful content is removed. That is why the company considers prevalence the most “objective” metric, since it represents the content that has ‘escaped’ its systems; by Facebook’s own data, prevalence has fallen by 50% over the last three quarters.

In effect, Facebook would be tacitly accepting that it is not capable of removing hateful content from its platform, arguing instead that what matters is that very few people see it.

This is by no means the first time this month alone that Facebook has had to defend itself against its own internal studies; a former Facebook employee leaked documents suggesting that Facebook does little against hateful content because doing so would hurt it financially.


