Meta ‘misled’ the public through a campaign that downplayed the amount of harmful content on Instagram and Facebook, court documents show



  • A newly unsealed complaint accuses Meta of misleading the public about harmful content on its platforms.
  • While Meta publicly cites low rates of violating content, internal data shows higher rates.
  • Meta used these public reports to convince people that its platforms were safer than they actually were, the complaint alleges.

Meta may have significantly understated the prevalence of misinformation, hate speech, discrimination, and other harmful content on its platforms, according to a newly unsealed complaint filed against the company by 33 states.

The complaint alleges that Meta produces quarterly reports, known as the Community Standards Enforcement Report (CSER), that show minimal violations of community standards on its platforms while excluding key data from user-experience surveys about how often users encounter harmful content.

For example, Meta reported only 10 to 11 views of hate speech for every 10,000 content views on its platforms, or about 0.10% to 0.11%, for July through September 2020 in its CSER. The CSER defines hate speech as “violent or dehumanizing speech, expressions of inferiority, calls for exclusion or segregation based on protected characteristics or insults.”
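The arithmetic behind that figure can be sketched as follows. This is an illustrative calculation only, assuming the simple views-based ratio described in the article; the function name `prevalence` is hypothetical and not Meta's code.

```python
def prevalence(violating_views: int, total_views: int) -> float:
    """Illustrative: violating content views as a percentage of total views."""
    return 100 * violating_views / total_views

# The CSER figure cited above: 10 to 11 hate-speech views per 10,000 views.
low = prevalence(10, 10_000)   # 0.10
high = prevalence(11, 10_000)  # 0.11
print(f"{low:.2f}% to {high:.2f}%")  # prints "0.10% to 0.11%"
```

Note that this metric counts views, not distinct posts or users, which is part of why it can diverge from survey-based measures of how many users report encountering such content.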

But the complaint cites data from Meta's internal user survey, known as TRIPS, which an internal Instagram memo once called "our north star, our ground-truth barometer." Months earlier, TRIPS had reported significantly higher rates of hate speech: an average of 19.3% of users on Instagram and 17.6% of users on Facebook reported witnessing hate speech or discrimination as of May 2020, according to the TRIPS report.

Similarly, an average of 12.2% of Instagram users and 16.6% of Facebook users reported seeing graphic violence on those platforms, and more than 20% of users reported seeing bullying and harassment, according to the TRIPS report.

In the CSER, Meta defines graphic violence as content on Facebook and Instagram "that glorifies violence or glorifies the pain or humiliation of others." It also notes that bullying and harassment are "very personal in nature," so "using technology to proactively identify these behaviors can be more challenging than other types of abuse."

The complaint — which cites several other statistics on harmful content gathered from internal reports — argues that Meta hid these figures and used public reports such as CSER “to create a net impression that harmful content is not ‘proliferated’ on the platforms.”

The complaint, which the New York Times reported was compiled using "excerpts from internal emails, employee conversations and company presentations," did not go into much detail on the methodology of surveys such as TRIPS, or of another it referred to as the Bad Experiences and Encounters Framework (BEEF). It noted only that both were "robust surveys" that asked users about issues such as suicide and self-harm, negative comparisons, misinformation, bullying, unwanted sexual advances, and hate speech or discrimination.

A Meta spokesperson told Business Insider in an email that the data collected from these surveys does not measure "prevalence" — Meta's term for the percentage of views of violating content out of total views — in the way the complaint alleges. Instead, the company said it uses the survey data to develop features such as harmful-comment notifications (launched in 2020) and kindness reminders that encourage people to act respectfully when interacting with strangers (launched in 2022).

“We’ve spent a decade working on these issues and employing people dedicated to keeping young people safe and supported online,” a Meta spokesperson said in an email. “The complaint misrepresents our work by using selective quotes and cherry-picked documents.”


