
Meta Has Misled Its Users and the Public by Boasting a Low Prevalence of Harmful Content

by Save the Kids From Meta (@metaeatsbrains) | November 2nd, 2023

Too Long; Didn't Read

Meta has used its Community Standards Enforcement Reports to present a false image of safety on its platforms, emphasizing low prevalence rates of harmful content. Despite these claims, users, including young users, continue to encounter harmful content. The filing characterizes the Reports and Meta's accompanying public statements as attempts to downplay widespread harms on Instagram and Facebook, especially for young users.


The United States v. Meta Platforms court filing, dated October 24, 2023, is part of HackerNoon’s Legal PDF Series. You can jump to any part of this filing here. This is part 26 of 100.

C. Meta has misled its users and the public by boasting a low prevalence of harmful content on its Social Media Platforms [Redacted]

458. Through its public representations, Meta has created the false impression that Facebook and Instagram are safe Platforms on which users rarely encounter harmful content. [Redacted]


459. In the face of criticism from parents, experts, and policymakers that its Social Media Platforms are harmful for young users, Meta has endeavored to persuade its users and the broader public that its Social Media Platforms are safe and suitable for young users.


460. To that end, Meta regularly publishes Community Standards Enforcement Reports (CSER or Reports) that boast very low rates of violations of its community standards [Redacted]


461. The Reports, published quarterly, describe the percentage of content posted on Instagram and Facebook that Meta removes for violating Instagram and Facebook’s Community Standards or Guidelines. Meta often refers to that percentage as its “prevalence” metric.


462. Meta often amplifies the reach of the Reports and its “prevalence” metrics by announcing them through press releases, distributing them in advance to members of the press, and holding conference calls with the press to tout their release.


463. [Redacted]


464. Meta has publicly represented that the “prevalence” statistics in the Reports are a reliable measure of the safety of its Social Media Platforms—even going so far as to assert that the CSER “prevalence” numbers were “the internet’s equivalent” of scientific measurements utilized by environmental regulators to assess the levels of harmful pollutants in the air. For example, in a May 23, 2019 post on its website entitled “Measuring Prevalence of Violating Content on Facebook,” Meta stated the following:


One of the most significant metrics we provide in the Community Standards Enforcement Report is prevalence. . . . We care most about how often content that violates our standards is actually seen relative to the total amount of times any content is seen on Facebook. This is similar to measuring concentration of pollutants in the air we breathe. When measuring air quality, environmental regulators look to see what percent of air is Nitrogen Dioxide to determine how much is harmful to people. Prevalence is the internet’s equivalent — a measurement of what percent of times someone sees something that is harmful. [Second emphasis added.]
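The metric described in that quote is view-weighted: it counts how often violating content is seen, not how many violating posts exist. As an illustrative sketch only, with hypothetical figures and a function name of our own (nothing here is Meta's actual pipeline or data), the calculation reduces to a simple ratio:

```python
# Illustrative sketch of a view-weighted "prevalence" metric.
# All figures are hypothetical; this is not Meta's pipeline or data.

def prevalence_pct(violating_views: int, total_views: int) -> float:
    """Percent of all content views that were views of violating content."""
    if total_views <= 0:
        raise ValueError("total_views must be positive")
    return 100.0 * violating_views / total_views

# Example: 500,000 views of violating content out of 1 billion total views.
print(f"{prevalence_pct(500_000, 1_000_000_000):.2f}%")  # prints 0.05%
```

The design choice the quote emphasizes is the denominator: weighting by total views, like measuring a pollutant's concentration in air, rather than counting violating posts outright.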


465. Zuckerberg told Congress on March 25, 2021 that Meta’s “prevalence” numbers serve as a “model” for companies’ transparency efforts.


466. The Reports are intentionally used by Meta to create the impression that because Meta aggressively enforces its Community Standards—thereby reducing the “prevalence” of community-standards-violating content—Meta’s Social Media Platforms are safe products that only rarely expose users (including young users) to harmful content and harmful experiences.


467-471. [Redacted]


472. Nevertheless, Meta publicly represents that Instagram and Facebook are safe because Meta enforces its Community Standards.


473. For example, the third quarter 2019 Report touts Meta’s “Progress to Help Keep People Safe.” Likewise, the second quarter 2023 Report states that “[w]e publish the Community Standards Enforcement Report . . . to more effectively track our progress and demonstrate our continued commitment to making Facebook and Instagram safe.”


474. Each of the Reports—whether or not it contains an express representation about safety—creates the net impression that harmful content is not “prevalent” on Meta’s Platforms and that the Platforms are therefore safe for users, including young users.


475-479. [Redacted]


480. The impression that the Reports create—that Meta’s Platforms are safe and users only rarely encounter harmful content—is false and misleading.


481. Meta’s third quarter 2021 Report estimated that on Instagram, “less than 0.05% of views were of content that violated our standards against Suicide & Self-Injury.” That representation created the impression that it was very rare for users to experience content relating to suicide and self-injury on Instagram.
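For a sense of scale, and strictly as a hedged aside (the daily-view total below is an assumption chosen for round arithmetic, not a figure from the filing or from Meta), a 0.05% view rate works out to 1 view in 2,000:

```python
# Hypothetical scale illustration; daily_views is an assumed figure,
# not taken from the filing or from Meta.

rate = 0.05 / 100            # "0.05% of views" as a fraction
daily_views = 1_000_000_000  # assumed total daily content views

print(f"1 violating view per {1 / rate:,.0f} views")     # 1 per 2,000
print(f"~{rate * daily_views:,.0f} such views per day")  # ~500,000
```

At any large denominator, a rate that sounds negligible can still correspond to a large absolute number of views.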


482. [Redacted]


483. In other words, while a reader of the Reports could reasonably understand that self-harm content on Instagram is rarely encountered by users—far less than 1% of the time— [Redacted]


484-489. [Redacted]


490. The third quarter 2021 Report concluded that only “0.05-0.06%” of views on Instagram were of content that violated Meta’s standards on bullying and harassment. This representation created the impression that it was very rare for users to experience bullying or harassment on Instagram.


491-497. [Redacted]


498. [Redacted] Meta affirmatively misrepresented that fact through its Reports.


499. Meta’s Reports similarly misrepresented the frequency with which its users experienced harmful content on Facebook. For example, in its Report for the fourth quarter of 2020, Meta represented that only about 0.05% of views of content on Facebook were of violent and graphic content. [Redacted]


500. [Redacted]


501. Relatedly, Zuckerberg’s public statements about “prevalence” of harmful content creates a misleading picture regarding the harmfulness of Meta’s Social Media Platforms. Zuckerberg and other company leaders focus on “prevalence” metrics in public communications because those metrics create a distorted picture about the safety of Meta’s Social Media Platforms.


502-506. [Redacted]


507. On information and belief, Meta issued the Reports and made other public statements to minimize the public’s awareness of the harmful experiences that are widespread on Instagram and Facebook—particularly for young users.



Continue Reading Here.


About HackerNoon Legal PDF Series: We bring you the most important technical and insightful public domain court case filings.


This court case, 4:23-cv-05448, retrieved on October 25, 2023, from Washingtonpost.com, is part of the public domain. Court-created documents are works of the federal government and, under copyright law, are automatically placed in the public domain and may be shared without legal restriction.