
Hooked on Algorithms: Meta's Impact on Mental Health

by Save the Kids From Meta, November 1st, 2023

Too Long; Didn't Read

Delve into the shadowy world of social media algorithms, particularly Meta's, and uncover their potential harm to users' mental well-being. Learn how deceptive design practices lurk behind the scenes of your favorite platforms.


The United States v. Meta Platforms court filing, dated October 24, 2023, is part of HackerNoon's Legal PDF Series. You can jump to any part of this filing here. This is part 18 of 100.

4. The Recommendation Algorithms are harmful to young users’ mental health, notwithstanding Meta’s representations to the contrary.

191. Meta falsely represents that its Recommendation Algorithms are benign and designed for young users’ well-being. For example, during a congressional hearing on March 25, 2021, Zuckerberg denied that Meta “make[s] money off creating an addiction to [its] platforms.” At the same hearing, Zuckerberg stated that “the way we design our algorithms is to encourage meaningful social interactions” and denied that Meta’s teams “have goals[] of trying to increase the amount of time that people spend [using Meta’s Platforms].”


192. Elsewhere, Meta has reiterated that its Recommendation Algorithms are optimized to yield “positive experience[s]” or “meaningful interactions” as opposed to maximizing “time spent” by users on the Platforms. For example, on September 30, 2021, Davis testified before Congress that Meta “made changes to our News Feed to allow for more meaningful interactions, knowing it would impact time spent” and that Meta did this “because we were trying to build a positive, more positive experience.”


193. But as described above, the Recommendation Algorithms are far from benign: they promote young users’ compulsive social media use in a sophisticated and individualized manner and are designed to capture and retain young users’ attention—often to the detriment of their mental and physical health.


194. These harms are pervasive and often measurable.


195-206. [Redacted]


207. Instagram researchers (who are ultimately funded by and report to Meta) have also observed that “[s]ocial comparison exacerbates problems teens are dealing with” in that, “[a]lthough others’ behaviors online can hurt, the self-scrutiny and anxiety associated with personal consumption patterns is more damaging to mental health.”


208-218. [Redacted]


219. [Redacted] But in its public communications with current and prospective users, Meta conceals these aspects of its Recommendation Algorithms.


220. Meta understands the psychologically manipulative nature of its Platforms’ functionality, has knowledge that its minimally constrained Recommendation Algorithms promote harmful content, and is aware that users “wish[] Instagram [gave] them better control over what [content] they [see].”


221. [Redacted]


222. At the same time Meta was prioritizing engagement over safety (and in turn, increasing its profits), Meta continued to insist that user well-being (especially teen well-being) was its top priority, including through a January 2018 statement by Zuckerberg that the company was “focused on making sure Facebook isn’t just fun to use, but also good for people’s wellbeing,” as reported by the Guardian.


223. For example, on October 5, 2021, Zuckerberg reacted to former Facebook product manager Frances Haugen’s whistleblower revelations and testimony to Congress—which sent Meta’s stock price down over 10% in the six weeks following the initial revelations—by publicly stating in a post on his Facebook profile: “At the heart of these accusations is this idea that we prioritize profit over safety and well-being. That’s just not true.”


224. Despite its knowledge that Meta’s Recommendation Algorithms harm young users’ health, Meta does not disclose these harms to young users or their parents in its public communications or in its user registration processes for its Social Media Platforms.


225. Meta denies that its Recommendation Algorithms are designed to be addictive and that the algorithms promote emotionally distressing content, but Meta knows that it designs its algorithms to be addictive and to promote such content. Meta’s misrepresentations and omissions regarding its Recommendation Algorithms’ promotion and amplification of harmful content deprive users, including the parents of young users, of informed decision-making authority regarding whether and how to engage with Meta’s Social Media Platforms.



Continue Reading Here.


About HackerNoon Legal PDF Series: We bring you the most important technical and insightful public domain court case filings.


This court case, 4:23-cv-05448, retrieved on October 25, 2023, from Washingtonpost.com, is part of the public domain. The court-created documents are works of the federal government and, under copyright law, are automatically placed in the public domain and may be shared without legal restriction.