
Lessons From Next-Gen Social: Strategies for User-Centric AI Deployment

by Annie Brown, May 29th, 2024

Too Long; Didn't Read

Social media platforms like Lips, Landing, and Diem are addressing AI challenges in data privacy and bias through user-centric data annotation and ethical AI practices. Engaging users in moderation and recommendation processes improves transparency and inclusivity. Innovative strategies like federated learning and differential privacy help maintain user trust and privacy. These approaches set new standards for responsible AI deployment in social media.


As a social media founder who scaled my platform, Lips, to 50,000 users, I knew early on that I would need AI to automate moderation and recommendation. The decision to do so, however, came with daunting challenges around data privacy, bias, and implementation. And as regulations around ethical AI evolved, platforms like mine were left without practical guidance on how to comply.


In 2021, my team and I built a participatory data annotation solution, implemented it in Lips, and soon began customizing the approach for other platforms. In my work as an AI researcher at the University of California San Diego, I found that bringing users into the process of training algorithms via data annotation not only improved ML transparency but also reduced instances of bias, particularly in large-scale moderation algorithms.


As I continued this research, I spoke with several platform owners who were also facing the challenge of ethical AI implementation. Many of them expressed similar concerns about the ethical implications of AI, especially in the context of content moderation. With the ever-growing volume of user-generated content, keeping platforms safe and inclusive has become a top priority.

Bringing Users Into the ML Pipeline

User-Centric ML

One of the key insights I gleaned from these conversations was the importance of involving users in the moderation and recommendation processes. By empowering users to contribute to the annotation and labeling of content, platforms can not only improve transparency but also mitigate biases inherent in automated moderation systems.
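
To make this concrete, here is a minimal sketch of one way user-contributed labels might be aggregated before they reach a training set: a simple majority vote with participation and agreement thresholds. The function, thresholds, and labels are illustrative assumptions, not the actual pipeline of any platform mentioned here.

```python
from collections import Counter

def aggregate_user_labels(annotations: list[str], min_votes: int = 3,
                          min_agreement: float = 0.66):
    """Resolve one content item's user-contributed labels by majority vote.

    Returns the winning label only when enough annotators participated
    and a clear majority agrees; otherwise defers the item.
    """
    if len(annotations) < min_votes:
        return None  # not enough participation yet; keep item in the queue
    label, count = Counter(annotations).most_common(1)[0]
    if count / len(annotations) >= min_agreement:
        return label
    return None  # annotators disagree; escalate to a human moderator

# Example: five users annotated the same post
print(aggregate_user_labels(["safe", "safe", "harassment", "safe", "safe"]))  # -> "safe"
```

Deferring low-agreement items to human review is one simple way to keep contested content out of the training data, which is where annotation bias tends to creep in.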


By engaging users in the process, platforms can foster a sense of ownership and accountability among their user communities, ultimately leading to safer and more inclusive online environments.


For example, Landing is a community-powered social platform that enables users to create and share virtual mood boards and collages. As Miri Buckland, COO of Landing, told me:


“Our primary use case for algorithms/machine learning all center around personalization, both in surfacing content that is relevant to you but also in assisting you during the creation process with suggestions and a helping hand based on where you can use it most. In support of personalization and creation assistance, we will also be leveraging AI to assign relevant data to visual assets on platform to supplement user generated information.”


Diem is another next-gen social media platform that is thinking about AI in fresh ways. Diem’s CEO Emma Bates describes the platform as:


“A social search engine, powered by community conversations & AI.” Bates continues, “members in the community can ask Diem AI their personal, pressing, funny and important questions to discover both factual information (AI summary) and validation (community conversations).”


With regard to Diem’s future plans for AI, Bates says, “We will see Diem AI evolve into a guiding light on your discovery journey in Diem (a big sister if you will!) We’re also exploring using AI/ML to help present conversational data differently - in a less hierarchical way - for personalization and of course, for moderation.”


Large language models (LLMs) such as OpenAI's GPT are driving the transformation of how social platforms engage with user-generated content. Their capacity to generate text summaries, suggest recommendations, and guide content creation, all grounded in the extensive datasets they are trained on, is fundamental to managing the enormous volume of data involved in personalizing experiences and moderating content.


LLMs are also reshaping user engagement. Thanks to transfer learning, a model initially trained for one task can be repurposed to personalize user feeds, surfacing content that aligns closely with individual preferences. This tailored approach leads to higher user satisfaction and longer engagement on the platform.
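
As a minimal PyTorch sketch of that transfer-learning pattern, the snippet below freezes a pretrained text encoder and trains only a small ranking head on one user's engagement signals. The encoder here is a randomly initialized stand-in so the example stays self-contained; in practice it would be a real pretrained model.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained text encoder (random weights keep this runnable).
encoder = nn.Sequential(nn.Linear(768, 768), nn.ReLU())
for p in encoder.parameters():
    p.requires_grad = False  # transfer learning: freeze the pretrained backbone

# Small trainable head that scores a post embedding for one user's feed.
ranking_head = nn.Sequential(nn.Linear(768, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(ranking_head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Toy engagement data: 32 post embeddings; 1 = user engaged, 0 = skipped.
post_embeddings = torch.randn(32, 768)
engaged = torch.randint(0, 2, (32, 1)).float()

for _ in range(10):  # a few quick training steps on the head only
    scores = ranking_head(encoder(post_embeddings))
    loss = loss_fn(scores, engaged)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because only the small head is trained, each user's personalization can be learned cheaply while the expensive pretrained representation is shared across everyone.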


Embedded training is another leading-edge approach that integrates the AI training process directly with the devices running the models. This method empowers the models to learn and adapt on the fly by interacting directly with the environment and data they are designed to manage.


By localizing data processing on the device, embedded training curtails the need for data transmission to centralized servers, significantly accelerating training and bolstering data privacy.


This training method is particularly beneficial in applications such as personal assistants, autonomous vehicles, and IoT devices, where prompt data processing is indispensable. Embedded training not only boosts response times but also curbs bandwidth usage and upholds user privacy by keeping sensitive data on the device itself.
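
A minimal sketch of that idea, assuming a simple on-device personalization model in PyTorch: each local interaction triggers one gradient step on the device, and no raw event data is ever sent upstream. The class, feature shapes, and learning rate are illustrative.

```python
import torch
import torch.nn as nn

class OnDevicePersonalizer:
    """Sketch of embedded training: the model adapts on the device itself,
    so raw interaction data is never transmitted to a central server."""

    def __init__(self, dim: int = 16):
        self.model = nn.Linear(dim, 1)
        self.opt = torch.optim.SGD(self.model.parameters(), lr=0.01)

    def observe(self, features: torch.Tensor, label: torch.Tensor):
        # One local gradient step per interaction; data stays on device.
        pred = self.model(features)
        loss = nn.functional.binary_cross_entropy_with_logits(pred, label)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

# Simulated stream of local interactions
device_model = OnDevicePersonalizer()
for _ in range(100):
    x = torch.randn(1, 16)        # e.g., features of a local event
    y = torch.rand(1, 1).round()  # e.g., did the user act on it?
    device_model.observe(x, y)
```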

Efficiency Gains From Large Language Models

Social media platforms need to be at the forefront of adopting and refining LLMs due to their significant role in shaping public discourse and their massive user bases. The ability of LLMs to efficiently process and generate natural language can revolutionize how content is managed on these platforms, enhancing user interactions and content relevancy.


The rapidly evolving landscape of model architectures and training techniques is spearheading innovation in AI, enabling LLMs to reach unprecedented accuracy with minimal data. Strategies such as few-shot learning and transfer learning have upended the conventional requirement for extensive data, making accurate predictions possible from just a handful of well-chosen examples and knowledge carried over from related tasks.


The growing computational power of GPUs and TPUs, combined with algorithmic improvements, has considerably compressed model training times, accelerating the creation of capable models. This speed in training, together with the reduced need for data, dismantles barriers and makes sophisticated AI solutions feasible for businesses of any size.


In particular, LLMs are playing a pivotal role in real-time content moderation for social media platforms. As these platforms grapple with increasing accountability for handling harmful and misleading content, the adaptability of LLMs, armed with few-shot learning, emerges as a formidable asset. These models can swiftly acclimate to novel types of problematic content, eliminating the need for exhaustive retraining.


Thus, they provide an agile, dynamic response to the ongoing challenges in content moderation.
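
Here is a minimal sketch of what few-shot moderation can look like in practice: a handful of labeled examples is packed into the prompt so the model can classify a newly emerging category of problematic content without any retraining. The examples, labels, and prompt format are illustrative; the assembled prompt would be sent to whichever LLM API a platform uses.

```python
FEW_SHOT_EXAMPLES = [
    # A few labeled examples in the prompt replace full retraining
    # when a new category of problematic content appears.
    ("You played great today, hard luck!", "allow"),
    ("Nobody wants you here, just quit.", "remove: harassment"),
    ("Miracle pills cure everything, DM me to buy.", "remove: scam"),
]

def build_moderation_prompt(new_post: str) -> str:
    """Assemble a few-shot classification prompt for an LLM."""
    lines = ["Classify each post as 'allow' or 'remove: <reason>'.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.extend([f"Post: {text}", f"Label: {label}", ""])
    lines.extend([f"Post: {new_post}", "Label:"])
    return "\n".join(lines)

# The model's completion of this prompt is the moderation decision.
print(build_moderation_prompt("Send me $50 and I'll triple it overnight."))
```

Updating the example list is a policy change a moderation team can make in minutes, which is precisely the agility the retraining-heavy classifier approach lacks.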

Ethical Considerations

By actively participating in the development of AI governance frameworks, social media platforms can help set industry standards that balance innovation with responsibility. This proactive approach not only protects users but also ensures that the company's AI implementations are sustainable and legally compliant.


Next-generation social platforms have long recognized the critical role that data privacy and transparency play in building user trust and ensuring ethical AI implementation. These platforms are adopting innovative approaches to address these concerns, setting a positive precedent along the way.


One such approach involves implementing privacy-preserving techniques such as federated learning and differential privacy. Federated learning allows AI models to be trained across multiple decentralized devices without raw data leaving users' devices, thus preserving their privacy.
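
A minimal NumPy sketch of federated averaging, the core idea behind federated learning: each simulated device takes a gradient step on its own private data, and only the resulting weights, never the raw data, are sent back to be averaged into the global model. The linear-regression task and hyperparameters are illustrative.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a device's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad  # only these weights leave the device

rng = np.random.default_rng(0)
global_weights = np.zeros(5)

for round_ in range(20):  # each round: broadcast, local training, average
    client_weights = []
    for _ in range(3):  # three simulated devices with private data
        X = rng.normal(size=(50, 5))
        y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=50)
        client_weights.append(local_update(global_weights, X, y))
    global_weights = np.mean(client_weights, axis=0)  # FedAvg: average the updates

print(global_weights.round(2))  # approaches the true coefficients
```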


Differential privacy, on the other hand, adds noise to data before it is analyzed, making it difficult to identify individual user information while still providing valuable insights.
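
A minimal sketch of the Laplace mechanism, one standard way differential privacy adds that noise: a counting query over user data receives noise scaled to the query's sensitivity, masking any individual's contribution while keeping the aggregate useful. The epsilon value here is an illustrative choice.

```python
import numpy as np

def private_count(values: list, epsilon: float = 0.5) -> float:
    """Differentially private count via the Laplace mechanism.

    A counting query changes by at most 1 when any single user is added
    or removed (sensitivity = 1), so noise drawn from Laplace(1/epsilon)
    masks each individual's contribution.
    """
    true_count = sum(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many users engaged with a post, without exposing who did
engaged = [True] * 840 + [False] * 160
print(private_count(engaged))  # ~840, off by a few due to privacy noise
```

Smaller epsilon means more noise and stronger privacy; the tradeoff between the two is the central design decision when deploying this technique.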


Additionally, next-generation platforms are prioritizing transparency by providing users with greater visibility into how their data is being used and by whom. This includes clear and concise explanations of AI algorithms and moderation processes, as well as opportunities for users to opt out of certain data collection practices.


Kintsugi, a Web3 community for anime creators, is one platform developing methods of giving users more agency over their data.


Ron Scovil, CEO at Kintsugi, explains, “When streaming users think about their data, they often ask ‘why am I paying to watch commercials?’ Opting in would allow users to spend less money out of pocket, and give them the option to make only certain information available. There are certain advertisers and sponsors that are willing to pay for that information, and as an individual, you can end up saving - or even making - more money.”


Moreover, these platforms are embracing a culture of collaboration and open dialogue with users and stakeholders to co-create solutions that address ethical concerns around AI bias and moderation. By soliciting feedback and input from diverse perspectives, these platforms are better able to identify and mitigate potential biases in their algorithms and moderation practices.


“A big concern we have with AI/ML as a whole is that models are trained on ‘default male’ data,” says Bates. “This is why we created Diem AI—the database we’re accruing doesn’t exist anywhere else which is a huge opportunity to build something more inclusive for our community.”


Buckland adds, “We’re super focused on balancing the power of AI with maintaining that feeling of creating something. For example, we don’t see a world where you create a collage in one click, but we do see endless possibilities for ways in which we can leverage AI and our vast unique data set of trends, aesthetics, products and users to make smart recommendations during the creative process and the browsing experience.”

Lessons for Enterprise

It's evident that there's a wealth of knowledge to be gained from the innovative strategies employed by next-generation social media platforms. These trailblazers have showcased the massive potential of involving users in the AI process, leading to significant enhancements in algorithmic outputs and solidifying community trust in machine learning technologies.


The cutting-edge, proactive approach these platforms are taking toward data privacy and AI bias mitigation is setting a trend for the entire industry. By emphasizing the values of transparency, accountability, and user empowerment, they're not only creating safer and more inclusive online communities but also raising the bar for responsible AI usage in our digital world.


One notable practice is enabling users to contribute directly to data labeling, ensuring AI models reflect a broader range of perspectives and experiences and mitigating the risk of perpetuating biases. This level of engagement cultivates a sense of ownership and accountability among users, often culminating in elevated levels of trust in the platform.


Businesses taking similar routes can deploy intelligent systems more swiftly, adeptly adapting to real-time changes and offering superior user experiences. Particularly in fields like healthcare, real-time data processing can aid in quicker diagnoses and the development of personalized treatment plans.


By embracing privacy-preserving technologies like federated learning and differential privacy, these platforms manage to train AI models without infringing upon user privacy – a crucial practice when handling sensitive data.


Being transparent about data usage, AI model operations, and steps taken for fairness in AI outputs helps to unravel the complexities of AI processes for users. When companies like Landing involve their communities in the AI development process, they reveal how AI can enhance user experiences while respecting individual data.


As we build AI-driven systems, it is crucial to prioritize user-centric approaches that emphasize transparency, privacy, and participation. By learning from the practices of pioneering social media platforms, we can design more ethical and effective AI systems that not only respect user data but also enrich the user experience.


This proactive approach will set a new standard for responsible AI use, ensuring that enterprises using this technology are inclusive and empowering for all users.