L1ght Leverages AI To Defeat Online Toxicity

by Brian Wallace, October 23rd, 2019

Some people debate whether AI will be the ultimate win or the ultimate loss for humanity, so what could be better than AI developed purely for the sake of our children?

There is a dark side to the online experience that deeply concerns parents and children: its toxicity.

These forms of toxicity include cyberbullying, hate speech, relentless criticism, false rumors, inappropriate content, racism, sexism, and sexual predators. 

Online toxicity is also found in arenas beyond the open web, such as group chats and online forums (especially within the gaming community), as well as public groups on messaging services.

Sadly, young people don't tend to report it. As a result, it goes on for long periods of time, and they become long-term victims who silently take the abuse. Anonymity plays a major role as both motivator and enabler for toxic players: when one (fake) account closes, another opens to continue where the previous one left off.

Game publishers, console makers, and government agencies are aware of these issues and working to confront them, but they face setbacks.

Aside from the obvious complications stemming from modern technology and gaming practices, there are also concerns about freedom of speech and a lack of chargeable offenses under current law.

It's one of those things that are easier said than done: there are very few ways to punish perpetrators, because ambiguities within legal systems end up constraining the efforts of law enforcement in the era of online gaming and social media.

Plus, criminal charges may not be the right fix in every case. Sometimes the answer is mental-health assistance, intervention, or a dissuasion interview.

For years, all of these forms of online toxicity were on the rise, and there was no clear solution.

L1ght is an anti-toxicity startup that leverages artificial intelligence (big data, deep learning) and human knowledge to analyze and predict online toxicity.

It analyzes text, images, video, voice, and sound to protect children from harmful incidents. L1ght exists to be the standard of child safety in the new world, thereby defining the "Anti-Toxicity" category.

L1ght was founded by CEO Zohar Levkovitz and CTO Ron Porat in an effort to create a reality where harmful people are forced to consider the consequences, realizing toxicity has a cost. 

Just a few months after launch, and under its previous name "AntiToxin", L1ght made headlines for removing over 130K pedophiles from public groups on WhatsApp, and for getting Google and Facebook to purge apps that were monetizing links to questionable WhatsApp groups. They later got Bing to remove underage porn from its search results.

One of their products plugs directly into the back-end of popular games and applications, where it can defend millions of kids in nanoseconds. A second product helps hosting providers monitor and prevent toxic content across their millions of hosted sites.

The AI component is leagues ahead of the dictionary blacklists most of us are familiar with from social networks, as it was designed to think both like a kid and like a potential predator.

For instance, the system picks up on nuances in textual exchanges, distinguishing teenagers throwing 'fighting words' at one another while competing in a game from one teenager repeatedly victimizing another in a harassing manner while using words that seem harmless at first glance.
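The gap between a static blacklist and context-aware detection can be sketched in a toy example. Everything here is invented for illustration and says nothing about how L1ght's models actually work: the word list, the message format, and the repeated-targeting threshold are all assumptions, and a simple behavioral heuristic merely stands in for what a trained model might learn about harassment patterns.

```python
from collections import Counter

# Hypothetical banned-word list, the "dictionary blacklist" approach.
BLACKLIST = {"idiot", "loser"}

def blacklist_flags(messages):
    """Flag only messages containing a blacklisted word (naive approach)."""
    return [m for m in messages if BLACKLIST & set(m["text"].lower().split())]

def repeated_targeting_flags(messages, threshold=3):
    """Flag sender->recipient pairs with many one-directional messages,
    even when each individual message looks harmless in isolation."""
    counts = Counter((m["sender"], m["recipient"]) for m in messages)
    return {pair for pair, n in counts.items() if n >= threshold}

# Three messages from A to B: no banned words, but a clear pattern.
chat = [
    {"sender": "A", "recipient": "B", "text": "nobody likes you"},
    {"sender": "A", "recipient": "B", "text": "why are you even here"},
    {"sender": "A", "recipient": "B", "text": "just quit already"},
]

print(blacklist_flags(chat))           # [] -- no banned word appears
print(repeated_targeting_flags(chat))  # {('A', 'B')} -- pattern flagged
```

The blacklist sees nothing wrong with any single message, while the behavioral check notices the one-directional repetition, which is the kind of context a word list alone can never capture.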

With all factors considered, it's safe to say that L1ght may be on its way to making online toxicity a thing of the past, so parents can let their kids play any game, stream any service, and chat through any social app, and still be able to sleep well at night.