Hacking Crime With AI [Infographic]
Founder @ NowSourcing. Contributor @ Hackernoon, Advisor @GoogleSmallBiz, Podcaster, infographics
Did you ever daydream about fighting crime like Batman? You know, using the Bat-computer to pull intel on a criminal, track their whereabouts, get alerted when a crime is in progress, or even remotely deploy robotic measures to stall or detain a villain? That fantasy is now closer to reality, thanks to a wave of AI-powered tech that fights crime.
AI is already used to predict all sorts of things: conversational responses, the weather, business markets, and a slew of other probabilities. Why not apply it to predicting crime? Law enforcement already has, with systems such as PredPol, which was developed by the LAPD and UCLA and launched in 2008.
Crime-prediction software adapts models that are already proficient at forecasting, namely epidemiological models (the spread of disease) and geological models (earthquake aftershocks), and trains them on historical crime data. Another tool in the arsenal is mass surveillance: AI systems can perform facial recognition, read license plates, and listen for gunshots, quickly informing police of the locations of known criminals and crimes in progress.
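To make the earthquake analogy concrete, here is a toy sketch of a self-exciting hotspot model. PredPol's actual algorithm is proprietary, so everything here, the function name, the decay parameters, the grid scoring, is an illustrative assumption: each past incident raises the risk of nearby map cells, with influence fading exponentially in time and as a Gaussian in distance, the same intuition behind earthquake-aftershock models.

```python
import math

def risk_scores(crimes, now, cells, omega=0.1, sigma=0.5):
    """Toy self-exciting hotspot model (illustrative only; not PredPol's
    actual proprietary algorithm). Each past crime contributes a risk
    "aftershock" to nearby cells that decays exponentially in time and as
    a Gaussian in distance.

    crimes: list of (x, y, t) past incidents
    cells:  list of (x, y) grid-cell centers to score
    """
    scores = []
    for cx, cy in cells:
        s = 0.0
        for x, y, t in crimes:
            dt = now - t
            if dt < 0:
                continue  # ignore events "from the future"
            d2 = (cx - x) ** 2 + (cy - y) ** 2
            s += math.exp(-omega * dt) * math.exp(-d2 / (2 * sigma ** 2))
        scores.append(((cx, cy), s))
    # Highest-scoring cells are the ones that would be flagged for patrols
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```

A cell with two recent incidents will outrank a cell with one older incident, which is exactly the "crime begets nearby crime" pattern these systems try to exploit.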
When law enforcement can’t be there, autonomous AI can step into the ring. Ford has recently developed a self-driving police car. It can detect traffic violations, pursue violators, identify the vehicle and driver, and even issue a warning or a ticket.
On the beat in Huntington Park, CA, we have the Knightscope K5, an autonomous robot patrolling the streets. It uses license plate readers, mobile device identifiers, and facial recognition to detect blacklisted individuals.
While the 500-pound robot has no physical means of stopping crime, it has proven invaluable for gathering evidence that leads to convictions, and for generally deterring crime.
Where should you avoid acting like a hoodlum? Well, everywhere, but over 50 police departments in the U.S. already use PredPol, as do some college campuses. One known flaw of facial recognition is its tendency to misidentify minorities, including Asians, African Americans, and Native Americans. AI crime prediction can also harbor existing biases or become self-fulfilling: when police increase enforcement in an area, the program reads the extra reports as evidence that the area is high-crime, which draws still more enforcement.
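That feedback loop can be shown with a deliberately simplified simulation, entirely a hypothetical construction, not any deployed system: two areas have identical true crime rates, but one starts with a single extra historical report. Because patrols follow recorded crime, and recorded crime follows patrols, the initial imbalance never washes out.

```python
def simulate(steps=20):
    """Toy model of the predictive-policing feedback loop (illustrative).

    Both areas have the same true crime rate; area 0 merely starts with
    one extra recorded incident. Observed crime scales with patrol
    presence, and patrols are reallocated toward recorded crime.
    """
    true_rate = [0.3, 0.3]   # genuinely identical underlying rates
    recorded = [1.0, 0.0]    # one extra historical report in area 0
    patrol = [0.5, 0.5]      # share of patrol attention per area
    for _ in range(steps):
        for a in (0, 1):
            # More patrols -> more crimes *observed*, not more occurring
            recorded[a] += true_rate[a] * patrol[a]
        total = sum(recorded)
        # The "predictive" step: send patrols where crime was recorded
        patrol = [recorded[0] / total, recorded[1] / total]
    return patrol
```

Running it, patrol attention locks in heavily on area 0 even though the areas are statistically identical, which is the self-fulfilling dynamic critics point to.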
Crime-fighting AI is not a futuristic cure-all; like most computer applications, it is limited by its input data and by how its output is interpreted. Even so, it has great potential for making our cities safer.