The lead image for this article was generated by HackerNoon's AI Image Generator via the prompt "evil robot"
Earlier this year, Scottsdale, Arizona mom Jennifer DeStefano experienced a terror no mother should ever have to face — the sound of her daughter’s sobbing voice crying that she’d been kidnapped.
But it wasn’t her daughter on the phone. It was an AI deepfake so convincing that DeStefano was prepared to hand over $50K to the scammers, who told her they would kill her daughter if she didn’t pay up.
Today, DeStefano gave heartfelt testimony before the US Senate, relaying her harrowing story to the Judiciary Subcommittee on Human Rights and the Law.
“The longer this form of terror remains unpunishable, the farther more egregious it will become. There’s no limit to the depth of evil AI can enable”
“Artificial intelligence is being weaponized to not only invoke fear and terror in the American public, but in the global community at large as it capitalizes on, and redefines, what we have known as familiar,” said DeStefano.
“AI is revolutionizing and unraveling the very foundation of our social fabric by creating doubt and fear in what was once never questioned — the sound of a loved one’s voice,” she added.
After recounting her horrific experience with the kidnapping and extortion scammers, a story she first told AZ Family back in April, the Arizona mom explained just how real the deepfake voice clone sounded:
“It was my daughter’s voice. It was her cries; it was her sobs. It was the way she spoke. I will never be able to shake that voice and the desperate cries for help out of my mind.”
“No longer can we trust ‘seeing is believing,’ or ‘I heard it with my own ears,’ or even the sound of your own child’s voice“
Opining on the future of generative AI used for nefarious purposes, DeStefano warned, “The longer this form of terror remains unpunishable, the farther more egregious it will become. There’s no limit to the depth of evil AI can enable.”
She went on to say, “As our world moves at a lightning-fast pace, the human element of familiarity that lays foundation to our social fabric of what is known and what is truth is being revolutionized with AI — some for good and some for evil.
“No longer can we trust ‘seeing is believing,’ or ‘I heard it with my own ears,’ or even the sound of your own child’s voice.”
When DeStefano found out that her daughter had not been kidnapped, she called the police, but they told her there was little they could do because it was just a prank call: no actual kidnapping had taken place and no money had changed hands.
“Is this our new normal?” she questioned.
“Is this the future we are creating by enabling the abuses of artificial intelligence without consequence and without regulation?”
“If left uncontrolled, unregulated, and we are left unprotected without consequence, it will rewrite our understanding and perception of what is and what is not truth.”
Following her opening remarks, DeStefano was not called upon for questioning until the very end of the hearing, where she reiterated that “not all AI is evil” and that there were many “hopeful advancements in AI” that could improve people’s lives.
While DeStefano clearly outlined the meaningful harm already arising from bad actors using AI in terrifying ways, Microsoft chief economist Michael Schwarz told the World Economic Forum (WEF) in May that AI shouldn’t be regulated until it caused meaningful harm.
“We shouldn’t regulate AI until we see some meaningful harm that is actually happening — not imaginary scenarios”
Microsoft Chief Economist Michael Schwarz at the WEF Growth Summit 2023
Speaking at the WEF Growth Summit 2023 during a panel on “Growth Hotspots: Harnessing the Generative AI Revolution,” Microsoft’s Michael Schwarz argued that when it comes to AI, it would be best not to regulate it until something bad happens, so as not to suppress the potentially greater benefits.
“I am quite confident that yes, AI will be used by bad actors; and yes, it will cause real damage; and yes, we have to be very careful and very vigilant,” Schwarz told the WEF panel.
When asked about regulating generative AI, the Microsoft chief economist explained:
“What should be our philosophy about regulating AI? Clearly, we have to regulate it, and I think my philosophy there is very simple.
“We should regulate AI in a way where we don’t throw away the baby with the bathwater.
“So, I think that regulation should be based not on abstract principles.
“As an economist, I like efficiency, so first, we shouldn’t regulate AI until we see some meaningful harm that is actually happening — not imaginary scenarios,” he added.
On January 23, 2023, Microsoft extended its partnership with OpenAI — the creators of ChatGPT — investing an additional $10 billion on top of the “$1 billion Microsoft poured into OpenAI in 2019 and another round in 2021,” according to Bloomberg.
This article was originally published by Tim Hinchliffe on The Sociable.