Doc Huston


News — At The Edge — 3/24

Our future conundrums today — social media, data, laws and drone swarms — have a queasy and ominous feel to them.



Everything is terrible: an explanation —

“Facebook is a breeding ground for fake news and polarized outrage, accused of corrupting democracy and…[Twitter] has become a seething battleground of widespread, targeted abuse…[while] YouTube videos are messing with the minds of children…[and YouTube] decided to pass the buck to Wikipedia, without telling them….

What the hell is going on?…[There are three] somewhat related theories…

  1. Uncanny Social Valley Theory ….’Social media is poison’…[because] online interactions…[reduce] people to awful caricatures of themselves… in the form of hastily constructed, low-context text…amplified by the outrage…algorithms which think that ‘engagement’ is the highest goal….
[But] this isn’t just a story of lack of compassion…[rather] genuinely awful people doing truly, genuinely awful things….

  2. Intransigent Asshole Theory…. [The] Internet was always full of awful….[Now] the assholes are more organized, their victims are often knowingly and strategically targeted, and many seem to have calcified from assholedom into actual evil….

[So] if only 3% of the online population really wants [it]…to be horrible, ultimately they can force it to be, because the other 97% can…live with a world in which the Internet is often basically a cesspool, whereas those 3% apparently cannot live with a world in which it is not….

  3. Outrage Machine Money Maker…. Outrage equals engagement equals profit…[since the] days of yellow journalism and ‘if it bleeds, it leads.’ This is explicit for the politically motivated, for Russian trolls and…hate groups…[and] for Facebook and Twitter and YouTube [who]…rake in huge amounts of money….

[Since] the social costs are immense … they have to be externalized…[thus] YouTube deciding that Wikipedia is the solution…[which] simply doesn’t scale….

[T]he actual solution…is to stop optimizing for ever-higher engagement…[but that is] anathema…[so they] instead claim ‘we don’t know what to do’….[It’s] not like things can get much worse than they already are. Right? Right?”

When an AI finally kills someone, who will be responsible? —

“[Now a] self-driving car…has hit and killed a pedestrian, with huge media coverage…but what laws should apply…[and] whether an AI system could be held criminally liable for its actions….

Criminal liability usually requires an action and a mental intent…[and there are] three scenarios that could apply….

  1. perpetrator via another, applies when an offense has been committed by a mentally deficient person or animal…deemed to be innocent. But anybody who has instructed the…person or animal can be held criminally liable….[So] ‘AI program could be…an innocent agent, with either…programmer or the user…held to be the perpetrator-via-another’….
  2. natural probable consequence, occurs when the ordinary actions of an AI system might be used inappropriately to perform a criminal act…[like] intelligent robot in a…factory that killed a human worker….The key question here is whether the programmer…[knew] this outcome was a probable consequence of its use….
  3. direct liability…requires both an action and an intent. An action is straightforward to prove if the AI system [action]… results in a criminal act or fails to take an action when there is a duty to act. The intent is much harder to determine….

Then there is the issue of defense….Could a program that is malfunctioning claim…defense of insanity? Could an AI infected by…virus claim defenses similar to coercion or intoxication?….

[Finally] issue of punishment. Who or what would be punished for an offense for which an AI system was directly liable, and what form would this punishment take?

For the moment, there are no answers to these questions….

[This might] have to be settled with civil law. Then a crucial question will be whether an AI system is a service or a product….

  • [If a] product, then product design legislation would apply, based on a warranty, for example….
  • [If a] service, then the tort of negligence applies…[and the] plaintiff would usually have to demonstrate three elements to prove negligence…[first, that the] defendant had a duty of care…[second, that the] defendant breached that duty…[and third, that the] breach caused an injury to the plaintiff.

And if all this weren’t murky enough, the legal standing of AI systems could change as their capabilities become more human-like.”

Trump-Linked Political Data Firm Offered to Entrap Politicians —

“Alexander Nix, who [ran]…Cambridge Analytica, had a few ideas for a…client looking for help in a foreign election [like]…send[ing] an attractive woman to seduce a rival candidate and secretly videotape…or send[ing] someone posing as a wealthy land developer to pass a bribe…[The] client, though, was actually a reporter….

[The firm] was founded by Stephen K. Bannon and Robert Mercer, a wealthy Republican donor…[with] psychographic modeling techniques…built in part with the data harvested from Facebook [and]…work for the Trump campaign….

Cambridge Analytica[’s]…parent company, the SCL Group…[has] clients that don’t want to be seen to be working with a foreign company…[so it] set[s] up fake IDs and…employs front companies and former spies on behalf of political clients.

The information that is uncovered through such clandestine work is then put ‘into the bloodstream to the internet….’ Then watch it grow, give it a little push every now and again, over time, to watch it take shape….

[This must] happen without anyone thinking, ‘That’s propaganda.’ Because the moment you think ‘that’s propaganda’…[the next] question is, ‘Who’s put that out?’”

Think One Military Drone is Bad? Drone Swarms Are Terrifyingly Difficult to Stop —

“[The use] of military drones…has already had a major impact on the future of warfare planning…[and the] Pentagon has…[a] program to build a new artificial intelligence for controlling its own drone efforts….

A simple drone built out of plywood can carry and drop a hand grenade. Russian drones…destroyed two Ukrainian ammo depots last year…[and] swarm of primitive drones struck Russian forces in Syria…[by] militias and guerilla organizations working with minimal tools….

[As] drones become more proficient at making decisions on their own, the need for a remote uplink could vanish altogether…[and] simply shooting at…drones isn’t a great option for stopping them…[because they can] overwhelm most existing kinetic countermeasures….

[U.S.] is working on…drone swarms, including a recent test of…100 robin-sized micro-drones…with a distributed intelligence…. It’s not clear yet what an effective response to this attack strategy will look like.

The military still grapples with fighting guerilla and insurgent forces, [so the]…advent of cheap, easily assembled drone swarms serving as a micro-bombing fleet could make such situations worse.

How do you identify and strike military centers when the “air force” attacking…can be assembled largely from scrap…components, powered by a “brain” equivalent to…[smartphone] launched from a parking lot?

Find more of my ideas on Medium at A Passion to Evolve.
Or click the Follow button below to add me to your feed.
If you prefer a weekly email newsletter — no ads, no spam, and I never sell the list — email me with “add me” in the subject line.
May you live long and prosper!
Doc Huston
