4 Real-Life Examples of Potential AI Pitfalls

by Jordan, May 24th, 2023

Machine learning (ML) and artificial intelligence (AI) have been delighting people with the fascinating tools that have hit the market recently.

Like all powerful tools, AI has both good and bad elements – understanding the issues that can (and will) arise goes a long way toward mitigating the problems that are bound to emerge.

Businesses need to approach AI with an understanding that many potential problems can stem from misuse, with over-automation and copyright infringement being the primary concerns. Here, we will discuss a few common ways businesses might use AI that are, or will become, problematic.

1. Using generative design to create imagery could save time, but potentially harm artists

In sheer volume of output, an AI against a human making drawings from scratch is no contest. As such, businesses can see the allure of using such systems to create graphics and other visual assets for their brand.

If you sell rings, you should probably look elsewhere for now.

However, many find the idea of generative art and design to be problematic.

And rightfully so – because these systems are trained on existing images found around the web, many of them can emulate the styles of well-known artists even while their ML processes produce nominally unique pieces.

Using images from certain systems that may pull heavily from some piece of existing, possibly copyrighted, artwork is a legitimate problem. It will be up to digital storefronts to police products that freely pull from everywhere on the web, including licensed works.

How to prevent the misuse of generative design in art

One of the most unfortunate aspects is that prevention will rely on market compliance. While Google and Apple may play ball, there will always be tools running amok on the web with less-than-ethical operating parameters.

Ethical uses of generative design include assisting with creating graphics from internal assets, helping with layout, and refining your custom artwork or branding assets. However, using these tools to freely incorporate others' IP, whether well-known or not (it especially hurts smaller designers), is out of the question.

It will be up to businesses to avoid these tools and stand against entities found to be using ones that infringe on the IP of others. Enforcement frameworks like the DMCA, and the agencies that cooperate with them outside the US, must step up their game to effectively combat such issues since, like anything else, not everyone will comply.

Perhaps most important is that businesses like OpenAI and Google ensure their systems, whether accessed freely through web apps or via backend APIs, aren't being used for malicious purposes. This will help keep businesses from using these tools to create graphics that unwittingly copy published artwork.
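In practice, that kind of safeguard could take the shape of a pre-generation policy gate. The sketch below is purely illustrative; check_policy and generate_image are hypothetical stand-ins, not any real provider's API:

```python
# Hypothetical sketch of a provider-side policy gate; check_policy and
# generate_image are stand-ins, not a real API.

BLOCKED_PATTERNS = ("in the style of", "official logo", "trademarked")  # illustrative

def check_policy(prompt: str) -> bool:
    """Flag prompts that appear to target protected IP or known artists."""
    lowered = prompt.lower()
    return any(pattern in lowered for pattern in BLOCKED_PATTERNS)

def generate_image(prompt: str) -> str:
    return f"<image for: {prompt}>"  # placeholder for a real generation backend

def handle_request(prompt: str) -> str:
    if check_policy(prompt):
        # Decline, or route to human review, instead of generating.
        return "Declined: prompt appears to target protected IP."
    return generate_image(prompt)

print(handle_request("a ring in the style of a famous jewelry house"))
```

Real systems would rely on far more than keyword matching, of course; the point is that the check happens before anything is generated.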

2. Some AIs are already being used to spoof brands

We’ve all seen rip-off brands at some point, but unlike the quality you see on street corners, many brand forgeries online can look like the real McCoy.

While these are obvious, online brand forgeries that will attempt to defraud your customers can look indistinguishable from the real thing. Source: Perception Partners

Like catfishing, where scammers create a false image of themselves to become the love of someone's life, many also masquerade as brands to defraud businesses' customer bases.

Much like using copyrighted artwork, emulating a business style for fictitious communications is already one way AI is helping those who mean to do harm.

Helping customers validate your identity

Those who make phonies also leverage the same technology that helps businesses segment and understand their audience.

Of course, this is primarily a problem for retail and businesses that might reach out to users with offers or discounts. In any case, helping your customers not fall victim to a fake requires education.

The IRS has an entire page on how to know whether a communication really came from the agency, since it's so commonly impersonated in scams.

The IRS sets a good example, but you should be more specific for your business. You can kill two birds with one stone when inviting people to sign up for loyalty rewards or marketing communications by answering questions like:

  • What do your promotions look like?
  • How often do you send out information, and from where (i.e., email address, SMS number, etc.)?
  • Any unique traits to look for?

Providing such information and making it easy to access will help cut down on possible issues stemming from users receiving false communications using your brand identity.
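You can also make legitimate messages technically verifiable. One illustrative option (my assumption, not something the article prescribes) is to sign promotional links with an HMAC so a "verify this offer" page or support agent can confirm a message really came from you:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # hypothetical server-side secret

def sign_offer(offer_id: str, email: str) -> str:
    """Build a promotional link carrying a verifiable signature."""
    payload = f"{offer_id}:{email}".encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()[:16]
    return f"https://example.com/offers/{offer_id}?u={email}&sig={sig}"

def verify_offer(offer_id: str, email: str, sig: str) -> bool:
    """Back a 'verify this message' page: confirm the link came from us."""
    payload = f"{offer_id}:{email}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(expected, sig)

print(sign_offer("SPRING20", "jane@example.com"))
```

At the mail-transport level, standards like SPF, DKIM, and DMARC serve the same goal, letting receiving mail servers confirm a message was actually sent from your domain.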

3. Using AI before it’s ready can lead to disaster

Would you use the autopilot in a Tesla if it got you to your destination 90% of the time?

You don’t even have to drop a lit cigarette for it to burst into flames automatically. | Source: FOX13

AI errs much like a human: when it gets the first step in some process wrong, the issue snowballs and can lead to radically incorrect answers.


Better yet, combine the loose constants first. | Source: Alexander Tutoring


In humans and machines, biases work much the same way. A particular belief will function as a starting point for reasoning – when the process is algebraic, an early computational error will stack and lead to an incorrect final answer.
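To make that stacking concrete, here is a toy illustration (my example, not the article's) of solving 3(x + 2) = 21, where one bad first step makes every later, locally valid step deliver a wrong answer:

```python
def solve_correct() -> float:
    # 3(x + 2) = 21
    rhs = 21 / 3    # Step 1: divide both sides by 3 -> x + 2 = 7
    return rhs - 2  # Step 2: subtract 2 -> x = 5

def solve_with_early_error() -> float:
    # Same equation, but Step 1 "distributes" incorrectly: 3x + 2 = 21
    rhs = 21 - 2    # Step 2 operates on the wrong premise -> 3x = 19
    return rhs / 3  # Step 3: x = 6.33..., even though Steps 2-3 were valid

print(solve_correct())           # 5.0
print(solve_with_early_error())  # 6.333... -- the early error persisted
```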

However, many other processes aren’t quite so empirically rigid and instead aim for a certain range of acceptable output.

While we can use AI to check itself, there is still an immense need for human oversight, as AI lacks the intuition (and cognitive stimuli) that cause humans to re-evaluate their approach.

AI needs a lot of testing now and for the foreseeable future

We already see the negative impact of using AI for different tasks, especially when it serves as both the first and last line of defense.

Some businesses that have adopted chatbots and other AI-driven tools, such as for customer service, find that they’ve become too reliant on these systems, causing the experience to suffer.

Even though AI is excellent for many processes, it makes mistakes, and unlike a human, most systems can't course-correct once an issue has taken root. While some systems can be trained to provide comprehensive services, that takes time.

Across the board, AI training still needs much more time. Virtually nothing in any AI discipline is perfect, especially regarding communication and real-time navigation. Though AI should continue to improve universally, gains will begin to diminish, and most issues should become infrequent and less severe.

Until then, most highly sophisticated AI systems will need regular supervision from developers and users. 
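For the chatbot case above, one common supervision pattern is a confidence-gated handoff: the bot answers on its own only when it is confident and escalates to a person otherwise. A minimal sketch, with bot_reply as a stand-in for a real model call rather than any vendor's API:

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative value; tune for your risk tolerance

def bot_reply(question: str) -> tuple:
    # Stand-in for a real model call; returns (answer, confidence score).
    return ("You can reset your password from the Settings page.", 0.62)

def escalate_to_human(question: str) -> str:
    return f"Routing to a human agent: {question!r}"

def answer_ticket(question: str) -> str:
    reply, confidence = bot_reply(question)
    if confidence < CONFIDENCE_THRESHOLD:
        # Low confidence: hand off rather than let an error take root.
        return escalate_to_human(question)
    return reply

print(answer_ticket("How do I close my account?"))
```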

4. Hallucinations can easily derail an AI into the obscure

When humans hallucinate, it's either self-inflicted via hallucinogenic compounds or the result of some kind of abnormality in the body.

The main intersection between human and machine hallucination is that a powerful false belief influences behavior. In a human hallucination, behavior usually changes to accommodate the hallucinated element, with the mind treating it as real.

Though language models and other systems appear fluent in their mediums, they tend to be susceptible to hallucinations on complex topics.

What this looks like is confidently treating some untrue condition as if it were true; depending on the subject, it can have varying consequences.
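One practical warning sign is disagreement: ask the model the same question several times and treat inconsistent answers as a cue for human review. A minimal sketch of this self-consistency check, with a hypothetical generate function standing in for any sampled language model:

```python
import random
from collections import Counter
from typing import Optional

def generate(question: str) -> str:
    # Stand-in for a sampled LLM call; real answers vary between runs.
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

def self_consistent_answer(question: str, samples: int = 5,
                           min_agreement: float = 0.8) -> Optional[str]:
    answers = [generate(question) for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    if count / samples < min_agreement:
        return None  # answers disagree: flag for human review instead
    return best

print(self_consistent_answer("What is the capital of France?"))
```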

Things can also go wrong when identifying objects, a typical computer vision task in manufacturing.

Quite simply, some stuff looks like other stuff, as seen in the video example above. Slight deviations in perspective or lighting conditions can dramatically alter what a computer vision system detects.
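Because near-lookalikes are the core problem, a practical guard is to check the margin between the top two predicted classes and route near-ties to a person. A minimal sketch, with classify as a stand-in for a real vision model:

```python
def classify(image) -> dict:
    # Stand-in for a real vision model; returns class probabilities.
    return {"bolt": 0.48, "screw": 0.44, "washer": 0.08}

def inspect(image, min_margin: float = 0.15) -> str:
    ranked = sorted(classify(image).items(), key=lambda kv: kv[1], reverse=True)
    (top, p1), (runner_up, p2) = ranked[0], ranked[1]
    if p1 - p2 < min_margin:
        # Two classes are nearly tied: lighting or angle may be fooling the model.
        return f"ambiguous ({top} vs. {runner_up}) -- route to human inspection"
    return top

print(inspect(image=None))
```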

As such, these systems, like everything else in AI at the moment, will need human oversight as innovators continue to improve their reliability. The same applies to language models, which sometimes erroneously hold two or more conflicting "truths" about any given subject.