
Creative AI ‘Shakes’ the Core of Humanity and Requires a Broader Discussion About Ethics

by Avi Lanter, January 9th, 2023

Too Long; Didn't Read

We are entering a new era: the era of the creative machine. How we work, create and interact will change. This opens great new possibilities but also threats, and unrest has already begun, with artists protesting against copyright infringement. However, the implications are much broader than protecting jobs and copyright. Creativity is at the core of our humanity, and if creativity can be produced effortlessly in a click, our humanity itself is in question. To operate in this new era, we need to discuss ethical AI in a broader sense. We also need to introduce the right policies: a new licensing category, public ownership of AI, and factoring in the true cost of AI training and inference.

For the past decade, AI has been about automating the most mundane and repetitive tasks. Humans were still in charge of the creative work and the thinking. The boundary between humans and machines was clear: machines compute, but humans write the algorithm; AI can help fix a photo, but a human took it. That type of automation brought far more opportunities than challenges. As a society, it held a promise: if we could harness it properly, it would free up more time for relationships, self-expression and creativity, the things that give us meaning, the things at the core of our humanity.


Then, recently, and more so in the past year, generative AI came to fruition. Stable Diffusion, DALL·E, Midjourney and the like can create professional images from a text prompt, and ChatGPT can answer questions and write essays. The machine has become creative! That will shake up the boundary between humans and machines and make us question the core of our humanity. How will we operate in a world where a machine can effortlessly make any drawing and write any essay? What will we value, and what will give us meaning? Why even bother to put effort into anything?

This is not about protecting jobs

For the artists among us, there's an added ironic twist. Not only could generative AI threaten their livelihood and make them question the meaning of their life's pursuit, but the machine does so by copying their work. Any artist's work that's posted online is scraped without permission, trained on, and reproduced into new variations. If you copy from one person, that's plagiarism; if you copy from everyone, it's potentially considered fair use, and it's also untraceable.


No wonder widespread protest has begun. Artists on ArtStation, a website for showcasing CGI art and illustrations, changed their profile pictures to 'No AI', which forced ArtStation to change its policy: every item can now be tagged 'NoAI', indicating that it may not be used for training AI. That's a step in the right direction, because it forces AI companies to adhere to ownership rules. However, it will not stop the automation of art altogether, and it shouldn't.


Changing professions because of a technological breakthrough is inevitable; we've been through that cycle many times before. Elevator operators are a thing of the past, there aren't many travel agents left, and there are plenty of other examples. There is even precedent in creative work: the invention of photography drastically reduced the number of realist painters, and later the digital camera transformed the photography industry.


Moreover, it's not ethical to try to stop technology in order to save jobs. That would privilege the rights of a few, those whose jobs are threatened, over the rights of the many who could enjoy the fruits of this automation. But there are ways to tackle this threat.

There are ways to replace jobs that will be lost to generative AI

In a Pew Research survey conducted in 2017, most experts believed that just as some jobs become obsolete, even more new jobs are created. Based on historical trends, the World Economic Forum estimated in 2020 that automation will create twelve million more jobs than it displaces.


I tend to side with Jason Hickel: under the current economic system, where capital always seeks to grow, the cost savings gained from automation are reinvested into production, and people keep producing, to no end. New jobs are created, more products and materials are consumed, yet quality of life doesn't increase. Hickel and other progressives therefore argue that by slowing growth we can use those cost savings to provide everyone with a universal basic income and free up their time, time that people can use for creative pursuits.


Whether we look at the near term, where jobs are transformed but not lost, as the World Economic Forum expects, or at the longer term, where we work less and consume less, as Jason Hickel hopes, there is a roadmap for tackling job loss. But a conversation has to start about how to preserve human creativity in an age of creative machines.

Ethics in generative AI

Ethics in AI used to be about creating unbiased systems and upholding privacy rights under constant surveillance by software with face and object detection. Then, when generative machine learning arrived, ethics expanded to preventing the spread of fakery in a world of deepfakes, and to copyright. All of those concerns remain valid, but there's something wider at stake: ethics in creative machine learning should also be about keeping the humanity in humans.


In dystopian science fiction, such as 2001: A Space Odyssey, the threat is always that the machine develops a will of its own and violently turns against people. Reality is turning out nothing like that. Who would have thought that the machine would threaten us by drawing pictures and writing essays?


Here are a few suggested policies that we should consider if we want to shape a fair, just and meaningful future. Some are near-term; others are harder to attain. But if human ingenuity can invent these machines, it can also invent the policies to govern them. It must; our future depends on it.

Update creative licenses for use in AI training

Every piece of original creation, whether photos, art or code, carries a license that defines what can be done with it. Some creations are free; others are offered for royalties. Some can be used for any purpose, including commercial ones, while others are for private use and research only. And so on.


Using third-party work to train a machine learning model is a new use case, so it needs a new category in licensing: a work is either allowed for machine learning training (for free or for royalties) or not, depending on its license. Sloyd (of which the author is a co-founder) has artists creating 3D models and model parts specifically for AI, and they are compensated for it.
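To make the idea concrete, here is a minimal sketch of how a training pipeline could honor such a license field. The field names (ml_training, royalty_bearing) are hypothetical; no standard licensing vocabulary defines them yet, which is exactly the gap this policy would fill.

```python
# A minimal sketch of license-aware training-data filtering.
# The license fields below are hypothetical placeholders for a
# future "AI training" licensing category.
from dataclasses import dataclass


@dataclass
class Work:
    url: str
    ml_training: bool      # does the license permit AI training at all?
    royalty_bearing: bool  # if permitted, are royalties owed to the creator?


def filter_training_set(works: list[Work]) -> tuple[list[Work], list[Work]]:
    """Split scraped works into freely trainable and royalty-owed sets,
    discarding anything whose license forbids AI training."""
    free, royalties = [], []
    for work in works:
        if not work.ml_training:
            continue  # license forbids training; exclude entirely
        (royalties if work.royalty_bearing else free).append(work)
    return free, royalties


works = [
    Work("https://example.com/a.png", ml_training=True, royalty_bearing=False),
    Work("https://example.com/b.png", ml_training=True, royalty_bearing=True),
    Work("https://example.com/c.png", ml_training=False, royalty_bearing=False),
]
free, royalties = filter_training_set(works)
print(len(free), "free to train on;", len(royalties), "owe royalties")
```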


When training data becomes scarcer, creators who come up with new modes of expression will have more time to enjoy the fruits of that new creative avenue before it is imitated by a machine. Or those creators can decide to sell their creations for AI training at a fair value and cash in on their new creations quickly.

Then there are models already trained on vast amounts of creations without consent. That ship has sailed and cannot be unsailed. Litigation over work already used in training will go on for years, and the models that are out there now will be used billions of times before the litigation is settled.


Moreover, there's no fair way of redistributing the value created by those machines back to the creators. If the wealth cannot be redistributed, the next best thing is to offer it to the public for free. LAION, a dataset of links to images and matching descriptions used for text-to-image training, is already a non-profit (though it does not filter out copyrighted work). Stability AI, the creator of Stable Diffusion, is an open-source generative machine learning company: using its models is free, and it makes money from add-ons. Similarly, as a settlement to get out of litigation, other companies will need to offer their machine learning tools for free, or spin those products off into separate companies that run them as open source.


That might sound far-fetched, but the FTC is already considering policies for breaking up big tech companies to increase competition. Using the same policies to govern the biggest technology shift of our lifetime is not far-fetched.

Value AI that boosts creativity rather than diminishes it

Companies and individuals designing and training AI should think about ways to empower people to be more creative. MIT Technology Review gives a few examples where the machine enhances creativity rather than replacing it: a text-to-image plugin for Photoshop, a Stable Diffusion plugin for Blender, and a text-to-image widget for Office.


At Sloyd, we think in terms of AI-assisted creation rather than creation by AI. We started by focusing on automating 3D models for environments and game props, since most 3D artists love working on the characters for their games but could use help automating the rest. We pay community creators to build the training database, and that database will keep expanding, so there will always be creative human input in it. We also try to provide direct manipulation of objects through user inputs, so that users have better control of the output. In this case, the suggestion is an ethical guideline rather than a policy proposal; but once we have a clear ethical guideline, in time we'll be able to propose a policy to support it.

Factor in the true cost of training - including environmental impact

We pay for electricity based on the cost of extracting raw materials and producing power; the environmental cost is not factored in. That's true for every use: for heating and light, and also for the data centers running AI.


Data centers consume about 2% of total US energy, and that share is expected to grow to 8%, driven in large part by AI energy consumption, which doubles every three months. Training a machine learning model consumes a lot of energy, but inference (using a trained model to create an output) is resource-intensive too. Right now it's hard to find data on how much energy inference consumes, and that's part of the problem. With home appliances we know how much energy is consumed; we even have energy ratings. Why don't we have similar energy ratings for software? (This was proposed by Øyvind Sørøy, my colleague at Sloyd.)
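Tooling for this kind of measurement is starting to appear. As a rough sketch, assuming the open-source codecarbon library (which estimates energy use from hardware utilization and converts it to CO2-equivalent emissions), a developer could wrap each inference call in a tracker and report the footprint alongside the output. The model call here is a placeholder, not any particular product:

```python
# A sketch of per-run energy accounting for inference, using the
# open-source codecarbon library (pip install codecarbon).
from codecarbon import EmissionsTracker


def run_inference(prompt: str) -> str:
    # Placeholder standing in for a real model call,
    # e.g. a diffusion sampler or a text generator.
    return f"generated output for: {prompt}"


tracker = EmissionsTracker(project_name="inference-energy-demo")
tracker.start()
result = run_inference("a watercolor of a lighthouse at dusk")
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent for the run

print(result)
print(f"Estimated emissions for this inference: {emissions_kg:.6f} kg CO2eq")
```

Aggregating such numbers across runs would be one way to back the kind of software energy rating proposed above.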


If we start measuring, and if we have to pay the true cost of energy, then in many cases we will find that the same objective could be achieved far more efficiently by a different type of automation or by more efficient AI. This is part of a bigger green revolution, but with data centers expected to reach 8% of consumption, on top of home and office software consumption, software, and AI in particular, cannot be exempt.

The discussion around generative AI ethics and policy has only just begun

The possibilities creative machines open up are profound, but so are the implications. If we can imagine the future we want, a future where humans find meaning through creativity, we can start putting in place the right guidelines for ethical generative machine learning. The ideas offered here can be a baseline for future policies, or perhaps better ideas will emerge. One thing is certain: our understanding of the implications, and the discussion of the measures we'll take, is only just beginning.