For the past decade, AI has been about automating the most mundane and repetitive tasks. Humans were still in charge of the creative work and the thinking. The boundary between humans and machines was clear: machines compute, but humans write the algorithm; AI can help fix a photo, but a human took it. That type of automation brought far more opportunities than challenges. It held a promise that if we, as a society, used it properly, it would free up more time for relationships, self-expression and creativity: the things that give us meaning, the things at the core of our humanity.
Then, recently and especially in the past year, generative AI came to fruition. Stable Diffusion, DALL·E, Midjourney and the like can create professional images from a text prompt, and ChatGPT can answer questions and write essays. The machine has become creative! That will shake up the boundary between humans and machines, and it is going to make us question the core of our humanity. How will we operate in a world where a machine can effortlessly make any drawing and write any essay? What will we value, and what will give us meaning? Why even bother putting effort into anything?
For the artists among us, there is an added ironic twist. Not only could generative AI threaten their livelihood and make them question the meaningfulness of their life's pursuit, but the machine does so by copying their work. Any artist's work that is put online is scraped without copyright approval, trained on and reproduced into new variations. If you copy from one person, that's plagiarism; but if you copy from everyone, it's potentially considered fair use.
No wonder widespread protest has begun. Artists on ArtStation, a website for showcasing CGI art and illustration, flooded the site's front page with "No to AI" images in protest.
Changing professions because of a technological breakthrough is inevitable; we have been through that cycle many times before. Elevator operators are a thing of the past, there aren't many travel agents left, and there are many more examples. There is even precedent in creative work: the invention of photography drastically reduced the number of realist painters, and later the digital camera transformed the photography industry.
Moreover, it is not ethical to try to stop technology in order to save jobs. That would privilege the rights of a few, those whose jobs are threatened, over the rights of the many who can enjoy the fruits of this automation. But there are ways to tackle this threat.
In a Pew Research canvassing of experts, opinions were split on whether automation will destroy more jobs than it creates.
I tend to agree with Jason Hickel that under the current economic system, where capital always seeks to grow, the cost savings gained by automation are reinvested into production, so people simply keep producing, to no end. New jobs are created, more products and materials are consumed, yet quality of life doesn't increase. Hickel, and other progressives, therefore argue that by slowing growth we can use those cost savings to provide everyone with a universal basic income and free up their time: time people can use for creative pursuits.
Whether we look at the near term, where jobs are transformed but not lost, as the World Economic Forum predicts, or at the longer term, where we work less and consume less, as Jason Hickel hopes, there is a roadmap for tackling job loss. But a conversation has to start about how to preserve human creativity in an age of creative machines.
Ethics in AI used to be about creating fair, unbiased and transparent models.
In dystopian science fiction, such as 2001: A Space Odyssey, the threat is always a machine that develops a will of its own and violently turns against people. Reality is turning out nothing like that. Who would have thought that the machine would threaten us by drawing pictures and writing essays?
Here are a few suggested policies we should consider if we want to shape a fair, just and meaningful future. Some are near term; others are harder to attain. But if human ingenuity can invent these machines, it can also invent the policies to govern them. It must; our future depends on it.
Every piece of original creation, whether photos, art or code, has a license that defines what can be done with it. Some creations are free, others are offered for royalties; some can be used for any purpose, including commercial use, while others are restricted to private use and research; and so on.
Using third-party work to train a machine learning model is a new use case, so it needs to be a new category in licensing: a work is either allowed for machine learning training (for free or for royalties) or not, depending on the license. Sloyd (of which the author is a co-founder) has artists creating 3D models and model parts specifically for AI, and they are compensated for it.
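To make the idea concrete, such a licensing category could be enforced as a filter at data-ingestion time. The sketch below is illustrative only: the `ml_training_allowed` flag and the per-use royalty field are hypothetical, not an existing licensing standard.

```python
from dataclasses import dataclass

@dataclass
class Work:
    """A creative work with hypothetical machine-learning license terms."""
    title: str
    creator: str
    ml_training_allowed: bool  # does the license permit training on this work?
    royalty_per_use: float     # fee owed to the creator per training run (0 = free)

def select_training_set(works):
    """Keep only works whose license permits ML training; tally royalties owed."""
    allowed = [w for w in works if w.ml_training_allowed]
    royalties = {}
    for w in allowed:
        royalties[w.creator] = royalties.get(w.creator, 0.0) + w.royalty_per_use
    return allowed, royalties

catalog = [
    Work("sunset.png", "alice", ml_training_allowed=True, royalty_per_use=0.05),
    Work("dragon.obj", "bob", ml_training_allowed=False, royalty_per_use=0.0),
    Work("castle.obj", "alice", ml_training_allowed=True, royalty_per_use=0.0),
]
training_set, owed = select_training_set(catalog)
```

In this toy run, the work whose license forbids training is excluded, and the pipeline accumulates what is owed to each consenting creator before training begins.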
When training data becomes scarcer, creators who come up with new forms of expression will have more time to enjoy the fruits of a new creative avenue before it is imitated by a machine. Alternatively, those creators can decide to sell their creations for AI training at a fair value and cash in on them quickly.
Then there are the models already trained on vast amounts of creations without consent. The ship has sailed; that cannot be undone.
Moreover, there is no fair way to redistribute the value created by those machines back to the creators. If the wealth cannot be redistributed, the next best thing is to offer it to the public for free. LAION, a dataset of links to images and matching descriptions used for text-to-image training, is already run as a non-profit (though it does not filter out copyrighted work). Stability AI, the creator of Stable Diffusion, is an open-source generative machine learning company: using their model is free, and they make money from add-ons. Similarly, as a settlement to get out of litigation, other companies will need to offer their machine learning tools for free, or spin the product off into a separate company that runs it as open source.
That might sound far-fetched, but the FTC is already considering, and in some cases has ordered, the deletion of algorithms trained on improperly obtained data, a remedy known as algorithmic disgorgement.
Companies and individuals designing and training AI should think about ways to empower people to be more creative. MIT Technology Review describes examples of artists who use generative models as a tool to augment their own work.
At Sloyd, we think in terms of AI-assisted creation rather than creation by AI. We started with a focus on automating 3D models for environments and game props, since most 3D artists love working on the characters for their games but could use help automating the rest. We pay community creators to build the training database, and that database will keep expanding, so there will always be creative human input in it. We also try to provide direct manipulation of objects through user inputs, so that users have better control of the output. In this case the suggestion is an ethical guideline rather than a policy; but once we have a clear ethical guideline, in time we will be able to propose a policy to support it.
We pay for electricity based on the cost of extracting raw materials and of production; the environmental cost is not factored in. That is true for every use: for heating and light, and also for the data centers running AI.
Data centers already account for a meaningful share of global electricity consumption, and AI workloads are making that share grow.
If we start measuring, and if we have to pay the true cost of energy, then in many cases we will find that the same objective could be achieved far more efficiently by a different type of automation or by more efficient AI. This is part of a bigger green revolution, but with data centers expected to reach 8% of electricity consumption, on top of home and office software consumption, software in general and AI in particular cannot be exempt.
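As a toy illustration of what "paying the true cost" could mean, one can add a carbon externality to the market price of the energy a training run consumes. Every figure below is hypothetical, chosen only to show the arithmetic, not to describe any real workload or market.

```python
# Hypothetical figures -- for illustration only, not real market data.
energy_kwh = 50_000    # assumed energy for one training run, in kWh
market_price = 0.12    # assumed market price, $ per kWh
grid_intensity = 0.4   # assumed emissions, kg CO2 per kWh
carbon_price = 0.10    # assumed carbon cost, $ per kg CO2

market_cost = energy_kwh * market_price                   # what is billed today
carbon_cost = energy_kwh * grid_intensity * carbon_price  # the unpriced externality
true_cost = market_cost + carbon_cost

print(f"market cost: ${market_cost:,.0f}")  # $6,000
print(f"carbon cost: ${carbon_cost:,.0f}")  # $2,000
print(f"true cost:   ${true_cost:,.0f}")    # $8,000
```

Under these made-up numbers the externality adds a third to the bill, which is the kind of gap that would make a cheaper, more efficient form of automation suddenly look attractive.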
The opportunities creative machines open up are profound, but so are the implications. If we can imagine the future we want, a future where humans find meaning through creativity, we can start putting in place the right guidelines for ethical generative machine learning. The ideas offered here can serve as a baseline for future policies, or perhaps better ideas will come up. One thing that is certain: the understanding of the implications, and the discussion of the measures we will take, is only just starting.