Thanks to artificial intelligence, machines now possess basic human skills: they can speak, write, and see. Logically, the next evolutionary step is creation, and music is no exception.
Back in 1951, Alan Turing, an English mathematician, built a machine that could generate simple melodies. In the 1950s, the Illiac Suite became the first score composed by an electronic computer. In the 1990s, David Bowie was one of the first musicians to experiment with random lyrics generators, and Brian Eno wrote his album Generative Music 1 with the help of the Koan generative software. In 2016, Sony introduced Flow Machines, an AI system that also proved a great help for musicians.
Artificial intelligence is more than a co-writer, sound engineer, or producer; it also powers streaming services such as Spotify and Apple Music and shapes the music industry as a whole.
In order to create accurate recommendations for you, Spotify’s AI analyzes the listening history of the entire subscriber base against the songs in your heavy rotation.
AI compares the playlists you frequent to the playlists of other Spotify subscribers to locate users with tastes that are close to yours. After that, the AI finds song recommendations based on your own taste and on the music that users with similar tastes listen to.
Both the ‘Discover Weekly’ and ‘Release Radar’ playlists on Spotify are actually algorithm-based features. These automatically generated weekly playlists are highly praised for their spot-on recommendations. There is no magic behind them, only powerful AI technology.
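The core idea behind this kind of collaborative filtering can be sketched in a few lines. The listening matrix, user data, and song names below are invented for illustration; Spotify's actual system is far larger and more sophisticated:

```python
import numpy as np

# Toy listening matrix: rows = users, columns = songs,
# 1 means the song is in that user's playlists. All data invented.
listens = np.array([
    [1, 1, 0, 1, 0],  # you
    [1, 1, 0, 1, 1],  # user A: very similar taste
    [0, 0, 1, 0, 1],  # user B: different taste
    [1, 0, 0, 1, 1],  # user C: somewhat similar
])
songs = ["Song 1", "Song 2", "Song 3", "Song 4", "Song 5"]

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

you = listens[0]
# Similarity of every other user to you.
sims = np.array([cosine(you, other) for other in listens[1:]])

# Score each song you have NOT heard by the summed similarity
# of the users who listen to it.
scores = sims @ listens[1:]
scores[you == 1] = -np.inf          # exclude songs already in rotation
recommendation = songs[int(np.argmax(scores))]
print(recommendation)               # prints Song 5
```

The song favoured by the users most similar to you wins; real systems add many more signals (skips, context, audio features) on top of this skeleton.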
There are a lot of outstanding moments in the history of music co-created by artificial intelligence and human artists. In 2017, musician Taryn Southern composed an entire album in collaboration with AI; at the beginning of 2020, Björk worked with Microsoft to create AI-generated music that changes with the weather. Sometimes AI even goes solo: AIVA, a deep learning music composing service, released two albums and became the first virtual composer to be recognized by a music society.
While AI-generated music seems to some like a lazy, ‘weird new’ way of creating, David Bowie, one of the most unique and original musicians, used the AI-like Verbasizer script to generate lyrics for his songs back in the 90s. Check out the clip of Bowie discussing his creative process with Verbasizer here.
However, AI music generation isn’t as trailblazing as it seems. Essentially, it just provides access to sophisticated yet easy-to-use tools and limitless tunes to play around with. Whether you’re a musician or not, you can spark your creativity and have fun with AI-based music generation services like Amper, AIVA, and Boomy.
AI-based music analysis services can detect the most emotionally impactful track in a playlist, album, or other compilation. This helps musicians select the best track for a single release and aids playlist curators in creating the most emotionally stirring playlists. Such services also function as recommender systems for music streaming platforms thanks to their ability to detect similarities in BPM, mood, and style across tons of songs.
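As a rough illustration of how such similarity detection might work, the sketch below describes each track by a handful of invented features (BPM, mood, energy), normalizes them, and finds each track's nearest neighbour. Real services extract far richer features directly from the audio:

```python
import numpy as np

# Hypothetical feature table: BPM, mood valence (0 = sad, 1 = happy),
# and energy (0..1). Track names and values are invented.
tracks = {
    "Track A": [128, 0.8, 0.9],
    "Track B": [126, 0.7, 0.85],
    "Track C": [70,  0.2, 0.3],
}

names = list(tracks)
X = np.array([tracks[n] for n in names], dtype=float)

# Normalize each feature to [0, 1] so BPM doesn't dominate the distance.
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def most_similar(name):
    i = names.index(name)
    d = np.linalg.norm(X - X[i], axis=1)  # Euclidean distance to all tracks
    d[i] = np.inf                         # ignore the track itself
    return names[int(np.argmin(d))]

print(most_similar("Track A"))            # prints Track B
```

Swapping Euclidean distance for cosine similarity, or the hand-picked features for learned embeddings, gives progressively closer approximations of what commercial analysis services do.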
Before songs become available for our listening pleasure, they go through numerous editing steps: many hours of attentive listening and of mixing and mastering choices, each changing the final sound of a song or record. It’s a lengthy process that demands a lot from human specialists, and another music-related task that artificial intelligence can accelerate and enhance.
LALAL.AI, an AI-powered stem separation service, deconstructs mixed songs into their constituent parts: vocal and instrumental tracks. Machine learning algorithms help isolate each stem accurately and cut the entire track-splitting process down to mere seconds. The separated vocal and instrumental stems can be used to create song covers, DJ mixes, karaoke backing tracks, and more.
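Conceptually, many stem separators work in the time-frequency domain: the mix is turned into a spectrogram, a model predicts a mask for each source, and the masked spectrograms are turned back into audio. The sketch below is a deliberately crude stand-in that separates two synthetic tones with a hand-made frequency mask instead of a learned one; it says nothing about LALAL.AI's actual model:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(fs) / fs                     # one second of audio
bass = np.sin(2 * np.pi * 110 * t)         # stand-in "instrumental"
vocal = np.sin(2 * np.pi * 1000 * t)       # stand-in "vocal"
mix = bass + vocal

# Move the mix into the time-frequency domain.
f, seg_t, Z = stft(mix, fs=fs, nperseg=512)

# A real separator predicts a soft mask with a neural network trained
# on paired vocal/instrumental data; here a crude frequency cut at
# 500 Hz (an arbitrary choice) stands in for the learned mask.
mask = (f > 500).astype(float)[:, None]

_, vocal_est = istft(Z * mask, fs=fs, nperseg=512)
_, bass_est = istft(Z * (1 - mask), fs=fs, nperseg=512)
```

A fixed frequency split like this fails the moment vocals and instruments overlap in frequency, which is exactly why the mask has to be learned from data rather than hand-coded.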
smart:EQ 2 is an AI-driven tool for music equalization. It automates the process of balancing out specific frequencies across the tracks of an album or EP so that they complement each other. Its machine learning algorithm picks up details that may escape the human ear, automatically corrects tonal imbalances, and increases the clarity of mixes.
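A heavily simplified sketch of automatic tonal balancing: measure the energy of coarse frequency bands and attenuate any band that sticks out above the average. An ML equalizer makes far finer, learned adjustments; the signal, bands, and target used here are invented:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs
# Toy "track" with an over-represented 200 Hz band (values invented).
track = 4 * np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 2000 * t)

spectrum = np.fft.rfft(track)
freqs = np.fft.rfftfreq(len(track), 1 / fs)

# Split the spectrum into coarse bands and measure their energy.
bands = [(0, 500), (500, 4000)]
energies = [np.sum(np.abs(spectrum[(freqs >= lo) & (freqs < hi)]) ** 2)
            for lo, hi in bands]

# Attenuate any band louder than the average: a crude stand-in for
# the adaptive tonal-balance corrections an ML equalizer applies.
target = np.mean(energies)
for (lo, hi), e in zip(bands, energies):
    if e > target:
        sel = (freqs >= lo) & (freqs < hi)
        spectrum[sel] *= np.sqrt(target / e)

balanced = np.fft.irfft(spectrum, n=len(track))
```

The toy version treats every track the same way; the point of the machine learning approach is to adapt the correction to the content of each mix.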
iZotope and LANDR use machine learning to replicate the work of mastering engineers. Both services automate the final mix preparation process and deliver mastering results almost instantly, far faster than any human engineer could. Though most of the process is automated, users still have control over their mixing preferences and sound influences.
Music analysis is the process of retrieving music data and breaking songs down into their characteristics. It helps musicians, labels, producers, publishers, and playlist curators organize and recommend music. With the help of AI, this analysis can be performed significantly faster and more accurately.
Cyanite’s artificial music intelligence listens to millions of songs in just minutes, deriving information from each track to assign it specific characteristics. The service uses two types of analysis: symbolic analysis gathers information about rhythm, chord progression, harmony, and the like, while audio analysis deals with the timbre, genre, and emotion of the music.
Creating algorithms (optimized models) from data is the most important aspect of machine learning. In contrast to systems that follow strict rules and perform tasks the same way every time, machine learning algorithms improve with experience, learning from additional input data.
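A minimal sketch of what "improving with experience" means: fitting the same linear model to 5 versus 5,000 noisy examples of a hidden relationship. The function, noise level, and sample sizes below are invented for illustration; with more data, the learned model lands closer to the truth:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return 2.0 * x + 1.0                    # hidden relationship to learn

def fit_and_test(n_train):
    # "Experience" = n_train noisy examples of the relationship.
    x = rng.uniform(0, 10, n_train)
    y = true_f(x) + rng.normal(0, 2.0, n_train)
    slope, intercept = np.polyfit(x, y, 1)  # learn a linear model
    x_test = np.linspace(0, 10, 100)
    return np.mean((slope * x_test + intercept - true_f(x_test)) ** 2)

err_small = fit_and_test(5)      # model trained on little experience
err_large = fit_and_test(5000)   # model trained on lots of experience
print(err_small, err_large)
```

A rule-based program, by contrast, would produce exactly the same output no matter how many examples it had seen.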
Computers are modelling different things, processes, and phenomena all the time. For example, any text editing software is a model of a typewriter, a digital calendar is a model of a paper calendar, and Excel is a model of a checkered notebook. Although these models are significantly more advanced than their real-life counterparts, they cannot be considered ‘intelligent’, since they do not learn and only repeat pre-programmed behaviour.
Similarly, the stem separation quality of DAW plugins and other conventional software won’t improve as more data comes in, while the results of LALAL.AI’s stem splitting keep getting better over time thanks to its machine learning algorithms.