I was prompted to dig deeper into the taste of algorithms by the bizarre exchange a New York Times journalist had with the new AI-powered Bing (free link here). Since music is an important part of my life, I decided to ask ChatGPT to recommend some songs:
I was positively surprised by the recommendations: Queen, Led Zeppelin, Michael Jackson, Nirvana, and, so as not to be too “rock anthem”, even a song by Ed Sheeran. Having worked in the audio business before, I decided to probe ChatGPT’s opinion of its fellow music algorithms.
First in line was Spotify:
I will create a playlist with your suggestions in Spotify, and see what else it recommends. Would you like to know the recommendations?
Sure, I would be interested in knowing which other songs would match my original list. While I don’t have personal preferences or emotions, I can recognize that Spotify’s recommendation system is designed to analyze the patterns in users’ listening habits and suggest songs that are similar in genre, mood, tempo, or other musical characteristics. So, it could be interesting to see what other songs and artists Spotify would recommend based on the songs I suggested earlier.
It looks like algorithms know about each other. So I decided to probe it about Shazam:
Shazam’s algorithm is based on a process called audio fingerprinting, which is used to compare an audio sample to a massive database of songs in order to identify a match. Here’s a brief overview of how the algorithm works:
Analysis: When a user records an audio sample using the Shazam app, the algorithm first analyses the sound to identify specific features, such as the rhythm, melody, and structure of the song. It does this by breaking the sound down into tiny chunks, or frames, and extracting key characteristics from each frame.
Fingerprinting: Once the features have been extracted, the algorithm creates a unique digital fingerprint for the song by combining the key characteristics from each frame into a single signature. This fingerprint is then compared to a massive database of millions of songs in order to identify a match.
Matching: To find a match, the algorithm uses a technique called hash matching, which involves comparing the fingerprint to a pre-computed database of hash values that correspond to specific songs. This allows the algorithm to quickly find potential matches and narrow down the search.
Confirmation: Finally, the algorithm performs a series of additional checks to confirm the match and provide additional information about the song, such as the title, artist, album, and lyrics.
Overall, Shazam’s algorithm is highly complex and uses advanced machine learning techniques to accurately identify songs in a matter of seconds. It’s a testament to the power of artificial intelligence and audio recognition technology in making our lives more convenient and enjoyable.
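ChatGPT’s four steps track the “constellation” approach described in Shazam’s published research fairly closely. To make the fingerprinting and hash-matching steps more concrete, here is a deliberately minimal Python sketch; the function names, thresholds, and peak-picking heuristics are my own illustrative choices, not Shazam’s actual implementation:

```python
# A toy sketch of Shazam-style audio fingerprinting: pick spectrogram
# peaks, pair nearby peaks into hashes, and vote on matches against a
# pre-built database. Parameters and names here are invented for clarity.
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import maximum_filter

def fingerprint(samples, rate, fan_out=5):
    """Turn raw audio into (hash, time_offset) pairs."""
    freqs, times, spec = spectrogram(samples, fs=rate, nperseg=1024)
    spec = np.log1p(spec)  # compress dynamic range
    # Keep only strong local maxima: the "constellation" of peaks.
    is_peak = (spec == maximum_filter(spec, size=15)) & (spec > spec.mean())
    peaks = np.argwhere(is_peak)            # rows of (freq_bin, time_bin)
    peaks = peaks[np.argsort(peaks[:, 1])]  # sort by time
    hashes = []
    for i, (f1, t1) in enumerate(peaks):
        # Pair each peak with a few peaks that follow it in time.
        for f2, t2 in peaks[i + 1 : i + 1 + fan_out]:
            dt = t2 - t1
            if 0 < dt <= 200:
                hashes.append((hash((int(f1), int(f2), int(dt))), int(t1)))
    return hashes

def match(sample_hashes, database):
    """database: hash -> list of (song_id, time_in_song).
    A true match is many shared hashes with a CONSISTENT time offset."""
    votes = {}
    for h, t_sample in sample_hashes:
        for song_id, t_song in database.get(h, []):
            key = (song_id, t_song - t_sample)
            votes[key] = votes.get(key, 0) + 1
    if not votes:
        return None
    (song_id, _offset), _count = max(votes.items(), key=lambda kv: kv[1])
    return song_id
```

The interesting trick is in match(): a genuine hit isn’t just many shared hashes, it’s many shared hashes whose time offsets line up, which is what lets the scheme identify a noisy ten-second clip in seconds.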
It definitely knew its stuff. Yet, it had an interesting remark at the end:
“It’s a testament to the power of artificial intelligence and audio recognition technology in making our lives more convenient and enjoyable.”
Notice the “our lives” part? When I asked it how Shazam made “your life more convenient and enjoyable”, it basically gave me a sales pitch on Shazam’s features. Still, I was curious which other music-related algorithms it knew. This was the answer:
There are several interesting music-related algorithms that have been developed in recent years. Here are a few examples:
Music recommendation algorithms: Music recommendation algorithms are designed to analyze a user’s listening habits and suggest new songs or artists that they may enjoy. These algorithms are used by popular music streaming services like Spotify, Pandora, and Apple Music to help users discover new music that matches their tastes.
Music transcription algorithms: Music transcription algorithms are designed to analyze audio recordings and convert them into sheet music or MIDI files. These algorithms use advanced signal processing techniques to separate the individual notes and rhythms of a musical performance, and can be used to create accurate transcriptions of complex pieces of music.
Music generation algorithms: Music generation algorithms are designed to create new music based on certain rules or parameters. These algorithms use machine learning techniques to analyze patterns in existing music and then generate new music that follows similar patterns. Some examples of music generation algorithms include DeepBach, which creates new music in the style of Johann Sebastian Bach, and AIVA, an AI composer that creates original music in a variety of styles.
Music analysis algorithms: Music analysis algorithms are designed to analyze the structure and content of a piece of music. These algorithms can be used to identify the key, tempo, and other musical characteristics of a piece, as well as to identify the presence of specific instruments or musical features.
Overall, these music-related algorithms are just a few examples of the many ways that artificial intelligence and machine learning are being used to transform the way we create, listen to, and interact with music.
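The recommendation algorithms at the top of that list are the easiest to demystify. One classic ingredient is item-to-item collaborative filtering: score candidate songs by how much their audiences overlap with the audiences of the songs you already like. Here is a toy sketch with invented listening data and my own function names; real services like Spotify blend this signal with audio features, context, and much more:

```python
# Toy item-to-item collaborative filtering: rank candidate songs by the
# cosine similarity of their listener sets to the songs in a seed playlist.
from collections import defaultdict
from math import sqrt

# Hypothetical listening data: user -> set of songs they played.
plays = {
    "ana":  {"Bohemian Rhapsody", "Stairway to Heaven", "Smells Like Teen Spirit"},
    "ben":  {"Bohemian Rhapsody", "Billie Jean", "Shape of You"},
    "cara": {"Stairway to Heaven", "Kashmir", "Black Dog"},
    "dev":  {"Billie Jean", "Beat It", "Shape of You"},
}

def listeners(song):
    return {user for user, songs in plays.items() if song in songs}

def cosine(a, b):
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b))) if a and b else 0.0

def recommend(playlist, top_n=3):
    scores = defaultdict(float)
    candidates = set().union(*plays.values()) - set(playlist)
    for seed in playlist:
        for song in candidates:
            scores[song] += cosine(listeners(seed), listeners(song))
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["Bohemian Rhapsody", "Stairway to Heaven"]))
# -> "Smells Like Teen Spirit" comes first (its listener overlaps both seeds),
#    with "Kashmir" and "Black Dog" tied behind it.
```

That single print line is the whole idea in miniature: people who liked what you liked also liked this.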
The music generation algorithms did catch my eye, and I spent a few hours playing with AIVA. Quite an interesting piece of software that can basically create a musical score for you.
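I have no visibility into AIVA’s internals, but the pattern-mimicking idea ChatGPT described (learn what tends to follow what, then sample more of the same) can be shown with something as small as a first-order Markov chain over note names. This is a toy with a made-up training melody, nowhere near DeepBach or AIVA:

```python
# Toy music generation: a first-order Markov chain over note names.
# It "learns" which note tends to follow which in a training melody,
# then samples a new melody with the same local patterns.
import random
from collections import defaultdict

training_melody = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]

transitions = defaultdict(list)
for current, nxt in zip(training_melody, training_melody[1:]):
    transitions[current].append(nxt)

def generate(start="C", length=12, seed=42):
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        followers = transitions.get(melody[-1])
        if not followers:          # dead end: restart from the start note
            followers = [start]
        melody.append(rng.choice(followers))
    return melody

print(" ".join(generate()))
```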
Once I had the music, I realized I needed some lyrics, so I turned back to my ChatGPT friend for some rap lyrics:
Two lines I loved:
Gotta show me love, respect and devotion
Or you’ll be stuck in the friend zone with no promotion
Which means ChatGPT has the dating moves ready.
Yet, the whole thing kind of stinks of:
nepotism
/ˈnɛpətɪz(ə)m/
noun
the practice among those with power or influence of favouring relatives or friends.
Does this apply to algorithms recommending other algorithms?
Now, if you’ll excuse me, I have a musical career to get started…