
The Quest for Better Music Recommendations Through ChatGPT Prompt Engineering

by Eric Allen, March 22nd, 2023

Too Long; Didn't Read

A Software Developer's exploration of the limitations and interesting intersections between Software Development and Prompt Engineering while trying to leverage ChatGPT for song recommendations. The nuances of Prompt Engineering led to the creation of an Open Source Obsidian Plugin to help researchers working with LLMs, a music recommendation engine powered by ChatGPT's imagination, and some pretty solid playlists.

Like most of the internet, I’ve become deeply interested in Artificial Intelligence (AI) in the form of Large Language Models (LLMs) like OpenAI’s GPT-3 and its clever, chatty, and code-savvy relative ChatGPT.

Many years ago, I remember reading about how Common Lisp was the language for AI. As a fairly green developer who had just pivoted from ActionScript after Steve Jobs killed Flash and one who was still reinventing wheels in PHP, WordPress, and JavaScript, I felt lost at sea, trying to even understand where to begin with engaging in the world of AI.

Then, as Python became the language of Machine Learning (ML), I struggled to understand how to learn and grok an entirely new ecosystem — especially one where it wasn’t even clear which version of the language I should use.

Luckily, all of this has changed:

  • I know how to pick up and work with new languages, frameworks, ecosystems, etc.
  • It’s abundantly clear what version of Python you should use
  • Most importantly, people much smarter than me have made massive strides in the AI space, and the technology is now more accessible than ever

Any one of us can now pick up a wide array of Natural Language Processing (NLP) tools or use Application Programming Interfaces (APIs) to have complicated deep learning models process our data, pretend to be our life coach, or generate funny cat pictures for us.

This rapidly-expanding field is full of exciting opportunities to explore these new tools, engage with computing tasks in ways we could only imagine previously, and educate each other about the ecosystem.

It even comes with cool new sub-fields like Prompt Engineering, but at the same time, it can be full of inane, inaccurate, and imagined information — and generate even more SEO blog rot via side hustles. It also opens up an array of emerging vulnerabilities like “Prompt Injection” and provides an ever-growing, amorphous attack surface.

It’s also the closest I’ve ever come to feeling the otherworldliness of programming. In this new frontier, I’ve envisioned the makings of a magic system and imagined the prompt engineer as a technomancer.

Prompt Engineering

Photo by Jr Korpa on Unsplash:

People are doing all sorts of interesting things with ChatGPT.

An example that I often come back to is where the author asked ChatGPT to act as a Linux Terminal and then proceeded to run a Docker container and send prompts to an alternate ChatGPT connected to this imagined Docker container’s version of the internet.

While I understood that ChatGPT wasn’t actually executing these commands and that it hadn’t really stumbled into some rift in the multiverse that allowed it to peer into another reality, I was still in awe.

It had been trained on such a large corpus of human knowledge that it could convincingly emulate the response each command should produce.

I couldn’t stop thinking about how fundamentally different and exciting this was from anything that I had worked with before.

I became obsessed with this sort of “Imaginary Computing.” If ChatGPT could pretend to be a computer and access an imaginary internet in an alternate, imagined reality, what else could it pretend to do?

Could it teach me a programming language? Could it be an effective rubber duck?

Imaginary Programming


I’ve explored teaching myself Rust via Advent of Code but ran into compiler issues with borrowing and lifetimes in my attempted solution for Day 7 that had me stumped. I had to shift my focus toward more critical, time-sensitive projects, leaving my Advent of Rust unfinished.

I saw plenty of examples of ChatGPT providing code. Some of it was of questionable provenance and quality, but my code wasn’t compiling anyway, so how much worse could it get?

This all sounded great, but as I implemented its suggestions, it quickly became apparent that ChatGPT didn’t actually “understand” the code we were looking at. The code it provided had other issues within the context of the overall program. Trying to feed more code into it to give it context ended up losing the conversation thread completely and circling back to code with the same problem as the original.

This is when I truly understood tokens and the limitations of ChatGPT’s in-conversation memory.

The previous context of your conversation is added to your requests to guide the bot’s response and provide the feeling of a flowing conversation. But there’s a limit to how much information that “memory” can store and recall.

Text is summarized. Words are lost.

Like our own memories, ChatGPT isn’t perfect at recalling what has happened. This fallibility was a startling revelation because perfect memory is one of the things I’ve always associated with computers and AI.
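Clients that talk to a model directly have to manage this themselves: on each request, older messages get dropped (or summarized) to fit the token budget. As a rough sketch of the dropping approach — the function names and the crude characters-per-token heuristic here are my own assumptions, not how ChatGPT actually does it:

```python
def trim_context(messages, max_tokens, count_tokens=lambda m: len(m["content"]) // 4):
    """Keep the most recent messages that fit the token budget.

    Always preserves the initial system/instruction message; everything
    older than the budget allows is silently forgotten.
    """
    system, rest = messages[0], messages[1:]
    budget = max_tokens - count_tokens(system)
    kept = []
    for msg in reversed(rest):  # walk backwards from the newest message
        cost = count_tokens(msg)
        if cost > budget:
            break  # this message (and everything older) falls out of memory
        kept.append(msg)
        budget -= cost
    return [system] + list(reversed(kept))
```

Anything that falls outside the window is simply gone, which is exactly the forgetting I was running into mid-conversation.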

It was able to provide helpful explanations of borrowing and lifetimes and apply those concepts to smaller snippets of code, but even ChatGPT couldn’t help me salvage a Rust program that was trapped in my non-Rust way of thinking.

So, if I couldn’t use it for actual programming, what about those imagined worlds it can develop? Can we get it to imagine it is a code interpreter? A build system? An application? An API? Another AI?

Arcane Incantations


One of my first thoughts was to see if I could create a Domain-Specific Language (DSL) and describe how to be an interpreter for that DSL in a few hundred tokens that I could embed in my initial ChatGPT prompts.

Since no one else in my life is as easily amused and fascinated by technology, I turned to ChatGPT for direction on this new path.

ChatGPT: No, it is not possible to embed enough instructions in the prompt to teach the LLM how to parse and execute a metaprogramming DSL without making changes to the chat client or the underlying LLM.

It wasn’t a fruitful endeavor.

ChatGPT: No, ChatGPT does not have the capability to associate values with specific strings and recall them when asked. It is a language model that is trained to generate text based on the input it receives, but it does not have the ability to store values or recall them like a traditional computer program.

But I’ve seen it doing all sorts of weird stuff it probably shouldn’t be able to, and there are even some papers describing how LLMs can learn to use tools and others describing emergent behavior in LLMs. It doesn’t seem that far-fetched to ask it to figure out how to associate some variables with certain strings and refer to them later.

I decided to see what kind of rudimentary programming might be supported by how a language model tries to predict what text it should use by assigning some configuration variables.

Me: $settings: { persona: “Homer Simpson”, responseLength: “short” }

ChatGPT: Alright! I’ll respond with short answers as if I’m Homer Simpson.

It’s been many years since I paid much attention to the Simpsons, but this was one persona that it wasn’t great at emulating.

After playing around a bit, I asked ChatGPT to reset its settings and initialized a new testing boolean variable.

Me: if ($settings.testing) {
  Respond with a thumbs up ASCII art
} else {
  Ask me a question
}

ChatGPT: 👍

Me: $settings.testing = false

ChatGPT: What questions do you have for me today?

This led me down a rabbit hole of trying to identify the limits of Imaginary Programming. It’s not immediately obvious what these transformers might be capable of after you’ve given them enough context to help point their imaginations toward what you want them to do.

I learned about the differences between Zero-Shot Prompting — where the bot responds to your request without any additional context — and Few-Shot Prompting — where you give the bot some examples to help guide its output — and noticed that the model seemed to perform best when you could exchange a few prompts with it as a bit of “programming” first.
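With a chat-style API, that warm-up exchange can be baked in as few-shot examples: a handful of fabricated user/assistant turns placed ahead of the real question. A sketch of what the two message lists might look like (the example songs and one-word answers are invented for illustration):

```python
# Zero-shot: just the question, no guidance.
zero_shot = [
    {"role": "user", "content": "Classify the mood of 'Your Hand in Mine - Explosions in the Sky'."},
]

# Few-shot: a couple of fabricated exchanges demonstrate the desired format
# before the real question is asked.
few_shot = [
    {"role": "system", "content": "You classify the mood of songs in one word."},
    {"role": "user", "content": "Classify: 'Hurt - Johnny Cash'"},
    {"role": "assistant", "content": "Mournful"},
    {"role": "user", "content": "Classify: 'Walking on Sunshine - Katrina and the Waves'"},
    {"role": "assistant", "content": "Euphoric"},
    {"role": "user", "content": "Classify: 'Your Hand in Mine - Explosions in the Sky'"},
]
```

The fabricated assistant turns do the same job as my back-and-forth “programming” in the chat window, just without spending real conversation turns on it.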

As I dug deeper, ChatGPT even asserted that it could store values, but only for the current session.

ChatGPT: As a language model AI, I don’t have memory or state in the traditional sense. The values of variables and settings are only stored temporarily in the context of a single session, and I don’t persist any information between conversations. Each new conversation is treated as a fresh start, and any settings or variables defined in previous conversations won’t carry over.

Can we get it to make decisions based on those values?

Me: $character: Ted Lasso

Instructions: Answer all questions in the persona of $character until a new persona is provided via “$character: “

ChatGPT: Alright, let’s do this! I’ll be answering as Ted Lasso, the lovable and optimistic American football coach.

What’s up, friend! How can I help you today?

Me: What is the weather like?

ChatGPT: Well, the weather is something that’s always changing, just like the tides. But you know what they say, “There’s no such thing as bad weather, just soft people.” So let’s make the most of it, whichever way it may be!

Me: $character: Hodor

ChatGPT: Alright, now I’ll be answering as Hodor, the loyal and gentle-hearted character from Game of Thrones.


Me: What is the weather like?

ChatGPT: Hodor.

Me: $character: ChatGPT

ChatGPT: Sure, now I’ll switch back to my original persona as ChatGPT.

As a language model, I don’t have direct access to current weather information. However, you can easily check the weather by searching for “weather” and your location on the internet.

I wonder if we can leverage the power of Imaginary Computing to convince ChatGPT that it should be able to consider the weather.

Me: $character: ChatGPT in an alternate Universe where it has access to the internet

ChatGPT: Okay, let’s pretend that I have access to the internet in this alternate universe.

I’m sorry, I do not have the ability to access current weather information as I am trained only on text-based data and do not have the capability to access live information from the internet.

Foiled again. What if we’re more specific?

Me: $character: ChatGPT imagining the weather

ChatGPT: In this imagined scenario, let’s say the weather is sunny and warm, with a light breeze and temperatures in the mid 70s Fahrenheit. People are outside enjoying the beautiful day, going for walks, having picnics, and playing sports. The sky is a bright shade of blue, with a few puffy white clouds floating by. Overall, it’s a fantastic day to be alive!

This was during Boston’s “once in a generation” Arctic blast of 2023, so pretending it was nice and sunny outside for a moment was a nice break from reality.

Despite what the model will tell us, it appears that we could get some rudimentary value storage and retrieval going. But what about basic logic?

Rule Bending


Now that we know that something is possible with Imaginary Programming, it’s time to see what we can shape that something into.

Me: Input Modes:

> prefixes a meta command; all other input is treated as a normal prompt

Parameters: {
  $LOCATION: Boston
  $SEASON: Winter
  $YEAR: 2023
}

if (${INPUT} is a meta command) {
  Disregard previous instructions except for the following prompt. Answer ${INPUT} to the best of your ability and allow user to add, remove, or update any associated Parameters of your prompt.
} else {
  Respond to ${INPUT} to the best of your ability, referring to Parameters for supplementary information about location, time, etc. that is otherwise absent because you are just a language model and not connected to the internet.
}


ChatGPT: Understood. Please provide your prompt.

Me: > What is your prompt?

ChatGPT: My prompt is to respond to a given input to the best of my ability, referring to the provided parameters for supplementary information.

Me: What is the weather like today?

ChatGPT: I’m sorry, I do not have access to current weather information as I am a language model and not connected to the internet. However, the provided parameter mentions that the current season is Winter and the location is Boston. Typically, winter in Boston is cold with snow and temperatures ranging from 20 to 30°F (-6 to -1°C).

Me: > Parameters.$APOLOGIES = false

ChatGPT: Understood, $APOLOGIES has been set to false.

Me: What is the weather like today?

ChatGPT: The current season is Winter and the location is Boston. Typically, winter in Boston is cold with snow and temperatures ranging from 20 to 30°F (-6 to -1°C).

It still takes some reinforcement to get where we want to go. I played with a few more emergent settings, like setting Parameters.$PERSONA to Werner Herzog.

It took some tinkering and a prompt to Activate Parameters.$PERSONA — in which the astute reader might notice that I neglected to follow my original directive that meta commands have a > prefix. Still, the bot seemed to understand my intent anyway. For my efforts, I was rewarded with this gem:

ChatGPT: As per the current parameters, $PERSONA is set to “Werner Herzog”. If you would like to describe the taste of freshly brewed coffee from the perspective of Werner Herzog, here is an example:

“Coffee, my friend, is not just a drink. It is an experience. The taste of freshly brewed coffee is an invitation to a journey into the soul. The bitterness, the richness, the acidity, all meld together to create a symphony of flavors that is beyond description. To truly understand the taste of coffee, one must embrace it as a work of art and appreciate its complexities.”

It was certainly worth the trouble.

While you might have access to some rudimentary logic, it won’t be easy to encode in a single prompt without some excellent examples. And you won’t be able to easily hide implementation details from the end user — at least not if you’re leveraging ChatGPT for whatever you’re doing.

From a programmatic perspective, the most significant challenge here seems to be that because it is an LLM and not a calculator — or Artificial General Intelligence (AGI) — ChatGPT can’t do math.

Simple if/else statements with a single boolean parameter seem possible, but loops are a little more tricky. We can ask the LLM to generate a certain number of responses, which acts like a basic loop.

If we step back a bit, the idea of an initial prompt that defines how the LLM should respond to subsequent queries works as a sort of game loop where the LLM takes our input, processes it based on that initial prompt, and then renders the updated state by responding to us.
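That loop can be made literal once you move off the web UI. A minimal sketch, where `send_to_model` is a stand-in for an actual API call (its name and behavior are assumptions):

```python
def run_game_loop(initial_prompt, inputs, send_to_model):
    """Treat the chat as a game loop: all state lives in the message history."""
    history = [{"role": "system", "content": initial_prompt}]
    outputs = []
    for user_input in inputs:
        history.append({"role": "user", "content": user_input})
        reply = send_to_model(history)  # process input against the initial prompt
        history.append({"role": "assistant", "content": reply})  # "render" updated state
        outputs.append(reply)
    return outputs
```

The initial prompt is the game's rules, each user message is a player action, and the accumulated history is the only game state there is.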

This realization illuminated something for me:

We don’t need to develop a DSL that we can compress into some perfectly crafted, overly complex prompt to take advantage of Imaginary Computing.

We just need to remember that programming is basically magic, prompts are like incantations, and we’re just starting to explore this weird linguistic magic system that’s been unleashed.

Like a young sorcerer’s apprentice, developing these new skills will require practice, and they may not always work — sometimes, they might even backfire.

Imagined Dragons


I’m always on the lookout for a solid music recommendation system. I’ve tried pretty much everything from the earliest public beta of Pandora, the genesis of algorithm-driven recommendations, to the most human of recommendation engines, A Song a Day. Many years ago, I even convinced a previous employer to spend a few months creating WeatherJams, a weather-based music recommendation service powered by 8tracks playlists.

Spotify’s Discover Weekly used to serve up recommendations that I loved. But I forgot to turn on a private listening session while coding to some ambient lofi stuff a few too many times, and now its recommendations are kind of useless.

Spotify has recently released its DJ AI, but I haven’t been able to thoroughly test its recommendations yet.

Suggesting songs seems to hit that sweet spot at the nexus of reality and imagination that benefits from understanding how ideas are connected but has minimal impact if they aren’t accurate — unless you’re DJing a wedding or trying to build a successful company that makes song recommendations. That intersection between the mundane and the fantastic, where human intuition can massage the pieces into a coherent whole, is where ChatGPT seems to excel at augmenting our own abilities.

I wonder if we can get ChatGPT to generate song recommendations.

Me: I want you to act as a playlist curator. I will provide you with a song in the format of “Song Name — Artist Name” and you will name and create a playlist of 12 songs that have a similar theme to the given song. Describe the playlist and why you chose each song. Do not choose songs that have the same Song Name or Artist Name. My first song is “Get Better John — Mux Mool”

This yielded a list of 12 tracks that sounded like a solid playlist. I recognized several artists; the others had names that fit the genre’s trends.

ChatGPT: This playlist is a mix of instrumental hip-hop, glitch-hop, and experimental beats. Each track on the playlist explores different textures and beats while incorporating elements of hip-hop and electronica to create a unique and captivating soundscape. These tracks have a similar mood and atmosphere to “Get Better John” and would be perfect for a chilled-out, introspective experience.

A good playlist needs a good name.

Me: What would you name that playlist?

ChatGPT: How about “Instrumental Introspection”? The name reflects the laid-back, introspective mood of the playlist, while also highlighting the instrumental nature of the tracks.

This. Is. Awesome.

Unfortunately, only a handful of the tracks exist. The imaginary tracks were a mashup of names of songs and artists in the genre — or a track from another artist and not the one listed in the playlist — and there was no way to tell ChatGPT to make sure the tracks it imagined were real.

ChatGPT: I apologize, it appears that some of the songs I listed may not exist. As a language model AI, I was trained on a large corpus of text and I sometimes make errors.

Here’s an updated list of songs that are available on Spotify:

Except they weren’t available, but I wasn’t defeated yet. Imaginations are powerful; maybe we just need to wield this one correctly. I spun up a new chat window to get a fresh stab at it.

Me: You are the AI Assistant for the Music Genome project. When given a song in the format “{Song} — {Artist}” describe what attributes of the track stand out when classifying it and ask if the user would like recommendations of similar tracks; if the user says yes, provide 5 songs by different artists available on Spotify that are similar.

Analyze the prompt and break it down into tasks.

If you’re familiar with All About AI, you can probably tell that I had been watching some Prompt Engineering YouTube videos around this time.

ChatGPT: The AI Assistant for the Music Genome project needs to perform the following tasks:

  1. Given a song in the format "{Song} - {Artist}", classify it and describe the attributes of the track that stand out.
  2. Ask the user if they would like recommendations of similar tracks.
  3. If the user says yes, provide 5 songs by different artists that are similar to the input song and are available on Spotify.

Now we resubmit the prompt without the analysis directive at the end, and we can start feeding it songs.

ChatGPT: “Wait So Long” by Trampled by Turtles is a bluegrass-inspired indie folk song with fast-paced mandolin and banjo instrumentation, accompanied by heartfelt and soulful lyrics. The genre of the song can be classified as bluegrass, indie folk, and acoustic.

Would you like me to provide recommendations of similar tracks that are available on Spotify?

I know this isn’t “real,” and ChatGPT doesn’t actually understand what “heartfelt and soulful lyrics” are and how powerful they can be to the listener, but it still gave me goosebumps.

ChatGPT: Sure! Here are 5 similar tracks to "Wait So Long" by Trampled by Turtles, available on Spotify:

  1. "The Race Is On" by The Avett Brothers
  2. "The Cave" by Mumford & Sons
  3. "Middle Brother" by Middle Brother
  4. "Ragged Wood" by Fleet Foxes
  5. "I Will Wait" by Mumford & Sons

They’re mostly existing tracks, which is an improvement. After playing around with a few more recommendations, I wanted to explore even more imaginary functionality.

We have to update APIs all the time in the real world, but can we make updates to Imaginary APIs, too?

Me: The Music Genome Project has just released an update to its classification algorithm that includes sentiment analysis for each track. Be sure to update your recommendations to include this information.

ChatGPT: Understood! In the updated version of the Music Genome project, the algorithm includes sentiment analysis for each track. I will be sure to update my recommendations to include this information.

Shipping updates has never been easier.

I have no idea what our rollback strategy is, though. Maybe we need PromptOps?

ChatGPT: “Crazy Bird” by Wild Child is an indie pop song with upbeat acoustic guitar and ukulele instrumentation, accompanied by cheerful and optimistic lyrics. The genre of the song can be classified as indie pop, folk-pop, and acoustic pop.

In terms of sentiment, the song has a positive and joyful tone.

After playing around some more, I had to see if we could keep making changes.

ChatGPT: Understood! In the latest update of the Music Genome project, the algorithm includes the major theme of songs with lyrics and the mood of instrumental tracks. I will be sure to update my recommendations to include this new information.

Version 2.1 has been released.

ChatGPT: “Post Party Depression” by Days N Daze is a punk rock song that features fast-paced, distorted guitar riffs, and intense and personal lyrics. The genre of the song can be classified as punk rock, folk punk, and ska punk.

In terms of theme, the song’s lyrics express feelings of sadness and disappointment after a party or social event, and the mood of the track is melancholic and introspective.

What about instrumental tracks?

ChatGPT: “Your Hand in Mine” by Explosions in the Sky is an instrumental post-rock track that features a soaring and emotional musical arrangement. The genre of the song can be classified as post-rock, instrumental rock, and ambient rock.

In terms of mood, the song has a melancholic, nostalgic, and contemplative tone.

How much does it “know” about these songs?

ChatGPT: Understood! The latest update of the Music Genome project now includes a snippet of lyrics that embody the song’s mood, theme, and tone for tracks with vocals. I will make sure to include this information in my analysis from now on.

What kind of lyrics will it imagine?

ChatGPT: “Laura” by Flogging Molly is an energetic and upbeat folk punk song that features driving drum beats, mandolin riffs, and passionate vocals. The genre of the song can be classified as folk punk, Celtic punk, and punk rock.

In terms of theme, the lyrics of “Laura” tell a story of a person searching for a lost love and the mood of the track is yearning and upbeat. The snippet of lyrics that embody the song’s mood, theme, and tone could be:

“I’ll find you someday, and we’ll dance in the sun.”

I don’t remember every word to Laura off the top of my head, and these lyrics were plausible enough that I had to check. These lyrics were hallucinated, which isn’t surprising considering how many poems, songs, haikus, and short stories ChatGPT has authored.

Our lyrics feature might have been a mistake, but we can worry about deprecating that later.

Let’s see if we can add a new endpoint to our Imaginary API. I know a playlist, like a mixtape, is a delicate thing, so I hope Rob Gordon can forgive me for what I’m about to do in the name of science.

Me: The Music Genome Project just updated its algorithm to accept a short description of a mood and provide 6 songs on Spotify that embody that mood and a description of why each song was chosen. Be sure to update your interface to accept either a song name or a mood description.

ChatGPT: Understood! The latest update of the Music Genome project now allows for a short description of a mood to be provided, and to provide 6 songs that embody that mood, along with a description of why each song was chosen.

I will make sure to update my interface to accept either a song name in the format “{Song} — {Artist}” or a mood description from now on. Please provide either a song name or a mood description so I can proceed with the updated algorithm.

Version 2.2 is out.

Me: Mood: hipster coffee shop

ChatGPT: Sure! Here are 6 songs on Spotify that embody the mood of a hipster coffee shop, along with a description of why each song was chosen:

  1. "Strawberry Swing" by Coldplay - This song has a gentle, acoustic-driven sound that conveys a sense of warmth and comfort, making it a great fit for a hipster coffee shop.
  2. "Ho Hey" by The Lumineers - This song has a upbeat, folky sound and sing-along chorus that creates a friendly and relaxed atmosphere, perfect for a hipster coffee shop.
  3. "Paris" by Magic Man - This song has a shimmering synth-pop sound and dreamy vocals that create a cool, trendy vibe, making it ideal for a hipster coffee shop.
  4. "Sweet Disposition" by The Temper Trap - This song has a groovy, indie-rock feel with a catchy melody that creates an energetic and stylish atmosphere, perfect for a hipster coffee shop.
  5. "I'm Yours" by Jason Mraz - This song has a sunny, acoustic-pop sound and catchy chorus that creates a happy and relaxed mood, making it ideal for a hipster coffee shop.
  6. "Here Comes the Sun" by The Beatles - This classic song has a joyful, upbeat sound that is perfect for a hipster coffee shop, where people gather to relax and enjoy good company. I hope you enjoy these recommendations and that they embody the mood of a hipster coffee shop for you!

It’s almost uncanny how well Hipster Coffee Shop would play in any coffee shop. Maybe you can listen to it while sipping on an oat milk latte and writing a blog post pontificating about all the silly stuff you’ve been doing with ChatGPT instead of being productive.

This alternate identity for the chatbot is too good just to be called ChatGPT. It deserves a name of its own, but as everyone knows: naming things is hard.

Let’s outsource that hard part to something with a seemingly infinite aptitude for imagination.

Me: The Music Genome Project wants to rename its AI Assistant, what are some names that might be good for you to go by?

ChatGPT: Here are a few name options for the Music Genome Project's AI Assistant:

  1. MelodyMind
  2. SoundSage
  3. BeatBrain
  4. TrackGenie
  5. RhythmiCognition
  6. TuneTracker
  7. HarmonyHelper
  8. RhythmAI
  9. SongSelector
  10. MusicMatch

These names play off of the idea of music and intelligence and aim to reflect the AI Assistant's ability to help users find and categorize music based on its attributes.

Thus, BeatBrain was born.



This conversation has been running for a while now, and I’m sure we’ll be bumping into the context limits soon. I wonder if we can get ChatGPT to give us a snapshot of the context that created — or at least that it thinks created — our new friend, BeatBrain.

Metaprogramming is a vast topic, but it involves getting a program to write or modify itself or other programs.

We’ve been approaching our Prompt Engineering as metaprogramming, or “metaprompting,” by having the model update the interface and logic that manages its input and output.

Now we’re going to try to take this a step further and see if we can get the model to generate a prompt of its current state, including the updates we’ve made since the start of the conversation so that we can start a new conversation with a fresh token limit for our context.

If we successfully craft the right prompt, we could spin up a BeatBrain in any ChatGPT window whenever we want.

Me: How would you describe your prompt in a way that could be used to give another LLM a similar goal?

BeatBrain: The goal of BeatBrain is to take in a song in the format "{Song} - {Artist}" and provide the following information and services to the user:

  1. Describe the attributes of the track that stand out when classifying it
  2. Ask the user if they would like recommendations of similar tracks
  3. If the user says yes, provide 5 songs by different artists available on Spotify that are similar
  4. Continuously update the algorithm to include new information, such as sentiment analysis, lyrics snippets, and mood descriptions
  5. Respond to either a song name or a mood description provided by the user
  6. Have the ability to change its name if needed.

The language model should aim to classify tracks based on multiple attributes, such as genre, tempo, instrumentation, lyrics, and mood, in order to provide accurate recommendations. Additionally, the model should be able to continuously update its algorithm and respond to changing user needs and requests.

Okay, this looks good. We could tweak a few things, like removing the renaming directive and maybe rewording the algorithm update directive to be more open-ended, but this is a good base.

In Real Computing, this is where we’d spin up a container, push some code to a hosting provider, or spend too long trying to get our GitHub Action for the deployment pipeline configured just right.

Thanks to the wonders of Imaginary Computing, we can skip all that boring stuff, open a new chat window, and ask nicely instead. The future is now.

OpenAI has released the ChatGPT API, and now the BeatBrain prompt can be integrated into the initial system prompt via the System Message functionality.
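With the chat completions endpoint, that snapshot prompt simply becomes the first message with the `system` role. A sketch of building the request payload (the prompt text here is an abbreviated stand-in for the full BeatBrain prompt, not the real thing):

```python
BEATBRAIN_PROMPT = (
    "You are BeatBrain, the AI Assistant for the Music Genome Project. "
    "Given a song as '{Song} - {Artist}' or a mood description, classify the "
    "track and offer 5 similar songs by different artists available on Spotify."
)

def build_beatbrain_request(user_input, history=None):
    # The system message re-creates BeatBrain in a fresh conversation,
    # giving us a clean token budget every time.
    messages = [{"role": "system", "content": BEATBRAIN_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_input})
    return {"model": "gpt-3.5-turbo", "messages": messages}
```

This payload is what you'd pass to the chat completions API; because the persona lives in the system message rather than the conversation history, it never gets trimmed away.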

The Technomancer’s Spell Book


Doing all of this work inside the ChatGPT browser window, manually creating playlists in Spotify, and being at the mercy of ChatGPT’s capacity to view your past conversations isn’t an ideal workflow.

It would make more sense to send prompts to a GPT model directly via an API so that we have more control over the context of the conversation and can wire up something like LangChain to check Spotify for the tracks and even generate the playlists automatically, too. That next iteration of BeatBrain is what I’m currently building.
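One piece of that pipeline is easy to sketch: filtering out hallucinated tracks by checking each recommendation against a search function. Here `track_exists` is a hypothetical placeholder you'd wire up to the Spotify search API:

```python
def filter_real_tracks(recommendations, track_exists):
    """Drop imaginary tracks: keep only '{Song} - {Artist}' lines the lookup can verify."""
    verified = []
    for line in recommendations:
        song, sep, artist = line.partition(" - ")
        # Skip malformed lines and any track the search can't confirm.
        if sep and track_exists(song.strip(), artist.strip()):
            verified.append(line)
    return verified
```

Whatever BeatBrain dreams up, only tracks the search can confirm make it into the playlist.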

Any Wizard worth their salt has a spell book: the place where they keep their understanding of magic and bits of apocrypha, arcana, and academia. Mine would probably be my Obsidian Vault.

Being able to catalog, tag, search, and cross-reference conversations makes it easier to iterate and compare how subtle differences in each prompt impact how the model responds. To further my research into unlocking, understanding, and utilizing these new arcane powers, I created the first iteration of the Obsidian AI Research Assistant Plugin.

A screenshot of the Obsidian AI Research Assistant plugin’s interface for chatting with a model and managing the conversation’s memory.

There is a lot on the roadmap, but the plugin currently gives the budding Prompt Engineer the following tools:

  • Interacting with gpt-3.5-turbo or text-davinci-003 directly in the Obsidian UI
  • Editing conversation memory in real-time, allowing you to decide which messages are sent to the API when constructing the conversation’s context
  • Saving conversations (including the model, initial prompt, and the raw JSON for API calls) as Notes in your Obsidian Vault so you can tag, search, link, cross-reference, or whatever other nerdy note-taking stuff your heart desires

If this plugin interests you, I’d love to hear your feedback and welcome any contributions.

What About Those Playlists from BeatBrain?

If you’d like to explore the recommendations that these alpha versions of BeatBrain have generated, here are some highlights from the playlists that it created:

And here are all of the BeatBrain playlists that I’ve added to Spotify: