In today’s world, for many people, conversing with AI has become as routine as discussing one’s coffee preferences with a barista (or it soon will be!). Yet, here lies an irony: the more we interact with AI, the more elusive our understanding of these conversations becomes.
This irony is, in essence, a modern twist on the “black box” dilemma that has perplexed the ML community for years.
The “black box” problem refers to the opaque decision-making processes of ML models, including large language models (LLMs), where the rationale behind any given response is shrouded in complexity. Despite advances in technology, the inner workings of these models, governed by billions, and soon trillions, of parameters, remain largely inscrutable.
Their decision-making is a puzzle, complicated by nonlinear interactions that defy straightforward interpretation.
Prompting, the buzzword of 2023, doesn’t make it any better: we now have obscured layers of communication activated with every prompt. What we see, the prompt we type, is merely the surface.
Beneath lies a hidden dialogue: an augmented system prompt, a complex, coded conversation the model conducts with itself, out of our sight. And who knows what a model whispers to itself?
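To make this concrete, here is a minimal sketch of what typically happens between your keystrokes and the model. Everything in it is a hypothetical stand-in: the system prompt, the function name, and the message format (loosely modeled on common chat APIs). Real products keep their actual templates private.

```python
# Hypothetical sketch: how the prompt you type might be wrapped before it
# reaches the model. The system prompt and format below are stand-ins;
# real products keep their actual templates hidden.

HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse harmful requests. "
    "Answer concisely unless asked otherwise."
)  # the user never sees this

def build_payload(user_prompt: str, history: list[dict]) -> list[dict]:
    """Assemble the full message list the model actually receives."""
    return (
        [{"role": "system", "content": HIDDEN_SYSTEM_PROMPT}]
        + history                                      # prior turns, also invisible here
        + [{"role": "user", "content": user_prompt}]   # the only part you typed
    )

payload = build_payload("Why is the sky blue?", history=[])
print(payload)  # the model sees all of this, not just your question
```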
So, if you were confused about prompting amid the avalanche of articles, blogs, and tutorials about it — you should be.
As Ethan Mollick’s research reveals, the most effective prompts are often counterintuitive: imaginative scenarios, such as pretending to navigate a Star Trek episode or a political thriller, can outperform traditional logical or direct prompts.
But his research also shows that this isn’t consistent and may change with a new version of the model. He notes the futility of seeking a universal “magic phrase” for AI interaction, the effectiveness of specific prompting techniques like adding context, few-shot learning, and Chain of Thought, and the significant impact that prompts can have on AI performance.
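For illustration only, here is how those techniques might look side by side on one toy question. The phrasings are my own inventions, not Mollick’s, and none of them is a guaranteed “magic phrase.”

```python
# Toy illustration of the prompting techniques mentioned above, applied to
# one question. The phrasings are invented for this sketch.

direct = "What is 17% of 240?"

added_context = (
    "You are a careful math tutor helping a student prepare for an exam. "
    "What is 17% of 240? Show your calculation."
)

few_shot = (
    "Q: What is 10% of 50?\nA: 5\n"
    "Q: What is 25% of 80?\nA: 20\n"
    "Q: What is 17% of 240?\nA:"
)

chain_of_thought = "What is 17% of 240? Let's think step by step."

# Each string would be sent to the model in place of the bare question;
# the last three often improve reliability, but nothing is guaranteed.
```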
In my own experience, and I’ve been using AI a lot, the most straightforward prompts, or “magic words,” are often surprisingly effective.
How to explain this? A few years back, Explainable AI (XAI) was heralded as the solution to the “black box” issue, with organizations like DARPA leading the charge (their XAI Toolkit has not been updated since 2021). However, the buzz around XAI seems to have dimmed, overtaken by a broader focus on Responsible AI. Is Responsible AI the solution?
So, that’s what we end up with:
How do machines make decisions? — We don’t know!
How to talk (prompt) to them? — We don’t know that either!
But please keep shipping us new, larger (though we’ll also take smaller) models! Why? — We don’t know! But we can’t stop.
I write a weekly analysis of the AI world in the Turing Post newsletter. Subscribe for free using the button below and be the first to read the latest stories.
The goal of Turing Post is to equip you with comprehensive knowledge and historical insights, so you can make informed decisions about AI and ML. Join over 50,000 readers from the main AI labs, forward-thinking startups, and major universities 👇🏼