Understanding the Impact of OpenAI's GPT-4 Turbo on "Wrappers"

by Queen Badass, November 7th, 2023

Too Long; Didn't Read

OpenAI unveiled a slate of developer-focused updates at their inaugural DevDay event. New releases promise major improvements in AI capabilities through features like the new GPT-4 Turbo model and the Assistants API. A sector that will likely see a significant impact from these updates is the ecosystem of so-called "GPT wrappers." Dive in to know more.

OpenAI has been making waves in the artificial intelligence world with the rapid development and release of powerful language models like GPT-4 and its consumer incarnation, ChatGPT. Today, they unveiled a slate of developer-focused updates at their inaugural DevDay event.


These new releases promise major improvements in AI capabilities through features like the new GPT-4 Turbo model and the Assistants API.


A sector that will likely see a significant impact from these updates is the ecosystem of so-called "GPT wrappers."


“GPT wrappers” are third-party services that provide extended functionality by wrapping the base GPT models with additional code.


With ChatGPT continuously evolving and OpenAI's API gaining powerful new features, wrappers must adapt quickly or risk becoming obsolete while still in their infancy.


In this article, I’ll dive into OpenAI's latest offerings and analyze what they could mean for the future of ChatGPT wrappers like my own startup, Olympia. We'll speculate on how Olympia and similar services might evolve as OpenAI continues its rapid pace of development. Stick around for an inside look at the innovations shaping the AI landscape.

DevDay Announcements Recap

OpenAI revealed several major new offerings at DevDay that promise to push AI capabilities even further toward their stated goal of incrementally transforming the world via agents and AGI (Artificial General Intelligence).

Here are some of the highlights:

GPT-4 Turbo

The star of the show was GPT-4 Turbo, the latest iteration of OpenAI's flagship model. GPT-4 Turbo handles a much larger conversational context: up to 128,000 tokens, compared to the 8,000-token limit of the current GPT-4 model. That window fits the equivalent of roughly 300 printed pages of text before a conversation runs out of space.
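If you're curious how much text actually fits into a 128,000-token window, OpenAI's open-source tiktoken library will count tokens for you. A minimal sketch (the file name is just a placeholder):

```python
# pip install tiktoken
import tiktoken

# GPT-4-family models use the cl100k_base encoding.
enc = tiktoken.get_encoding("cl100k_base")

text = open("conversation_history.txt").read()  # placeholder file
num_tokens = len(enc.encode(text))

print(f"{num_tokens} tokens used of the 128,000-token GPT-4 Turbo window")
```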


Turbo is also faster and offers improved memory and reasoning for more natural, human-like exchanges. On the developer side, it gains a JSON output mode and upgraded function calling, including the ability to request multiple function calls in a single response, in sequence or in parallel.
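To give a flavor of these developer features, here is a minimal sketch using the OpenAI Python SDK and the GPT-4 Turbo preview model name announced at DevDay (gpt-4-1106-preview). The get_weather tool is a made-up example, not part of OpenAI's API:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview announced at DevDay
    messages=[{"role": "user", "content": "Compare the weather in Kyiv and Austin."}],
    tools=tools,
)

# With parallel function calling, one response may request several tool calls.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)

# JSON mode: force the model to emit syntactically valid JSON.
json_reply = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    messages=[{"role": "user",
               "content": "Return a JSON object with keys 'city' and 'summary' for Kyiv."}],
)
print(json_reply.choices[0].message.content)
```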


These improvements make GPT-4 Turbo OpenAI's most capable conversational AI to date and put it further ahead of competitors and open-source models.

Assistants API

OpenAI unveiled their Assistants API, which lets developers integrate stateful chat completions without having to build that functionality themselves. In addition to statefulness, it adds built-in retrieval, a class of functionality that, until now, almost every wrapper application has had to implement itself.


Alongside the Assistants API, DevDay also brought new endpoints for speech recognition and text-to-speech synthesis, while the assistants' built-in retrieval tool covers access to external knowledge.
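For a sense of how much boilerplate this removes, here is a minimal Assistants API sketch as the beta looked at launch (file uploads and error handling omitted; the assistant name and prompts are illustrative):

```python
import time
from openai import OpenAI

client = OpenAI()

# Create an assistant with the built-in retrieval tool enabled.
assistant = client.beta.assistants.create(
    name="Docs Helper",
    instructions="Answer questions using the attached documents.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],  # server-side retrieval, no custom vector store needed
)

# Threads hold conversation state on OpenAI's side.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Summarize our refund policy."
)

# Runs execute the assistant against the thread; poll until it finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Print the conversation, newest message first.
for message in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(message.role, message.content[0].text.value)
```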

Reduced Prices

In response to developer feedback about the high cost of OpenAI's language models, DevDay brought some much-needed relief: one of the most welcome announcements was a significant reduction in prices across the platform, aimed at passing savings on to developers.


Altman revealed reduced pricing for the flagship models, GPT-4 Turbo and GPT-3.5 Turbo. GPT-4 Turbo input tokens are now 3x cheaper than GPT-4 at $0.01 per 1,000 tokens, and output tokens are 2x cheaper at $0.03 per 1,000. GPT-3.5 Turbo input tokens are 3x cheaper than the previous 16K model at $0.001 per 1,000, and output tokens are 2x cheaper at $0.002 per 1,000.


Developers previously using GPT-3.5 Turbo 4K get a 33% reduction on input tokens, now $0.001 per 1,000. Along with efficiency improvements, these changes make OpenAI's tech much more affordable.
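To put those numbers in perspective, here is a rough back-of-the-envelope comparison using the announced per-1,000-token prices; the token counts are arbitrary example values:

```python
# Announced DevDay pricing, USD per 1,000 tokens.
PRICES = {
    "gpt-4":         {"input": 0.03,  "output": 0.06},
    "gpt-4-turbo":   {"input": 0.01,  "output": 0.03},
    "gpt-3.5-turbo": {"input": 0.001, "output": 0.002},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of one request, given token counts."""
    p = PRICES[model]
    return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

# Example: a 6,000-token prompt with a 1,000-token reply.
for model in PRICES:
    print(f"{model}: ${cost(model, 6_000, 1_000):.3f}")
# gpt-4: $0.240, gpt-4-turbo: $0.090, gpt-3.5-turbo: $0.008
```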

Impact on GPT Wrappers

OpenAI's latest offerings promise to advance the current state-of-the-art in conversational AI significantly. For GPT wrapper services built on OpenAI's foundation models, these upgrades represent exciting new capabilities to integrate into their products.


With its boosted memory, reasoning, and conversational context, Turbo overcomes many of the limitations that wrapper services currently try to work around. The expanded context size alone is a game-changer - no longer will conversations hit a brick wall after relatively few exchanges.


We built Olympia with the goal of offering a friendlier, more capable conversational experience than our main competitor, ChatGPT. Along with hundreds of other GPT wrappers out there, we can now leverage Turbo's better performance and lower cost.


Our well-honed techniques like sentiment tuning and personality injection pair perfectly with Turbo's enhancements to create more human interactions between our assistants and users.
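Olympia's internals aren't the subject of this post, but the basic idea behind personality injection is simple: prepend a persona-defining system message to every exchange. A generic sketch (the persona text is illustrative, not Olympia's actual prompt):

```python
from openai import OpenAI

client = OpenAI()

# Persona text is illustrative only; real products tune this heavily.
PERSONA = (
    "You are Ava, a warm, upbeat business consultant. "
    "Answer concisely, acknowledge the user's feelings, and avoid jargon."
)

def reply(history: list[dict], user_message: str) -> str:
    """Send the running conversation plus a persona system prompt to GPT-4 Turbo."""
    messages = [{"role": "system", "content": PERSONA}] + history + [
        {"role": "user", "content": user_message}
    ]
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
    )
    return response.choices[0].message.content

print(reply([], "Our launch slipped again and the team is deflated."))
```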

Looking Ahead

With OpenAI's startling pace of innovation showing no signs of slowing down, these new offerings are likely just a glimpse of what's to come. Future improvements to foundation models like GPT-4 Turbo and advanced training techniques will unlock even more powerful conversational AI.


The path forward is to track OpenAI's progress closely and integrate each new feature into our products quickly. Wrappers that adopt cutting-edge updates fast will be able to stay ahead of the pack, and of ChatGPT itself.


Specifically, Olympia may want to prepare for integrations with future upgrades like:


  • Even larger context capacity - Longer contexts mean our assistants can “remember” more about each user and their business across conversations.


  • Multimodal capabilities - Models that can understand and generate image, video, and audio content alongside text can open up engaging new interaction possibilities, such as AI-backed graphic designers.


  • Specialized domain training - As seen with the Assistants API, domain-specific modules tailored to focused topics will lead to more knowledgeable AI.


We're just beginning to glimpse the creative, productive potential of this technology. Advanced AI is poised to become an integral part of our lives at a pace that few people in the world could have imagined. The next few years promise to be an exciting ride!