In a landmark event for artificial intelligence, OpenAI’s developer conference saw the unveiling of GPT-4 Turbo, the latest iteration of the groundbreaking AI model, GPT-4. OpenAI has taken a giant leap forward with a model that promises enhanced power and cost-efficiency, setting a new standard for the AI industry.
Let’s look at what OpenAI presented:
GPT-4 Turbo comes in two distinct versions: one that analyzes text only, and a cutting-edge variant that interprets both text and imagery. The text-only model is already accessible via the API in preview form, and both versions are slated for full release in the coming weeks.
GPT-4 Turbo is competitively priced at $0.01 per 1,000 input tokens (approximately 750 words) and $0.03 per 1,000 output tokens. Pricing for the image-processing capability will depend on image dimensions, marking a strategic shift in OpenAI’s pricing paradigm. The new model thus delivers more capability for less money, and even the updated GPT-3.5 Turbo now costs less than its predecessor.
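To make the pricing concrete, here is a minimal sketch using OpenAI’s Python SDK, assuming the `gpt-4-1106-preview` identifier for the text-only preview model; the prompt is purely illustrative, and the final lines simply apply the per-token rates quoted above to the usage the API reports back.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Call the text-only GPT-4 Turbo preview model.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "user", "content": "Summarize GPT-4 Turbo in one sentence."},
    ],
)
print(response.choices[0].message.content)

# Back-of-the-envelope cost at the announced rates:
# $0.01 per 1K input tokens, $0.03 per 1K output tokens.
usage = response.usage
cost = usage.prompt_tokens / 1000 * 0.01 + usage.completion_tokens / 1000 * 0.03
print(f"Approximate cost: ${cost:.4f}")
```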
By the way, I’ll be reviewing the new features of GPT-4 Turbo with tutorials on how to use it in my weekly newsletter ‘AI Hunters.’ There, you can also find the newest and best ideas on how to use AI in your personal life and business. Subscribe, it’s absolutely free!
OpenAI’s GPT-4 Turbo boasts a remarkable series of enhancements:
- An extended context window of 128K tokens, enough to hold the equivalent of more than 300 book pages without losing track of the text.
- Improved accuracy when working with lengthy texts.
- A developer-friendly JSON mode that guarantees responses in valid JSON format (shown in the sketch right after this list).
- The ability to call multiple functions in parallel within a single request.
- A seed parameter for reproducible outputs (also covered in the sketch below).
- The upcoming addition of logprobs to the API.
- Built-in retrieval, which lets developers feed documents directly into the platform, a boon for startups integrating functionality like chatWithPDF.
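As a rough sketch of the JSON mode and seed features from the list above (same SDK and preview model name as before; the prompt and seed value are arbitrary illustrations):

```python
from openai import OpenAI

client = OpenAI()

# response_format={"type": "json_object"} switches on JSON mode, which
# guarantees the reply parses as JSON (the prompt must mention JSON).
# seed gives best-effort reproducibility across identical requests.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},
    seed=42,
    messages=[
        {"role": "system", "content": "Answer in JSON with keys 'title' and 'summary'."},
        {"role": "user", "content": "Describe the 128K context window."},
    ],
)
print(response.choices[0].message.content)  # a valid JSON string
print(response.system_fingerprint)          # tracks backend changes that affect determinism
```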
GPT-4 Turbo’s knowledge now extends to April 2023, and it can process images through the API, significantly broadening its utility and scope (a second sketch below shows image input).
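And a sketch of image input, assuming the vision-enabled variant is exposed as `gpt-4-vision-preview`; the image URL is a placeholder:

```python
from openai import OpenAI

client = OpenAI()

# The vision-enabled variant takes a list of content parts, mixing text
# and image URLs in a single user message.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```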
In tandem with the GPT-4 Turbo release, OpenAI introduced the integration of DALL-E 3 and text-to-speech capabilities with six distinct voices, expanding the creative horizons of the API.
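A short sketch of both additions, assuming the `tts-1` speech model and `dall-e-3` identifiers from the announcement; the voice name, input text, and image prompt are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# Text-to-speech: "tts-1" with one of the six built-in voices ("alloy" here).
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="GPT-4 Turbo was unveiled at OpenAI's developer conference.",
)
speech.stream_to_file("announcement.mp3")  # save the generated audio

# DALL-E 3 image generation through the same client.
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a robot reading a newspaper",
    n=1,
)
print(image.data[0].url)  # temporary URL of the generated image
```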
OpenAI has also rolled out fine-tuning for GPT-4, albeit to a select group of users initially. The Custom Models program underscores a commitment to collaborating with enterprises to tailor the fine-tuning process to specific business needs.
The UI for ChatGPT has undergone a significant transformation, now allowing the creation of specialized chatbots (GPTs). These can combine custom prompts, auxiliary files, and a suite of tools and functions, including calls to external services, complete with authentication support.
Users will soon be able to share their chatbots, granting access to specific scenarios such as medical appointments or Q&A sessions. The end of the month will see the launch of the GPT Store, a marketplace reminiscent of Apple's App Store, where developers can publish their AI assistants after a human review and benefit from a revenue-sharing model.
OpenAI showcased a bot capable of processing PDF ticket files and Airbnb bookings, displaying the information interactively on-screen. This functionality underscores not just the AI's intelligence but also its ability to interact with backend API functions, formulating responses based on the content it processes.
Promising even more acceleration in the near future, OpenAI is setting a rapid pace for advancements in AI. With the introduction of GPT-4 Turbo, OpenAI not only enhances the model's capabilities but also democratizes access to cutting-edge technology, paving the way for a new era of innovation and application in the field of artificial intelligence.
**Meta's 2023 Connect Conference: A Spotlight on Innovative AI Features**