When transformative technologies emerge around the same time, they’re far more likely to join forces and create a more powerful tool for consumers and enterprises. In the case of smart glasses and the all-encompassing rise of generative AI, that convergence could pave the way for the next generation of powerful wearables.
While the global smart glasses market, which was valued at $218.86 million in 2022, is set to grow at an impressive CAGR of 9.5% through 2028, the prospective growth facing the generative AI industry is even more astounding.
With a reported value of $8.2 billion in 2021, the generative AI market is projected to reach a value of $126.5 billion by 2031, representing a CAGR of some 32%.
We’re already seeing the formation of a symbiotic relationship between smart glasses and the world of generative AI, and Snap has indicated that generative AI will play a key role in the future of the firm’s augmented reality glasses.
“We saw a lot of success integrating Snap ML tools into Lens Studio, and it’s really enabled creators to build some incredible things,” said Snap CEO Evan Spiegel.
“We now have 300,000 creators who built more than 3 million lenses in Lens Studio. So, the democratization of these tools, I think, will also be very powerful.”
Early use cases of these technologies combining are already emerging. Let’s take a deeper look at the future of generative AI in smart glasses to better understand what the eyewear of tomorrow will be capable of.
We’re already seeing firms like Envision, a major developer of smart glasses that help visually impaired users read text and identify objects, incorporate generative AI programs like ChatGPT to improve the quality of their devices.
ChatGPT is a natural language processing tool that uses generative AI to produce high-quality answers to user queries and prompts. Because it can interpret and answer human questions articulately and rapidly, it holds great utility for smart glasses, which need some form of intelligent hands-free interaction to reach their full potential.
For Envision, which uses Google Glass eyewear to capture text in documents or on product packaging and read it aloud via text-to-speech for visually impaired users, ChatGPT can bring greater functionality to its service.
Using OpenAI’s GPT-4, Envision has created ‘Ask Envision’, a feature that lets users capture text through the glasses and then verbally ask questions about the document or product they’re looking at.
This has the potential to remove the need to listen to entire lengthy text-to-speech read-outs, empowering users to get to the information they’re looking for more quickly.
Questions like ‘Does this product contain nuts?’ or ‘How much is the total of this bill?’ can pave the way for far more convenience among visually impaired users when out and about.
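To make that workflow concrete, here’s a minimal sketch of how question-answering over captured text can be wired up, assuming the glasses’ OCR step has already produced a plain-text string and an OpenAI API key is available in the environment. It’s an illustrative approximation, not Envision’s actual implementation.

```python
# Hypothetical sketch: answering a spoken question about text captured by smart glasses.
# Assumes the glasses' OCR pipeline has already produced `captured_text` and that
# OPENAI_API_KEY is set. Not Envision's actual implementation.
from openai import OpenAI

client = OpenAI()

def ask_about_capture(captured_text: str, question: str) -> str:
    """Send the OCR'd text plus the wearer's question to a GPT model and return a short answer."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You answer questions about text captured by a wearer's smart glasses. "
                           "Keep answers brief, since they will be read aloud.",
            },
            {
                "role": "user",
                "content": f"Captured text:\n{captured_text}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

# Example: the wearer photographs a product label and asks about allergens.
print(ask_about_capture("Ingredients: wheat flour, peanuts, sugar...", "Does this product contain nuts?"))
```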
Stanford University students working with Brilliant Labs have also created their own take on generative AI built into smart glasses. rizzGPT, a tool that combines augmented reality, artificial intelligence, and Whisper, the speech recognition model from ChatGPT parent company OpenAI, was developed as a conversational aid for wearers.
“Say goodbye to awkward dates and job interviews,” declared Bryan Chiang, unveiling the device on Twitter.
“We made rizzGPT -- real-time Charisma as a Service (CaaS). It listens to your conversation and tells you exactly what to say next.”
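Under the hood, a rizzGPT-style aid boils down to two steps: speech recognition on the latest snippet of conversation, followed by a generative model proposing a reply. The sketch below is a hypothetical approximation of that loop using OpenAI’s Whisper and chat APIs, not Chiang’s actual code, and assumes the glasses have saved a short audio clip to disk.

```python
# Hypothetical sketch of a rizzGPT-style conversational aid: transcribe what was just said,
# then ask a language model to suggest a reply. Not the actual rizzGPT code.
from openai import OpenAI

client = OpenAI()

def suggest_reply(audio_path: str) -> str:
    # Step 1: speech-to-text with Whisper on the latest snippet of conversation.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

    # Step 2: ask a GPT model what the wearer could say next.
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Suggest one short, natural reply the wearer could say next."},
            {"role": "user", "content": f"The other person just said: {transcript.text}"},
        ],
    )
    return completion.choices[0].message.content

print(suggest_reply("latest_snippet.wav"))  # e.g. a clip recorded by the glasses' microphone
```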
Uniting smart glasses with generative AI has far-reaching potential for both industries, and could form a valuable foundation for building new technologies that improve our living standards and everyday convenience.
As the above combinations of ChatGPT and smart glasses show, automatic speech recognition (ASR) can form a core component in the future of the technologies.
Crucially, both ASR and optical character recognition (OCR) can be combined with translation engines like DeepL to generate instantaneous language translations from anywhere in the world.
When integrated into augmented reality smart glasses, it’s possible to see live translations appear as subtitles in your field of vision while somebody’s talking to you in another language.
This means that when you’re in a foreign country and need to read a restaurant menu or ask for directions at an information kiosk, your eyewear will be capable of translating on the spot, without you struggling to work out how to respond.
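As a rough sketch of such a pipeline, the example below transcribes a snippet of foreign-language speech with OpenAI’s Whisper API and translates it with DeepL’s Python client; the file path, environment variables, and the AR display step are all assumptions for illustration.

```python
# Hypothetical sketch of a live-subtitle pipeline: transcribe foreign-language speech with
# Whisper, then translate it with DeepL for display in the wearer's field of vision.
# Assumes OPENAI_API_KEY and DEEPL_AUTH_KEY are set; rendering the subtitle is left out.
import os

import deepl
from openai import OpenAI

openai_client = OpenAI()
translator = deepl.Translator(os.environ["DEEPL_AUTH_KEY"])

def subtitle_snippet(audio_path: str, target_lang: str = "EN-US") -> str:
    # Speech-to-text on a short clip captured by the glasses' microphone.
    with open(audio_path, "rb") as audio_file:
        transcript = openai_client.audio.transcriptions.create(model="whisper-1", file=audio_file)

    # Translate the transcript into the wearer's language.
    result = translator.translate_text(transcript.text, target_lang=target_lang)
    return result.text

# In a real device this string would be rendered as an AR subtitle overlay.
print(subtitle_snippet("overheard_speech.wav"))
```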
As an added contextual aid, AI image-generation platforms like Stable Diffusion could also augment communication with generated visuals that provide further context for more complex areas of language.
It’s with this in mind that smart glasses may evolve into a more omnipresent iteration of smart speaker assistants in the near future.
We’re already seeing AI-powered hardware like Amazon’s Echo Frames glasses operate as an extension of the Echo platform featuring Alexa, but packing such supportive AI into the eyewear of tomorrow is likely to make for a more functional experience for users.
The long-term viability of generative AI’s symbiotic relationship with smart glasses will depend on the emergence of more exciting and engaging use cases to grow the market.
Given high costs and other barriers to entry, including privacy concerns and a lack of content, the industry will depend on these success stories for growth.
Despite this, the future appears bright for generative AI and smart glasses alike.
With so much convenience made possible by their shared ability to interpret language and situational context, the two technologies may yet merge into a dependable everyday device over the coming years.