
What's New in AI, Part 1: Exploring Generative AI With Dan Jeffries

by George Anadiotis, June 12th, 2023

Too Long; Didn't Read

The backstory to Stability AI and Stable Diffusion's rise to stardom, plus practical advice on how to use and fine-tune Stable Diffusion.

What’s new in AI? That may sound like a moot question for a domain that has been moving extremely fast and making the news on a daily basis for the last few months. That, however, is precisely what makes it relevant.


Framed differently, that question may as well read: when drowning in information overload, how do you sort the wheat from the chaff? This is what we asked ourselves when we sat down with O’Reilly in late 2022 to brainstorm on a new series of events that would highlight the most relevant new developments in AI.


In “What’s New in AI” we connect with leading minds in the field on topics such as generative AI, large language models, responsible AI, AI hardware, and other developments as they appear. With the first event in the series behind us and the second one quickly approaching, it’s a good time to recap and preview, respectively.

Advanced machine learning is too low level, too complicated, too expensive

Our first “What’s New in AI” topic was generative AI. Since these events are planned months in advance, part of the challenge is finding topics that will still be relevant at the time the event takes place. In a domain moving as fast as AI is, that’s not a given. Generative AI seemed like a safe bet in that respect, and Dan Jeffries seemed like an ideal guest as well.


Dan Jeffries is the Managing Director of the AI Infrastructure Alliance, and we had previously had a very interesting conversation on AI. Jeffries is also the former CIO at Stability AI, the company behind Stable Diffusion. The key word here is “former”, as Jeffries resigned from Stability AI a few days before our scheduled conversation.


We did not see this coming, but fortunately it did not throw the event off. We connected and covered a broad range of topics in a massively attended event. Jeffries describes himself as an author, futurist, engineer, and systems architect. His position on machine learning and AI is extremely positive.


As Jeffries explained, he had been keeping an eye on Stable Diffusion from its early days. Even though Stable Diffusion was initially rather unimpressive, Jeffries related that he realized its potential immediately. He decided to get involved as soon as he saw what he described as “big leaps in capabilities”.


“I realized that an atomic bomb was essentially about to go off, that it was going to be a massive shift in the industry, and I wanted to be a part of it. And I think it did happen. When Stable Diffusion came out, it was just a massive explosion. And after that, it catalyzed AI”, Jeffries said.


Jeffries went on to claim that models like Stable Diffusion were part of the reason why OpenAI decided to release ChatGPT when they did, as they did not want to be left behind. At the same time, Jeffries acknowledged that many AI companies are missing the mark. The reason, he said, is a fundamental misunderstanding:


“There was a bit of a mistake in our thinking. We thought every company was going to have hundreds of data scientists doing advanced machine learning, and it just isn’t going to work out that way. The truth is that it’s too low level, too complicated, too expensive. Both in terms of people and compute most people are never going to function at that level”.


For Jeffries, the way AI is more likely to pan out is for different organizations to each contribute a piece of an open source stack that other organizations will be able to build on – something like the equivalent of a LAMP stack for AI.

AI-powered products, threats and opportunities

Admittedly, however, putting stacks together – or even using them – isn’t everyone’s cup of tea either. This is why open source does not necessarily mean DIY, and there is lots of room for both creators and third parties to offer value-add services. This pattern has played out before in open source, from WordPress to databases and operating systems, and may well play out in AI too.


Jeffries mentioned names such as Google or Microsoft to exemplify how integrating Large Language Models (LLMs) in products and services is the way to go. There is, however, an inherent danger here. As AI is reliant on data, using prepackaged Big Tech offerings may not yield the best outcomes in the mid-to-long term.


That would mean more data and more power for the service providers and less control for users. Jeffries is aware of this, which is why he recommends open source stacks for AI and personal knowledge graphs for data sovereignty. As he summarized his stance: open source wins out in the end, but you always see proprietary services in the beginning.


Jeffries is also a proponent of a “laissez faire” approach to AI. His argument is that it’s not really possible to foresee issues, and the only way to deal with them is to deploy AI systems, see what happens and then apply remedies. As for bad actors, they will be caught and punished.


Another talking point was the concern that generative AI is driving down the value of content, a concern Jeffries broadly dismisses. He believes that in the same way that WordPress made publishing web sites easier for everyone, generative AI will make publishing content easier for everyone, and that’s a good thing.

Media literacy and genuine education are the real solution to the “enshittification” of content. But there’s not much of an incentive for that.


Those are points on which our views diverge significantly, and I find the two issues actually overlap when it comes to AI-generated content. Lowering the bar to content production is already leading to the “enshittification” of content: it misrepresents people’s ability as content producers while overloading content consumers and making them susceptible – wittingly or unwittingly – to fabricated content.


I do realize that the best way to deal with that issue would be better media education and the development of critical ability. But I’m also painfully aware of the deficit we currently have in those areas, the time it will take to close it, and the lack of incentives to do so.


I also don’t think that having a piece of software that assists in getting content published is comparable to having a piece of software that generates the content you are publishing. The former is like the printing press, helping people get their message out. The latter is like a ghost writer, earning people credit they don’t fully deserve. And that’s not even touching on copyright issues.

Business models, Stability AI’s woes, and practical advice

Where Jeffries and I find ourselves in agreement is on the importance of business models. Jeffries spoke about the importance of having a well-defined business model from the get-go. “A lot of companies out there are looking at things like – well we’re just going to do research or we’re just going to train up the model, and that’s not an effective way to build a product”, Jeffries said.


Jeffries did not aim this remark specifically at Stability AI. But he did say that although he appreciates the work Stability AI does, the reason for his departure was a divergence on the way forward. Looking at Stability AI’s reported woes, despite the sensationalist tone in which they may be presented, one can speculate about how the Stability AI story will play out and whether Jeffries will prove right.


Either way, none of that was the focal point of the conversation. While we did cover general interest topics, about half of the event was dedicated to the creative use of Stable Diffusion. It wasn’t just theory either. Jeffries provided examples as well as practical advice ranging from how to use Stable Diffusion – including user interfaces, parameters and fine-tuning – to how to train generative AI models.
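For readers who want a feel for what that looks like in practice, here is a minimal sketch of generating an image with Stable Diffusion through the open source Hugging Face diffusers library. The checkpoint name, prompt, and parameter values are illustrative assumptions on my part, not the specific setup Jeffries demonstrated.

```python
# A minimal sketch of text-to-image generation with Stable Diffusion via the
# Hugging Face diffusers library. Checkpoint, prompt and parameter values are
# illustrative assumptions, not the exact setup shown in the event.
import torch
from diffusers import StableDiffusionPipeline

# Load an assumed Stable Diffusion checkpoint in half precision to save GPU memory
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

image = pipe(
    prompt="an astronaut riding a horse, oil painting",
    num_inference_steps=50,  # more denoising steps: slower, usually more detail
    guidance_scale=7.5,      # how strongly the image should follow the prompt
).images[0]

image.save("astronaut.png")
```

Parameters like the number of denoising steps and the guidance scale, along with fine-tuning on your own images, are the kind of practical territory the event covered.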


It was a perfect introduction to the “What’s New in AI” series. The recording is available on O’Reilly, so feel free to check it out for more insights.

Next up: Large Language Model Evaluation, Robustness, and Interpretability

Coming up next in the “What’s New in AI” series is an event on Large Language Model (LLM) Evaluation, Robustness, and Interpretability, hosting Nazneen Rajani. Rajani is a research lead at Hugging Face, leading the robust machine learning research direction. Previously, she led a team of researchers focused on building robust natural language generation systems based on LLMs at Salesforce Research.


LLMs have revolutionized AI by powering cutting-edge applications for tasks such as text processing, machine translation, and text generation. We’ll cover the ever-changing LLM landscape, with an emphasis on open source, and introduce techniques such as red teaming, reinforcement learning from human feedback (RLHF), and instruction fine-tuning.
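As a taste of one of those techniques, here is a minimal, hypothetical sketch of instruction fine-tuning a small causal language model with the Hugging Face transformers library. The model, prompt template, and toy data are assumptions for illustration, not material from the upcoming event.

```python
# A minimal, hypothetical sketch of instruction fine-tuning a small causal
# language model with Hugging Face transformers. Model, prompt template and
# toy data are assumptions for illustration only.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # assumed small model so the sketch runs on modest hardware
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy instruction/response pairs standing in for a real instruction dataset
examples = [
    {"instruction": "Summarize: The cat sat on the mat and purred.",
     "response": "A contented cat sat on a mat."},
]

def encode(example):
    # Wrap each pair in a simple prompt template and train on the full sequence
    text = (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['response']}")
    enc = tokenizer(text, truncation=True, max_length=128,
                    padding="max_length", return_tensors="pt")
    labels = enc["input_ids"].clone()
    labels[enc["attention_mask"] == 0] = -100  # ignore padding in the loss
    enc["labels"] = labels
    return {k: v.squeeze(0) for k, v in enc.items()}

train_dataset = [encode(e) for e in examples]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-out",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=train_dataset,
)
trainer.train()
```

In practice this is done on far larger models and datasets; RLHF then adds a reward model and a reinforcement learning step on top of a supervised fine-tuned model like this one.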


Also published here.