Hey everyone! I’m Nataraj, and just like you, I’ve been fascinated with the recent progress of artificial intelligence. Realizing that I needed to stay abreast of all the developments happening, I decided to embark on a personal journey of learning, and thus 100 Days of AI was born! With this series, I will be learning about LLMs and sharing ideas, experiments, opinions, trends & learnings through my blog posts. You can follow along the journey on HackerNoon here or my personal website here. In today’s article, we’ll look at how to quickly turn an AI experiment into a shareable demo using Gradio.
If you have been reading the 100 Days of AI series, you will notice a common theme: most of the posts include some kind of experiment or demo. At the end of each post, I also list ideas that are possible with the technologies I explored. So the obvious next question is: is there a way to easily wrap these experiments into demo-able products, share them with people, and get feedback?
In this post we will explore how to easily convert an AI experiment into a demo that can be shared with others to collect feedback.
Let’s say you want to create an AI app that takes in text and returns a summary, and you want to share it with people and get feedback. If you build a full web app for this, you have to code the UI, find a hosting solution, and figure out what the entire stack is going to be. Of course, there are starter web app stacks that make this faster, but you don’t need all that complexity for this use case. Instead, we will use Gradio, which creates a web-app-like interface without you having to write any UI code.
Gradio is a Python library that lets you easily create demos of your AI apps and share them with an audience in the form of a web app.
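To get a feel for how little code this takes, here is a minimal sketch (the greet function and its inputs are just illustrative, not part of this post’s app): you wrap any Python function in gr.Interface and Gradio generates the UI for it.
import gradio as gr

# A toy function: Gradio builds a text box for the input and one for the output automatically
def greet(name):
    return f"Hello, {name}!"

gr.Interface(fn=greet, inputs="text", outputs="text").launch()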
To understand how to easily demo a gen AI experiment with Gradio, we will build a summarization web app: you give the app a long text, and it outputs a summary of that text.
Along with Gradio, we will use a model hosted on Hugging Face, so to follow along make sure you have your Hugging Face API key handy. To get started, let’s load the API key from the .env file and import the required Python modules.
import os
import io
import json
import requests
from IPython.display import Image, display, HTML
#from PIL import Image
import base64

# Load the Hugging Face API key from the local .env file
env_path = '../.env'
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv(env_path))  # read local .env file
hf_api_key = os.environ['HF_API_KEY']
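For reference, the code expects the .env file to define the API key along with the summarization endpoint and local port used later. The values below are placeholders; the endpoint URL is an assumption that follows the standard Hugging Face Inference API pattern, so substitute whatever endpoint you actually use.
# ../.env (placeholder values)
HF_API_KEY=hf_xxxxxxxxxxxxxxxx
HF_API_SUMMARY_BASE=https://api-inference.huggingface.co/models/sshleifer/distilbart-cnn-12-6
PORT1=7860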
The core logic of what we want to achieve is to send a long text to a model and get back a summary. We will define a function that takes the long text as input and calls a model hosted on Hugging Face called distilbart. Here is the function to do this.
def get_completion(inputs, parameters=None, ENDPOINT_URL=os.environ['HF_API_SUMMARY_BASE']):
    # POST the input text (and optional generation parameters) to the Hugging Face endpoint
    headers = {
        "Authorization": f"Bearer {hf_api_key}",
        "Content-Type": "application/json"
    }
    data = {"inputs": inputs}
    if parameters is not None:
        data.update({"parameters": parameters})
    response = requests.request("POST",
                                ENDPOINT_URL,
                                headers=headers,
                                data=json.dumps(data))
    return json.loads(response.content.decode("utf-8"))
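Before wiring up the UI, you can sanity-check the function directly. The sample text below is just a placeholder; as we rely on in the next step, the endpoint returns a list whose first element contains a summary_text field.
# Quick sanity check with placeholder text
sample_text = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and is the tallest structure in Paris."
)
result = get_completion(sample_text)
print(result[0]['summary_text'])  # the generated summary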
The main way to interact with Gradio is through the gr.Interface class. We need to pass it a function along with its corresponding inputs and outputs. Note that we define a new function called summarize for this purpose: it takes an input (the long text to be summarized), calls the get_completion function from step 2 to get the summary of that text, and returns the summary as the output. In the final line we ask Gradio to launch the demo of this app. By passing share=True we tell Gradio to create a public link that can be shared with others, and we also give it the server_port on which the local web host should run. In the code below we also customize the title, description, and labels of the input and output fields, which you will notice in the output.
import gradio as gr

def summarize(input):
    output = get_completion(input)
    return output[0]['summary_text']

gr.close_all()  # close any demos still running from earlier cells
demo = gr.Interface(fn=summarize,
                    inputs=[gr.Textbox(label="Text to summarize", lines=6)],
                    outputs=[gr.Textbox(label="Result", lines=3)],
                    title="Text summarization with distilbart-cnn",
                    description="Summarize any text using the `sshleifer/distilbart-cnn-12-6` model under the hood!")
demo.launch(share=True, server_port=int(os.environ['PORT1']))
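When you are done testing, you can shut the demo down and free the port using Gradio’s standard close methods:
demo.close()    # stop this demo and release the port
# or gr.close_all() to close every running Gradio demo in the session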
Once you run this, here is what the output looks like in your browser at http://127.0.0.1:<your port number>.
It is that easy to demo a project with Gradio. The beauty here is that you, as a developer, didn’t have to pick a UI stack, work through the complexity of learning it, and take it to production. Gradio does that for you.
Gradio offers many different input and output component types, with which you can create more complex and interesting demos that are just as easy to share, as in the sketch below. We will explore more demos with Gradio and other tools in the coming posts.
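As a small illustration of that flexibility, here is a hedged sketch that adds a slider to control the summary length. It assumes the Hugging Face summarization endpoint accepts a max_length parameter, which is passed through get_completion’s parameters argument; treat the parameter name as an assumption and adjust it to whatever your endpoint supports.
# Sketch: same pattern, with a slider controlling an assumed max_length parameter
def summarize_with_length(input, max_len):
    output = get_completion(input, parameters={"max_length": int(max_len)})
    return output[0]['summary_text']

demo2 = gr.Interface(fn=summarize_with_length,
                     inputs=[gr.Textbox(label="Text to summarize", lines=6),
                             gr.Slider(minimum=20, maximum=200, value=60, label="Max summary length")],
                     outputs=[gr.Textbox(label="Result", lines=3)],
                     title="Summarization with an adjustable length")
demo2.launch(share=True)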
That’s it for Day 12 of 100 Days of AI.
I write a newsletter called Above Average where I talk about the second order insights behind everything that is happening in big tech. If you are in tech and don’t want to be average, subscribe to it.
Follow me on Twitter, LinkedIn, or HackerNoon for the latest updates on 100 Days of AI. If you are in tech, you might be interested in joining my community of tech professionals here.