Recent advancements in Large Language Models (LLMs) have led to the emergence of AI agents. In plain terms, an agent is a mechanism that equips an LLM with a set of external tools (functions), which the model can invoke (once or several times in a row) based on the user's input.
Originally, integrating a function into an LLM required carefully constructing the prompt to describe the function, and then parsing the output to detect whether the model intended to call it.
The latest release of GPT introduced function-calling capabilities that simplify this integration. First of all, it defines a clear and fixed structure for describing the functions. Moreover, the models themselves have been further fine-tuned both to detect when a function should be called (based on the input) and to respond with JSON that adheres to the function's signature. Despite these improvements, the integration still involves considerable overhead: you have to describe every function, detect function-call responses, execute the calls, and feed the results back to the model.
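To make the fixed structure concrete, here is a minimal example of such a function description (the schema that is passed to the model); the field values are illustrative:
# The fixed structure used by OpenAI function calling: a name, a description,
# and a JSON Schema describing the parameters.
get_temperature_schema = {
    "name": "get_temperature",
    "description": "Get the current temperature in a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "The city name, e.g. Linz"},
        },
        "required": ["city"],
    },
}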
In this article, we will explore how to build a simple conversational AI agent using the Agent Dingo library.
The key feature of Dingo is that it allows external functions to be integrated into ChatGPT by adding just a single line of code. In addition, it can serve the agent on an OpenAI-compatible web server.
To give a quick glimpse into the core functionality of Dingo, we will build a simple agent that is able to provide information about the current weather by querying the OpenWeather API.
As the first step, the library has to be installed.
pip install agent-dingo
Next, we need to set the OpenAI API key.
export OPENAI_API_KEY=<YOUR_KEY>
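The weather example below also queries the OpenWeather API, so an OpenWeather API key is needed as well. Here we assume it is exposed via an environment variable (the variable name is our choice, not something Dingo requires):
export OPENWEATHERMAP_API_KEY=<YOUR_KEY>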
Then, our Dingo agent can be instantiated as follows:
from agent_dingo import AgentDingo
agent = AgentDingo()
Let’s say that we have a get_temperature function that retrieves the current weather via the OpenWeather API. In order to integrate it into ChatGPT, we need to add a single line of code: the @agent.function decorator.
import os
import requests

# Assumption: the OpenWeather key is read from the environment variable set earlier
openweathermap_api_key = os.environ["OPENWEATHERMAP_API_KEY"]

@agent.function
def get_temperature(city: str) -> str:
    # Query the OpenWeather current-weather endpoint for the given city
    base_url = "https://api.openweathermap.org/data/2.5/weather"
    params = {"q": city, "appid": openweathermap_api_key, "units": "metric"}
    response = requests.get(base_url, params=params)
    data = response.json()
    print("get_temperature function output:", data["main"])
    return str(data)
To get the output, we need to call the chat method:
agent.chat("What is the current weather in Linz?")
As you can see, the process of function integration is very intuitive and simple. Behind the scenes, we are asking ChatGPT for the current weather in Linz. The model has access to the get_temperature function that was integrated using the agent.function decorator. The model decides to call get_temperature and passes the city "Linz" as an argument. The response is parsed, and we receive the following output:
The current temperature in Linz is 30°C.
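For comparison, here is a rough sketch of the round-trip that Dingo automates, written against the legacy openai (pre-1.0) Python package; it reuses the schema shown earlier, and the exact steps Dingo performs internally may differ:
import json
import openai

messages = [{"role": "user", "content": "What is the current weather in Linz?"}]

# Step 1: the model decides whether (and how) to call the function
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
    functions=[get_temperature_schema],
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    # Step 2: execute the function with the model-provided arguments
    args = json.loads(message["function_call"]["arguments"])
    result = get_temperature(**args)
    # Step 3: feed the result back so the model can produce the final answer
    messages.append(message)
    messages.append({"role": "function", "name": "get_temperature", "content": result})
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(final["choices"][0]["message"]["content"])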
As you might have noticed, we did not have to provide any description of the function. This is achieved by automatic docstring generation during the registration process (when the agent.function decorator is called).
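Presumably, a hand-written docstring would be picked up at registration time instead of being generated; a sketch of such a variant (our assumption, not verified against Dingo's documentation):
@agent.function
def get_temperature(city: str) -> str:
    """Return the current weather for the given city (via OpenWeather)."""
    ...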
In addition, it is possible to serve the agent on an OpenAI-compatible web server:
from agent_dingo.wrapper import DingoWrapper

# The client example below assumes the server listens on localhost:8080
DingoWrapper(agent).serve()
The server can be accessed using the openai Python package:
import openai

# Point the client at the local Dingo server instead of the OpenAI API
openai.api_base = "http://localhost:8080"

r = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the current weather in Linz?"}],
    temperature=0.0,
)
print(r["choices"][0]["message"]["content"])
In this article, we explored how Agent Dingo streamlines the integration of Python functions into ChatGPT in order to build a conversational AI agent. With Dingo, you no longer have to describe the functions manually or deal with the intermediate function-calling round-trips. The framework is crafted to automate these tasks, allowing you to focus on writing the core functionality of your app.