So, you've been playing around with large language models and started integrating generative AI into your applications? That's great! But let's be real. LLMs don't always behave the way we want them to. They're like mischievous toddlers with a mind of their own!
You quickly realize that simple prompt chains just aren't enough. Sometimes we need something more. Sometimes we need multi-agent workflows! That's where AutoGen comes in.
Let's take an example. Imagine you're building a note-taking app (clearly, the world doesn't have enough of those. 😝). But hey, we want to do something special. We want to take the simple, raw note a user gives us and transform it into a fully restructured document, complete with a summary, an optional title, and an auto-generated list of tasks. And we want to do it all without breaking a sweat. Well, the sweating is for your AI agents, at least.
Okay, now, I know what you're thinking: "Isn't that beginner-level stuff?" To which I say: you're right. Mean... but right! But don't be fooled by the simplicity of the workflow. The skills you'll learn here, like orchestrating AI agents, implementing workflow control, and managing conversation history, will help you take your AI game to the next level.
So buckle up, because we're going to learn how to build AI workflows with AutoGen!
Before we start, note that you can find a link to all the source code on GitHub.
Let's start with the first use case: "Generate a summary for the note, followed by a conditional title." To be fair, we don't really need agents here. But hey, we have to start somewhere, right?
Agentic frameworks like AutoGen always require us to set up the model parameters. We're talking about the model and fallback model to use, the temperature, and even settings like timeout and caching. In AutoGen's case, that setup looks something like this:
```python
import os

# build the gpt_configuration object
base_llm_config = {
    "config_list": [
        {
            "model": "Llama-3-8B-Instruct",
            "api_key": os.getenv("OPENAI_API_KEY"),
            "base_url": os.getenv("OPENAI_API_URL"),
        }
    ],
    "temperature": 0.0,
    "cache_seed": None,
    "timeout": 600,
}
```
As you can see, I'm a big open-source AI fanboy and swear by Llama 3. You can point AutoGen at any OpenAI-compatible inference server by simply configuring the `api_key` and `base_url` values. So feel free to use Groq, Together.ai, or even vLLM to serve your model locally. I use Inferix.
It really is that easy!
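For instance, if you were serving the model locally with vLLM, the configuration might look something like this (the port, path, and model name here are assumptions; match them to your own server):

```python
import os

# Hypothetical example: point AutoGen at a local vLLM server instead of OpenAI.
# vLLM exposes an OpenAI-compatible API, typically under a /v1 path.
base_llm_config = {
    "config_list": [
        {
            "model": "Llama-3-8B-Instruct",
            "api_key": os.getenv("OPENAI_API_KEY", "not-needed-for-local"),
            "base_url": "http://localhost:8000/v1",  # assumed local vLLM endpoint
        }
    ],
    "temperature": 0.0,
    "cache_seed": None,  # disable response caching
    "timeout": 600,
}
```

Everything else in the workflow stays the same; only the endpoint changes.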
I'm curious! Would you be interested in a similar guide on hosting open-source AI? Let me know in the comments.
Initializing conversational agents in AutoGen is pretty straightforward; simply provide the base LLM configuration along with a system message, and you're good to go.
```python
import autogen


def get_note_summarizer(base_llm_config: dict):
    # A system message to define the role and job of our agent
    system_message = """You are a helpful AI assistant.
The user will provide you a note. Generate a summary describing what the note is about.
The summary must follow the provided "RULES".

"RULES":
- The summary should be not more than 3 short sentences.
- Don't use bullet points.
- The summary should be short and concise.
- Identify and retain any "catchy" or memorable phrases from the original text.
- Identify and correct all grammatical errors.
- Output the summary and nothing else."""

    # Create and return our assistant agent
    return autogen.AssistantAgent(
        name="Note_Summarizer",  # Let's give our agent a nice name
        llm_config=base_llm_config,  # This is where we pass the llm configuration
        system_message=system_message,
    )


def get_title_generator(base_llm_config: dict):
    # A system message to define the role and job of our agent
    system_message = """You are a helpful AI assistant.
The user will provide you a note along with a summary.
Generate a title based on the user's input.
The title must be witty and easy to read.
The title should accurately present what the note is about.
The title must strictly be less than 10 words.
Make sure you keep the title short.
Make sure you print the title and nothing else."""

    # Create and return our assistant agent
    return autogen.AssistantAgent(
        name="Title_Generator",
        llm_config=base_llm_config,
        system_message=system_message,
    )
```
The most important part of creating agents is the `system_message`. Take a moment to look at the `system_message` I used to configure my agents.

It's important to remember that AI agents in AutoGen work by participating in a conversation. The way they interpret and carry the conversation forward depends entirely on the `system_message` they're configured with. This is one of the places where you'll spend time getting things right.
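Keep in mind that the "RULES" in a system message are only soft constraints; the model can still violate them. One approach that helps while iterating (my own sketch, not an AutoGen feature) is to check the agent's output programmatically against the rules:

```python
import re


def summary_follows_rules(summary: str, max_sentences: int = 3) -> bool:
    """Check a summary against the Note_Summarizer's 'RULES' (illustrative helper)."""
    # Rule: don't use bullet points
    if re.search(r"^\s*[-*\u2022]", summary, flags=re.MULTILINE):
        return False
    # Rule: not more than 3 short sentences
    sentences = [s for s in re.split(r"[.!?]+", summary) if s.strip()]
    return len(sentences) <= max_sentences


print(summary_follows_rules("A short note. About two things."))  # True
print(summary_follows_rules("- a bullet\n- another bullet"))     # False
```

If a check fails, you can re-prompt the agent or tweak the system message until the model reliably complies.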
We need just one more agent: an agent to act as a proxy for us humans. An agent that can kick off the conversation with the "note" as its initial prompt.
```python
def get_user():
    # A system message to define the role and job of our agent
    system_message = "A human admin. Supplies the initial prompt and nothing else."

    # Create and return our user agent
    return autogen.UserProxyAgent(
        name="Admin",
        system_message=system_message,
        human_input_mode="NEVER",  # We don't want interrupts for human-in-loop scenarios
        code_execution_config=False,  # We definitely don't want AI executing code
        default_auto_reply=None,
    )
```
Nothing fancy going on here. Just note that I set the `default_auto_reply` parameter to `None`. That's important. Setting it to `None` makes sure the conversation ends when the user agent receives a message.
Oops, I completely forgot to create those agents. Let's quickly do that.
```python
# Create our agents
user = get_user()
note_summarizer = get_note_summarizer(base_llm_config)
title_generator = get_title_generator(base_llm_config)
```
Coordinating the agents using `GroupChat`

The final piece of the puzzle is getting our agents to coordinate. We need to determine the order of their participation and decide which agents get to respond when.
Okay, that's more than one piece. But you get the point! 🙈
One possible solution would be to let the AI figure out the order in which the agents participate. That's not a bad idea. In fact, it's my go-to choice when dealing with complex problems where the nature of the workflow is dynamic.
This approach has its downsides, though. Reality strikes again! The agent responsible for making these decisions often requires a large model, which leads to higher latency and cost. On top of that, there's a risk it makes the wrong decisions.
For deterministic workflows, where we know the order of the steps ahead of time, I like to grab the reins and steer the ship myself. Luckily, AutoGen supports this use case with a handy feature called `GroupChat`.
```python
from autogen import GroupChatManager
from autogen.agentchat.groupchat import GroupChat
from autogen.agentchat.agent import Agent


def get_group_chat(agents, generate_title: bool = False):
    # Define the function which decides the agent selection order
    def speaker_selection_method(last_speaker: Agent, group_chat: GroupChat):
        # The admin will always forward the note to the summarizer
        if last_speaker.name == "Admin":
            return group_chat.agent_by_name("Note_Summarizer")

        # Forward the note to the title generator if the user wants a title
        if last_speaker.name == "Note_Summarizer" and generate_title:
            return group_chat.agent_by_name("Title_Generator")

        # Handle the default case - exit
        return None

    return GroupChat(
        agents=agents,
        messages=[],
        max_round=3,  # There will only be 3 turns in this group chat. The group chat will exit automatically post that.
        speaker_selection_method=speaker_selection_method,
    )
```
Think of a `GroupChat` as a WhatsApp group where all the agents can chat and collaborate. This setup lets agents build on each other's work. The `GroupChat` class, together with a companion class called the `GroupChatManager`, acts like the group admins, keeping track of all the messages each agent sends to make sure everyone stays up to date on the conversation history.
In the code snippet above, we created a `GroupChat` with a custom `speaker_selection_method`. The `speaker_selection_method` lets us specify our own custom workflow. Here's a visual representation of it.
Since the `speaker_selection_method` is essentially a Python function, we can do whatever we want with it! It helps us create some very powerful workflows. For example, we could:
Imagine the possibilities! 😜
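For instance, instead of a fixed order, we could route based on the content of the last message. A minimal sketch (the routing rule here is made up purely for illustration):

```python
def speaker_selection_method(last_speaker, group_chat):
    """Route based on message content instead of a fixed order (illustrative sketch)."""
    last_message = group_chat.messages[-1]["content"] if group_chat.messages else ""

    if last_speaker.name == "Admin":
        # Only involve the Title_Generator when the note explicitly asks for a title
        if "title" in last_message.lower():
            return group_chat.agent_by_name("Title_Generator")
        return group_chat.agent_by_name("Note_Summarizer")

    # Handle the default case - end the conversation
    return None
```

Because it's plain Python, you can branch on message content, call out to other services, or keep state between turns.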
The final step is to create an instance of the `GroupChat`, wrap it in a `GroupChatManager`, and start the conversation.
```python
# Create our group chat
groupchat = get_group_chat([user, note_summarizer, title_generator], generate_title=True)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=base_llm_config)

# Start the chat
user.initiate_chat(
    manager,
    clear_history=True,
    message=note,
)
```
Note: the user chats with the `GroupChatManager`, not the individual agents. It has no idea which agents will join the conversation to provide the final answer. Clever, right?
The output will look something like this:
```
Admin (to chat_manager):

Note:
Convo with editor:
- discuss titles and thumbnails
- discuss video editing tips tracker
  - Zeeshan presents the tracker
  - what trick helps with what
  - he decidedls if we can experiment with something new
- make sure all all videos since how config management works in k8s are backed u
- make list of YouTube thumbnail templates
- make list of YouTube idea generation limits

--------------------------------------------------------------------------------

Next speaker: Note_Summarizer

Note_Summarizer (to chat_manager):

The note is about a conversation with an editor regarding video production. They
discussed titles and thumbnails, as well as a video editing tips tracker
presented by Zeeshan, which highlights tricks for specific tasks. Additionally,
they ensured that all videos on Kubernetes configuration management are backed
up and created lists of YouTube thumbnail templates and idea generation limits.

--------------------------------------------------------------------------------

Next speaker: Title_Generator

Title_Generator (to chat_manager):

"Video Production Chat: Titles, Thumbnails, and Editing Tips"

--------------------------------------------------------------------------------
```
Next, we'll dive into the final use case: take a given "note", restructure it for better clarity, and then create a task list for the user.
Here's how we'll go about it:
We'll start by identifying a list of the topics covered in the note. This list is the driving force behind the entire process. It establishes the sections of the reformatted note and determines the level of detail for our generated tasks.
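To make the idea concrete: the topic list effectively becomes the section skeleton of the rewritten note. A tiny, purely illustrative sketch (in the actual workflow the Paraphrazer LLM does this; this helper is just mine):

```python
def topics_to_skeleton(topics: list[str]) -> str:
    """Turn the Topic_Analyzer's topic list into the section headers
    the Paraphrazer is asked to produce (## headers, per its RULES)."""
    return "\n\n".join(f"## {topic}" for topic in topics)


print(topics_to_skeleton(["Titles", "Thumbnails", "Video editing tips"]))
```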
There's just one small problem. The `Paraphrazer` and `Task_Creator` agents don't really care about each other's output. They only care about the output of the `Topic_Analyzer`.
So we need a way to keep these agents' responses from polluting the conversation history, or it'll be complete chaos. We've already taken control of the workflow; now it's time to become the boss of the conversation history too! 😎
First things first, we need to set up our agents. I won't bore you with the details, so here's the code:
```python
def get_topic_analyzer(base_llm_config: dict):
    # A system message to define the role and job of our agent
    system_message = """You are a helpful AI assistant.
The user will provide you a note. Generate a list of topics discussed in that note.
The output must obey the following "RULES":

"RULES":
- Output should only contain the important topics from the note.
- There must be at least one topic in output.
- Don't reuse the same text from user's note.
- Don't have more than 10 topics in output."""

    # Create and return our assistant agent
    return autogen.AssistantAgent(
        name="Topic_Analyzer",
        llm_config=base_llm_config,
        system_message=system_message,
    )


def get_paraphrazer(base_llm_config: dict):
    # A system message to define the role and job of our agent
    system_message = """You are a helpful AI content editor.
The user will provide you a note along with a summary.
Rewrite that note and make sure you cover everything in the note.
Do not include the title.
The output must obey the following "RULES":

"RULES":
- Output must be in markdown.
- Make sure you use each point provided in summary as headers.
- Each header must start with `##`.
- Headers are not bullet points.
- Each header can optionally have a list of bullet points. Don't put bullet points if the header has no content.
- Strictly use "-" to start bullet points.
- Optionally make an additional header named "Additional Info" to cover points not included in the summary. Use the "Additional Info" header for unclassified points.
- Identify and correct spelling & grammatical mistakes."""

    # Create and return our assistant agent
    return autogen.AssistantAgent(
        name="Paraphrazer",
        llm_config=base_llm_config,
        system_message=system_message,
    )


def get_tasks_creator(base_llm_config: dict):
    # A system message to define the role and job of our agent
    system_message = """You are a helpful AI personal assistant.
The user will provide you a note along with a summary.
Identify each task the user has to do as next steps.
Make sure to cover all the action items mentioned in the note.
The output must obey the following "RULES":

"RULES":
- Output must be a YAML object with a field named tasks.
- Make sure each task object contains fields title and description.
- Extract the title based on the tasks the user has to do as next steps.
- Description will be in markdown format. Feel free to include additional formatting and numbered lists.
- Strictly use "-" or "dashes" to start bullet points in the description field.
- Output empty tasks array if no tasks were found.
- Identify and correct spelling & grammatical mistakes.
- Identify and fix any errors in the YAML object.
- Output should strictly be in YAML with no ``` or any additional text."""

    # Create and return our assistant agent
    return autogen.AssistantAgent(
        name="Task_Creator",
        llm_config=base_llm_config,
        system_message=system_message,
    )
```
Extending `GroupChat`

Unfortunately, AutoGen doesn't let us control the conversation history directly. So we'll have to go ahead and extend the `GroupChat` class with our own custom implementation.
```python
from typing import Dict

from autogen import AssistantAgent
from autogen.agentchat.agent import Agent
from autogen.agentchat.groupchat import GroupChat


class CustomGroupChat(GroupChat):
    def __init__(self, agents):
        super().__init__(agents, messages=[], max_round=4)

    # This function gets invoked whenever we want to append a message to the conversation history.
    def append(self, message: Dict, speaker: Agent):
        # We want to skip messages from the Paraphrazer and the Task_Creator
        if speaker.name != "Paraphrazer" and speaker.name != "Task_Creator":
            super().append(message, speaker)

    # The `speaker_selection_method` now becomes a function we override from the base class
    def select_speaker(self, last_speaker: Agent, selector: AssistantAgent):
        if last_speaker.name == "Admin":
            return self.agent_by_name("Topic_Analyzer")
        if last_speaker.name == "Topic_Analyzer":
            return self.agent_by_name("Paraphrazer")
        if last_speaker.name == "Paraphrazer":
            return self.agent_by_name("Task_Creator")

        # Return the user agent by default
        return self.agent_by_name("Admin")
```
We override two functions from the base `GroupChat` class:

- `append` - controls which messages get added to the conversation history.
- `select_speaker` - another way of specifying the `speaker_selection_method`.
But wait, as I dug deeper into AutoGen's code, I realized the `GroupChatManager` also makes every agent maintain its own copy of the conversation history. Don't ask me why. I really don't know!
So let's extend the `GroupChatManager` as well to set that right:
```python
from typing import Dict, List, Optional, Union


class CustomGroupChatManager(GroupChatManager):
    def __init__(self, groupchat, llm_config):
        super().__init__(groupchat=groupchat, llm_config=llm_config)
        # Don't forget to register your reply functions
        self.register_reply(Agent, CustomGroupChatManager.run_chat, config=groupchat, reset_config=GroupChat.reset)

    def run_chat(
        self,
        messages: Optional[List[Dict]] = None,
        sender: Optional[Agent] = None,
        config: Optional[GroupChat] = None,
    ) -> Union[str, Dict, None]:
        """Run a group chat."""
        if messages is None:
            messages = self._oai_messages[sender]
        message = messages[-1]
        speaker = sender
        groupchat = config
        for i in range(groupchat.max_round):
            # set the name to speaker's name if the role is not function
            if message["role"] != "function":
                message["name"] = speaker.name
            groupchat.append(message, speaker)
            if self._is_termination_msg(message):
                # The conversation is over
                break
            # We do not want each agent to maintain their own conversation history
            # broadcast the message to all agents except the speaker
            # for agent in groupchat.agents:
            #     if agent != speaker:
            #         self.send(message, agent, request_reply=False, silent=True)
            # Pro Tip: Feel free to "send" messages to the user agent if you want to access the messages outside of autogen
            for agent in groupchat.agents:
                if agent.name == "Admin":
                    self.send(message, agent, request_reply=False, silent=True)
            if i == groupchat.max_round - 1:
                # the last round
                break
            try:
                # select the next speaker
                speaker = groupchat.select_speaker(speaker, self)
                # let the speaker speak
                # We'll now have to pass the entire conversation of messages on generate_reply
                # Commented OG code: reply = speaker.generate_reply(sender=self)
                reply = speaker.generate_reply(sender=self, messages=groupchat.messages)
            except KeyboardInterrupt:
                # let the admin agent speak if interrupted
                if groupchat.admin_name in groupchat.agent_names:
                    # admin agent is one of the participants
                    speaker = groupchat.agent_by_name(groupchat.admin_name)
                    # We'll now have to pass the entire conversation of messages on generate_reply
                    # Commented OG code: reply = speaker.generate_reply(sender=self)
                    reply = speaker.generate_reply(sender=self, messages=groupchat.messages)
                else:
                    # admin agent is not found in the participants
                    raise
            if reply is None:
                break
            # The speaker sends the message without requesting a reply
            speaker.send(reply, self, request_reply=False)
            message = self.last_message(speaker)
        return True, None
```
I've made a few small modifications to the original implementation. You should be able to follow the comments to learn more.
But there's one thing I really want to highlight. You can override the `GroupChatManager`'s `run_chat` method to plug in your own workflow engine, like Apache Airflow or Temporal. Distributed-systems practitioners know exactly how powerful this capability is!
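To sketch what that could look like (note: this is not real Airflow or Temporal code; the `WorkflowEngine` interface is invented purely for illustration), each agent turn becomes one externally orchestrated step instead of an in-process loop iteration:

```python
from typing import Callable, List


class WorkflowEngine:
    """Stand-in for an external workflow engine such as Airflow or Temporal.
    (Hypothetical interface, for illustration only.)"""

    def run_step(self, name: str, fn: Callable[[str], str], payload: str) -> str:
        # A real engine would schedule this step durably, with retries and persistence.
        return fn(payload)


def run_chat_as_pipeline(engine: WorkflowEngine, note: str, agent_steps: List[Callable[[str], str]]) -> str:
    """What an overridden run_chat could boil down to: each agent turn is one
    externally orchestrated step, so the engine owns retries and durability."""
    message = note
    for i, step in enumerate(agent_steps):
        message = engine.run_step(f"agent_step_{i}", step, message)
    return message
```

With that shape, a crashed step can be retried by the engine without re-running the whole conversation.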
We set it all up just like the previous example and watch this baby purr! 🐱
```python
import yaml

# Create our agents
user = get_user()
topic_analyzer = get_topic_analyzer(base_llm_config)
paraphrazer = get_paraphrazer(base_llm_config)
task_creator = get_tasks_creator(base_llm_config)

# Create our group chat
groupchat = CustomGroupChat(agents=[user, topic_analyzer, paraphrazer, task_creator])
manager = CustomGroupChatManager(groupchat=groupchat, llm_config=base_llm_config)

# Start the chat
user.initiate_chat(
    manager,
    clear_history=True,
    message=note,
)

# Let's print the count of tasks just for fun
chat_messages = user.chat_messages.get(manager)
if chat_messages is not None:
    for message in chat_messages:
        if message.get("name") == "Task_Creator":
            task_list = yaml.safe_load(message.get("content"))  # type: ignore
            count = len(task_list.get("tasks"))
            print(f"Got {count} tasks from Task_Creator.")
```
The output will look something like this:
```
Admin (to chat_manager):

Note:
Convo with editor:
- discuss titles and thumbnails
- discuss video editing tips tracker
  - Zeeshan presents the tracker
  - what trick helps with what
  - he decidedls if we can experiment with something new
- make sure all all videos since how config management works in k8s are backed u
- make list of YouTube thumbnail templates
- make list of YouTube idea generation limits

--------------------------------------------------------------------------------

Topic_Analyzer (to chat_manager):

Here is the list of topics discussed in the note:

1. Titles
2. Thumbnails
3. Video editing tips
4. Config management in Kubernetes (k8s)
5. YouTube thumbnail templates
6. YouTube idea generation limits

--------------------------------------------------------------------------------

Paraphrazer (to chat_manager):

Here is the rewritten note in markdown format:

## Titles
- Discuss titles and thumbnails with the editor

## Video Editing Tips Tracker
### Zeeshan presents the tracker
- What trick helps with what
- He decides if we can experiment with something new

## Config Management in Kubernetes (k8s)
- Make sure all videos since how config management works in k8s are backed up

## YouTube Thumbnail Templates
- Make a list of YouTube thumbnail templates

## YouTube Idea Generation Limits
- Make a list of YouTube idea generation limits

## Additional Info
- Discuss video editing tips tracker with Zeeshan
- Present the tracker and decide if we can experiment with something new

--------------------------------------------------------------------------------

Task_Creator (to chat_manager):

tasks:
- title: Discuss Titles and Thumbnails
  description: >-
    - Discuss titles and thumbnails with the editor

    This task involves having a conversation with the editor to discuss the
    titles and thumbnails for the videos.
- title: Discuss Video Editing Tips Tracker
  description: >-
    - Zeeshan presents the tracker
    - Discuss what trick helps with what
    - Decide if we can experiment with something new

    This task involves discussing the video editing tips tracker presented by
    Zeeshan, understanding what tricks help with what, and deciding if it's
    possible to experiment with something new.
- title: Back up All Videos Since How Config Management Works in k8s
  description: >-
    - Make sure all videos since how config management works in k8s are backed up

    This task involves ensuring that all videos related to config management in
    Kubernetes (k8s) are backed up.
- title: Create List of YouTube Thumbnail Templates
  description: >-
    - Make list of YouTube thumbnail templates

    This task involves creating a list of YouTube thumbnail templates.
- title: Create List of YouTube Idea Generation Limits
  description: >-
    - Make list of YouTube idea generation limits

    This task involves creating a list of YouTube idea generation limits.

--------------------------------------------------------------------------------

Got 5 tasks from Task_Creator.
```
Yep. Welcome to the era of AI giving us hoomans work to do! (Where did it all go wrong? 🤷‍♂️)
Building generative-AI-driven applications is hard. But it can be done with the right tools. To recap:
As next steps, you can check out the following resources to dive deeper into the world of AI agents: