This blog post details how to build a GraphRAG agent using the Neo4j graph database and the Milvus vector database. This agent combines the power of graph databases and vector search to provide accurate and relevant answers to user queries. In this example, we'll use LangGraph, Llama 3.1 8B with Ollama, and GPT-4o.
Traditional Retrieval-Augmented Generation (RAG) systems rely solely on vector databases to retrieve relevant documents.
Our agent follows three key concepts: routing, fallback mechanisms, and self-correction. These principles are implemented through a series of LangGraph components:
Beyond these, we have other components, such as:
The architecture of our GraphRAG agent can be visualized as a workflow with several connected nodes:
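The routing, fallback, and self-correction ideas above can be sketched in plain Python. This is a minimal illustration only, independent of the actual LangGraph implementation; the node names, keyword heuristic, and stub searchers below are invented for the sketch:

```python
# Minimal sketch of routing with a fallback, independent of LangGraph.
# The keyword heuristic, strategy names, and stub searchers are
# illustrative assumptions, not the agent's real components.

def route_question(question: str) -> str:
    """Pick a retrieval strategy for the incoming question."""
    graph_keywords = {"author", "cites", "related", "paper"}
    if any(kw in question.lower() for kw in graph_keywords):
        return "graph_search"
    return "vector_search"

def run_with_fallback(question: str) -> str:
    """Try the routed strategy first; fall back to the other one on failure."""
    primary = route_question(question)
    secondary = "vector_search" if primary == "graph_search" else "graph_search"
    for strategy in (primary, secondary):
        result = SEARCHERS[strategy](question)
        if result:  # an empty result triggers the fallback path
            return result
    return "No answer found."

# Stub searchers standing in for the Milvus / Neo4j retrievers.
SEARCHERS = {
    "graph_search": lambda q: "",  # pretend the graph query came back empty
    "vector_search": lambda q: "vector hit for: " + q,
}

print(run_with_fallback("What paper talks about Multi-Agent?"))
```

In the real agent, LangGraph's conditional edges play the role of `route_question`, and a dedicated fallback node replaces the simple retry loop.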
To showcase the capabilities of our LLM agents, let's look at two different components: Graph Generation and Composite Agent.
While the complete code is available at the bottom of this post, these snippets will give you a better understanding of how these agents work within the LangChain framework.
This component is designed to improve the question-answering process using the capabilities of Neo4j. It answers questions using the knowledge embedded in the Neo4j graph database. Here's how it works:
- **GraphCypherQAChain** – Allows the LLM to interact with the Neo4j graph database. It uses the LLM in two ways:
  - **cypher_llm** – This LLM instance is responsible for generating Cypher queries to extract relevant information from the graph based on the user's question.
  - **qa_llm** – This LLM instance turns the query results into a natural-language answer.
- **Validation** – Ensures that the generated Cypher queries are checked for syntactic correctness.
- **Context retrieval** – Validated queries are executed against the Neo4j graph to retrieve the necessary context.
- **Answer generation** – The language model uses the retrieved context to generate an answer to the user's question.
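To give a feel for the validation step, here is a toy syntactic checker. This is only a hedged sketch; the real `validate_cypher` option in LangChain performs richer checks (such as correcting relationship directions), and the function below is invented for illustration:

```python
# Toy syntactic check for a generated Cypher query.
# Illustrative only: GraphCypherQAChain's validate_cypher option
# performs deeper validation than this sketch.

def looks_like_valid_cypher(query: str) -> bool:
    """Cheap sanity checks: known opening keyword, balanced brackets, RETURN clause."""
    q = query.strip()
    starts_ok = q.upper().startswith(("MATCH", "CALL", "WITH", "RETURN"))
    balanced = all(q.count(o) == q.count(c) for o, c in ["()", "[]", "{}"])
    return starts_ok and balanced and "RETURN" in q.upper()

good = 'MATCH (p:Paper) WHERE toLower(p.title) CONTAINS "agent" RETURN p.title'
bad = 'MATCH (p:Paper RETURN p.title'  # unbalanced parenthesis
print(looks_like_valid_cypher(good), looks_like_valid_cypher(bad))
```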
```python
### Generate Cypher Query
llm = ChatOllama(model=local_llm, temperature=0)

# Chain
graph_rag_chain = GraphCypherQAChain.from_llm(
    cypher_llm=llm,
    qa_llm=llm,
    validate_cypher=True,
    graph=graph,
    verbose=True,
    return_intermediate_steps=True,
    return_direct=True,
)

# Run
question = "agent memory"
generation = graph_rag_chain.invoke({"query": question})
```
This component allows the RAG system to tap into Neo4j, helping it provide more comprehensive and accurate answers.
This is where the magic happens: our agent can combine results from Milvus and Neo4j, enabling a better understanding of the information and leading to more accurate and nuanced answers. Here's how it works:
```python
### Composite Vector + Graph Generations
cypher_prompt = PromptTemplate(
    template="""You are an expert at generating Cypher queries for Neo4j.
Use the following schema to generate a Cypher query that answers the given question.
Make the query flexible by using case-insensitive matching and partial string matching where appropriate.
Focus on searching paper titles as they contain the most relevant information.

Schema:
{schema}

Question: {question}

Cypher Query:""",
    input_variables=["schema", "question"],
)

# QA prompt
qa_prompt = PromptTemplate(
    template="""You are an assistant for question-answering tasks.
Use the following Cypher query results to answer the question.
If you don't know the answer, just say that you don't know.
Use three sentences maximum and keep the answer concise.
If topic information is not available, focus on the paper titles.

Question: {question}
Cypher Query: {query}
Query Results: {context}

Answer:""",
    input_variables=["question", "query", "context"],
)

llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Chain
graph_rag_chain = GraphCypherQAChain.from_llm(
    cypher_llm=llm,
    qa_llm=llm,
    validate_cypher=True,
    graph=graph,
    verbose=True,
    return_intermediate_steps=True,
    return_direct=True,
    cypher_prompt=cypher_prompt,
    qa_prompt=qa_prompt,
)
```
Let's take a look at our search results, combining the strengths of graph and vector databases to improve our research-paper discovery.
We start with our graph search using Neo4j:
```python
# Example input data
question = "What paper talks about Multi-Agent?"
generation = graph_rag_chain.invoke({"query": question})
print(generation)
```
```
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (p:Paper)
WHERE toLower(p.title) CONTAINS toLower("Multi-Agent")
RETURN p.title AS PaperTitle, p.summary AS Summary, p.url AS URL

> Finished chain.
{'query': 'What paper talks about Multi-Agent?', 'result': [{'PaperTitle': 'Collaborative Multi-Agent, Multi-Reasoning-Path (CoMM) Prompting Framework', 'Summary': 'In this work, we aim to push the upper bound of the reasoning capability of LLMs by proposing a collaborative multi-agent, multi-reasoning-path (CoMM) prompting framework. Specifically, we prompt LLMs to play different roles in a problem-solving team, and encourage different role-play agents to collaboratively solve the target task. In particular, we discover that applying different reasoning paths for different roles is an effective strategy to implement few-shot prompting approaches in the multi-agent scenarios. Empirical results demonstrate the effectiveness of the proposed methods on two college-level science problems over competitive baselines. Our further analysis shows the necessity of prompting LLMs to play different roles or experts independently.', 'URL': 'https://github.com/amazon-science/comm-prompt'}]}
```
Graph search excels at finding relationships and metadata. It can quickly identify papers based on titles, authors, or predefined categories, providing a structured view of the data.
Next, we turn to our vector search for a different perspective:
```python
# Example input data
question = "What paper talks about Multi-Agent?"

# Get vector + graph answers
docs = retriever.invoke(question)
vector_context = rag_chain.invoke({"context": docs, "question": question})
```
> The paper discusses "Adaptive In-conversation Team Building for Language Model Agents" and talks about Multi-Agent. It presents a new adaptive team-building paradigm that offers a flexible solution for building teams of LLM agents to solve complex tasks effectively. The approach, called Captain Agent, dynamically forms and manages teams for each step of the task-solving process, utilizing nested group conversations and reflection to ensure diverse expertise and prevent stereotypical outputs.
Vector search shines at understanding context and semantic similarity. It can surface papers that are conceptually related to the query, even if they don't explicitly contain the search terms.
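The semantic matching behind vector search can be sketched with plain cosine similarity. The 3-dimensional "embeddings" below are invented stand-ins for what a real embedding model would produce; Milvus performs this comparison at scale over high-dimensional vectors:

```python
import math

# Toy illustration of semantic retrieval. The embeddings are invented,
# 3-dimensional stand-ins for real embedding-model output.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

papers = {
    "Adaptive In-conversation Team Building": [0.9, 0.8, 0.1],
    "A Survey of Image Segmentation":         [0.1, 0.2, 0.9],
}
query_embedding = [0.85, 0.75, 0.15]  # imagined embedding of "multi-agent teams"

best = max(papers, key=lambda title: cosine(query_embedding, papers[title]))
print(best)
```

The nearest neighbor wins even though the query string never appears in the title, which is exactly the behavior described above.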
Finally, we combine both search methods:
This is a vital part of our RAG agent, making it possible to use both vector and graph databases.
```python
composite_chain = prompt | llm | StrOutputParser()
answer = composite_chain.invoke({
    "question": question,
    "context": vector_context,
    "graph_context": graph_context,
})
print(answer)
```
> The paper "Collaborative Multi-Agent, Multi-Reasoning-Path (CoMM) Prompting Framework" talks about Multi-Agent. It proposes a framework that prompts LLMs to play different roles in a problem-solving team and encourages different role-play agents to collaboratively solve the target task. The paper presents empirical results demonstrating the effectiveness of the proposed methods on two college-level science problems.
By integrating graph and vector search, we leverage the strengths of both approaches. Graph search provides precision and navigates structured relationships, while vector search adds depth through semantic understanding.
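One way to picture the combination step is deduplicating and merging candidates from both retrievers before the LLM writes the final answer. This is a simplified sketch (the actual agent merges the two contexts inside the composite prompt instead); the example titles are taken from the outputs shown above:

```python
# Simplified sketch of merging graph and vector results before answering.
# The real agent combines contexts in a prompt; this shows the idea.

def merge_results(graph_hits: list[str], vector_hits: list[str]) -> list[str]:
    """Graph hits first (structured precision), then novel vector hits (semantic recall)."""
    seen = set()
    merged = []
    for title in graph_hits + vector_hits:
        if title not in seen:
            seen.add(title)
            merged.append(title)
    return merged

graph_hits = ["Collaborative Multi-Agent, Multi-Reasoning-Path (CoMM) Prompting Framework"]
vector_hits = [
    "Adaptive In-conversation Team Building for Language Model Agents",
    "Collaborative Multi-Agent, Multi-Reasoning-Path (CoMM) Prompting Framework",
]
print(merge_results(graph_hits, vector_hits))
```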
This combined method offers several advantages:
In this blog post, we showed how to build a GraphRAG agent using Neo4j and Milvus. By combining the strengths of graph databases and vector search, this agent provides accurate and relevant answers to user queries.
Our RAG agent's architecture, with its dedicated routing, fallback mechanisms, and self-correction capabilities, makes it robust and reliable. The Graph Generation and Composite Agent component examples show how this agent can tap into both vector and graph databases to provide comprehensive and nuanced answers.
We hope this guide was useful and inspires you to explore the possibilities of combining graph databases and vector search in your own projects.
The full code is available on GitHub.
To learn more about this topic, join us at NODES 2024 on November 7, our free virtual developer conference on intelligent apps, knowledge graphs, and AI. Register now!