How to Build an AI Agent for Research Paper Retrieval, Search, and Summarization

For researchers, keeping up with the flood of new publications can feel like hunting for gold in a haystack. Imagine an AI-powered assistant that not only retrieves the most relevant papers but also summarizes key insights and answers your specific questions, all in near real time.

This article walks through building such a research assistant using Superlinked's document embeddings. By combining semantic and temporal relevance, we avoid complex re-ranking while keeping retrieval fast and accurate.

TL;DR: Build an agentic research assistant that retrieves, summarizes, and answers questions about research papers, using Superlinked's semantic and recency embeddings in place of a costly re-ranking step.

(Want to jump straight into the code? Check out the open source on GitHub. Want to try semantic search for your own agentic use case? We're happy to help.)

The article below shows how to build an agentic system that uses a Kernel Agent to orchestrate tasks. If you want to run and follow along with the code in your browser, use the accompanying Colab notebook.

Why build an agentic research assistant?

Search systems typically retrieve a large set of documents loosely ranked by relevance and then rely on a secondary re-ranking stage to refine and reorder the results. While re-ranking improves precision, it adds significant computational complexity, latency, and overhead, because a large amount of data has to be retrieved up front. Superlinked addresses this by combining numeric and categorical embeddings with semantic text embeddings, producing richer vector representations that capture relevance in a single search pass.

Building an agentic system with Superlinked

The AI agent can do three main things:

- Find Papers: Search for research papers by topic (e.g. "quantum computing") and rank them by both relevance and recency.
- Summarize Papers: Condense the retrieved papers into digestible insights.
- Answer Questions: Extract answers directly from specific research papers based on targeted user queries.

Superlinked removes the need for a re-ranking stage by improving the relevance of the vector search itself. Its RecencySpace encodes temporal metadata, prioritizing recent documents at retrieval time. For example, if two papers have the same content relevance, the more recent one ranks higher.

Step 1: Setting up the toolbox

```
%pip install superlinked
```
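The walkthrough below also imports a few companion libraries (pandas, sentence-transformers, openai, tqdm). Colab usually ships most of these already, but if you are running elsewhere, installing them first is a reasonable assumption:

```
%pip install pandas sentence-transformers openai tqdm
```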
To keep things simple and modular, we first define an abstract Tool class. This streamlines the process of creating and adding new tools later on.

```python
import pandas as pd
import superlinked.framework as sl
from datetime import timedelta
from sentence_transformers import SentenceTransformer
from openai import OpenAI
import os
from abc import ABC, abstractmethod
from typing import Any, Optional, Dict
from tqdm import tqdm
from google.colab import userdata

# Abstract Tool Class
class Tool(ABC):
    @abstractmethod
    def name(self) -> str:
        pass

    @abstractmethod
    def description(self) -> str:
        pass

    @abstractmethod
    def use(self, *args, **kwargs) -> Any:
        pass

# Get the API key from Google Colab secrets, falling back to an environment variable
try:
    api_key = userdata.get('OPENAI_API_KEY')
except KeyError:
    api_key = os.environ.get("OPENAI_API_KEY")

if not api_key:
    raise ValueError(
        "OPENAI_API_KEY not found. Add it via Tools > User secrets in Colab, "
        "or set the OPENAI_API_KEY environment variable."
    )

# Initialize the OpenAI client
client = OpenAI(api_key=api_key)
model = "gpt-4"
```

Step 2: Understanding the Dataset

This article uses a dataset of roughly 10,000 AI research papers from arXiv. To make this easy, just run the cell below; it downloads the dataset into your working directory automatically. You can also use your own data, such as research papers or other scientific documents. If you do, all you need to change is the schema definition and the column mappings.

```python
import pandas as pd

!wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=1FCR3TW5yLjGhEmm-Uclw0_5PWVEaLk1j' -O arxiv_ai_data.csv
```

For now, to keep things manageable, we work with a smaller subset of the papers so the examples run quickly, but feel free to repeat the exercise with the full dataset.

One relevant technical detail: the timestamps in the dataset are converted from string timestamps (such as '1993-08-01 00:00:00+00:00') into pandas datetime objects. This conversion is needed so that we can perform date/time operations on them.

```python
df = pd.read_csv('arxiv_ai_data.csv').head(100)

# Convert to datetime but keep it as datetime (more readable and usable)
df['published'] = pd.to_datetime(df['published'])

# Ensure summary is a string
df['summary'] = df['summary'].astype(str)

# Add 'text' column for similarity search
df['text'] = df['title'] + " " + df['summary']
```

```
Debug: Columns in original DataFrame: ['authors', 'categories', 'comment', 'doi', 'entry_id', 'journal_ref', 'pdf_url', 'primary_category', 'published', 'summary', 'title', 'updated']
```

Understanding the Dataset Columns

Here is a quick overview of the key columns in our dataset, which will matter in the steps ahead:

- published: The publication date of the paper.
- summary: The paper's abstract, providing a concise overview.
- title: The title of the paper.
- entry_id: The unique arXiv identifier for each paper.

For this demonstration we rely on exactly four columns: entry_id, published, title, and summary. To improve retrieval quality, the title and summary are concatenated into a single text column, which is what we embed and search over.
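Before defining the schema, a quick sanity check of the columns we depend on can save debugging later. This is an optional, minimal sketch, under the assumption that your data matches the columns described above:

```python
# Optional sanity check: confirm the four columns we rely on exist and look sane.
required_columns = ['entry_id', 'published', 'title', 'summary']
missing = [c for c in required_columns if c not in df.columns]
assert not missing, f"Missing expected columns: {missing}"

print(df[required_columns].head(3))  # peek at a few rows
print("Date range:", df['published'].min(), "to", df['published'].max())
print("Empty summaries:", (df['summary'].str.strip() == "").sum())
```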
A Note on Superlinked's In-Memory Indexer: Superlinked's in-memory indexing stores our dataset directly in RAM, which makes retrieval extremely fast and is ideal for real-time querying and rapid prototyping. For this proof-of-concept with 1,000 research papers, in-memory processing noticeably speeds up queries by removing the delays that come with disk access.

Step 3: Defining the Superlinked Schema

Next we need a schema to map our data. We define a PaperSchema with the following fields:

```python
class PaperSchema(sl.Schema):
    text: sl.String
    published: sl.Timestamp  # This will handle datetime objects properly
    entry_id: sl.IdField
    title: sl.String
    summary: sl.String

paper = PaperSchema()
```

Defining Superlinked Spaces for Effective Retrieval

A key step in organizing and querying our dataset effectively is defining two specialized vector spaces: TextSimilaritySpace and RecencySpace.

TextSimilaritySpace

As the name suggests, the TextSimilaritySpace encodes textual information, in our case the combined titles and abstracts of the papers, into vectors. By turning the text into embeddings, this space makes searching across titles and summaries far more effective, and chunking lets it handle longer text sequences efficiently.

```python
text_space = sl.TextSimilaritySpace(
    text=sl.chunk(paper.text, chunk_size=200, chunk_overlap=50),
    model="sentence-transformers/all-mpnet-base-v2"
)
```

RecencySpace

The RecencySpace captures temporal metadata, reflecting how recent each publication is. By encoding the timestamps, this space assigns greater weight to newer documents. As a result, retrieval naturally balances content relevance with publication date, favoring more recent papers.

```python
recency_space = sl.RecencySpace(
    timestamp=paper.published,
    period_time_list=[
        sl.PeriodTime(timedelta(days=365)),    # papers within 1 year
        sl.PeriodTime(timedelta(days=2*365)),  # papers within 2 years
        sl.PeriodTime(timedelta(days=3*365)),  # papers within 3 years
    ],
    negative_filter=-0.25
)
```

Think of the RecencySpace as a time-based filter, like sorting your email by date or seeing the newest Instagram posts first. It helps answer the question "How recent is this paper?". Smaller timedeltas (such as 365 days) allow finer-grained, year-by-year rankings, while larger timedeltas (such as 1095 days) create broader time buckets.

To make this concrete, consider the following example, where two papers have identical content relevance but their ranking depends on their publication dates. The negative_filter penalizes papers older than the longest configured period; a small illustrative calculation follows at the end of this step.

```
Paper A: Published in 1996
Paper B: Published in 1993

Scoring example:
- Text similarity score: Both papers get 0.8
- Recency score:
    - Paper A: Receives the full recency boost (1.0)
    - Paper B: Gets penalized (-0.25 due to negative_filter)

Final combined scores:
- Paper A: Higher final rank
- Paper B: Lower final rank
```

Together, these spaces make the dataset easier to organize and more efficient to query. They enable both content-based and time-based retrieval, and strike a useful balance between the relevance and the recency of research papers, giving us a powerful way to organize and search the dataset by both content and publication time.
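To build intuition for the Paper A / Paper B example above, here is a small, purely illustrative calculation of how a weighted combination of a text-similarity score and a recency score could play out. The formula and weights are assumptions for illustration only, not Superlinked's internal scoring implementation.

```python
# Illustrative only: a toy weighted combination of similarity and recency scores.
# The real scoring happens inside Superlinked; this just mirrors the example above.
def combined_score(text_similarity: float, recency: float,
                   relevance_weight: float = 1.0, recency_weight: float = 0.5) -> float:
    return relevance_weight * text_similarity + recency_weight * recency

paper_a = combined_score(text_similarity=0.8, recency=1.0)    # recent paper, full boost
paper_b = combined_score(text_similarity=0.8, recency=-0.25)  # old paper, negative_filter penalty

print(f"Paper A: {paper_a:.2f}")  # 1.30 -> ranks higher
print(f"Paper B: {paper_b:.2f}")  # 0.68 -> ranks lower
```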
Step 4: Building the Index

Next, we combine the spaces into an index, the core of the search engine:

```python
paper_index = sl.Index([text_space, recency_space])
```

Then the DataFrame is mapped to the schema and loaded into the in-memory store in batches (10 papers at a time):

```python
# Parser to map DataFrame columns to schema fields
parser = sl.DataFrameParser(
    paper,
    mapping={
        paper.entry_id: "entry_id",
        paper.published: "published",
        paper.text: "text",
        paper.title: "title",
        paper.summary: "summary",
    }
)

# Set up in-memory source and executor
source = sl.InMemorySource(paper, parser=parser)
executor = sl.InMemoryExecutor(sources=[source], indices=[paper_index])
app = executor.run()

# Load the DataFrame with a progress bar using batches
batch_size = 10
data_batches = [df[i:i + batch_size] for i in range(0, len(df), batch_size)]
for batch in tqdm(data_batches, total=len(data_batches), desc="Loading Data into Source"):
    source.put([batch])
```

This is where Superlinked's in-memory executor pays off: the 1,000 papers sit comfortably in RAM, so queries run without disk I/O bottlenecks.

Step 5: Crafting the Query

Next comes the query. To serve the retrieval use case, we need a query template that balances relevance against recency. Here is what it looks like:

```python
# Define the query
knowledgebase_query = (
    sl.Query(
        paper_index,
        weights={
            text_space: sl.Param("relevance_weight"),
            recency_space: sl.Param("recency_weight"),
        }
    )
    .find(paper)
    .similar(text_space, sl.Param("search_query"))
    .select(paper.entry_id, paper.published, paper.text, paper.title, paper.summary)
    .limit(sl.Param("limit"))
)
```

With this in place, we can choose whether to emphasize relevance (relevance_weight) or recency (recency_weight)—a combination that works very well for our agent's needs.

Step 6: Building the Tools

Now for the tooling part. We will build three tools.

Retrieval Tool: This tool is wired to Superlinked's index, allowing it to pull the top 5 papers for a given query. It balances relevance (weight 1.0) and recency (weight 0.5) to satisfy the "find papers" goal. What we want is a list of papers related to the question. So if the question is "What quantum computing papers were published between 1993 and 1994?", the Retrieval Tool fetches those papers, lists them one by one, and presents the results.

```python
class RetrievalTool(Tool):
    def __init__(self, df, app, knowledgebase_query, client, model):
        self.df = df
        self.app = app
        self.knowledgebase_query = knowledgebase_query
        self.client = client
        self.model = model

    def name(self) -> str:
        return "RetrievalTool"

    def description(self) -> str:
        return "Retrieves a list of relevant papers based on a query using Superlinked."

    def use(self, query: str) -> pd.DataFrame:
        result = self.app.query(
            self.knowledgebase_query,
            relevance_weight=1.0,
            recency_weight=0.5,
            search_query=query,
            limit=5
        )
        df_result = sl.PandasConverter.to_pandas(result)
        # Ensure summary is a string
        if 'summary' in df_result.columns:
            df_result['summary'] = df_result['summary'].astype(str)
        else:
            print("Warning: 'summary' column not found in retrieved DataFrame.")
        return df_result
```
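As a quick check, the tool can also be exercised on its own, outside the agent. A minimal sketch, assuming df, app, knowledgebase_query, client, and model exist from the previous steps and that the result contains the selected published and title columns:

```python
# Illustrative standalone use of the RetrievalTool (the agent wires this up later).
retrieval_tool = RetrievalTool(df, app, knowledgebase_query, client, model)

top_papers = retrieval_tool.use("quantum computing")
for _, row in top_papers.iterrows():
    print(row['published'], "-", row['title'])
```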
Next up is the Summarization Tool. This tool is designed for cases where a concise summary of specific papers is needed. To use it, a paper_id has to be provided—the ID of the paper in question. If no paper_id is supplied, the tool cannot run, because these IDs are required to look up the corresponding papers in the dataset.

```python
class SummarizationTool(Tool):
    def __init__(self, df, client, model):
        self.df = df
        self.client = client
        self.model = model

    def name(self) -> str:
        return "SummarizationTool"

    def description(self) -> str:
        return "Generates a concise summary of specified papers using an LLM."

    def use(self, query: str, paper_ids: list) -> str:
        papers = self.df[self.df['entry_id'].isin(paper_ids)]
        if papers.empty:
            return "No papers found with the given IDs."
        summaries = papers['summary'].tolist()
        summary_str = "\n\n".join(summaries)
        prompt = f"""
        Summarize the following paper summaries:\n\n{summary_str}\n\nProvide a concise summary.
        """
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
            max_tokens=500
        )
        return response.choices[0].message.content.strip()
```

Finally, we have the QuestionAnsweringTool. This tool chains in the RetrievalTool to fetch relevant papers and uses them to answer the user's question. If no relevant papers are available as context, it answers from general knowledge instead.

```python
class QuestionAnsweringTool(Tool):
    def __init__(self, retrieval_tool, client, model):
        self.retrieval_tool = retrieval_tool
        self.client = client
        self.model = model

    def name(self) -> str:
        return "QuestionAnsweringTool"

    def description(self) -> str:
        return "Answers questions about research topics using retrieved paper summaries or general knowledge if no specific context is available."

    def use(self, query: str) -> str:
        df_result = self.retrieval_tool.use(query)
        if 'summary' not in df_result.columns:
            # Tag as a general question if summary is missing
            prompt = f"""
            You are a knowledgeable research assistant. This is a general question tagged as [GENERAL].
            Answer based on your broad knowledge, not limited to specific paper summaries.
            If you don't know the answer, provide a brief explanation of why.

            User's question: {query}
            """
        else:
            # Use paper summaries for specific context
            contexts = df_result['summary'].tolist()
            context_str = "\n\n".join(contexts)
            prompt = f"""
            You are a research assistant. Use the following paper summaries to answer the user's question.
            If you don't know the answer based on the summaries, say 'I don't know.'

            Paper summaries:
            {context_str}

            User's question: {query}
            """
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
            max_tokens=500
        )
        return response.choices[0].message.content.strip()
```
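Because all three tools implement the same abstract Tool interface, they can be handled uniformly—for instance, to print an overview of what the system can do. A minimal sketch; the same instantiation appears again when wiring up the Kernel Agent in the next step:

```python
# Illustrative: the shared Tool interface lets us treat the tools uniformly.
retrieval_tool = RetrievalTool(df, app, knowledgebase_query, client, model)
summarization_tool = SummarizationTool(df, client, model)
question_answering_tool = QuestionAnsweringTool(retrieval_tool, client, model)

for tool in (retrieval_tool, summarization_tool, question_answering_tool):
    print(f"{tool.name()}: {tool.description()}")
```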
Step 7: Building the Kernel Agent

Next comes the Kernel Agent. The Kernel Agent acts as the central controller and keeps the system running smoothly and efficiently. As the core component, it manages communication by routing each query to the right place; in multi-agent setups it does this across several agents running at once, while in a single-agent system like this one it simply calls the appropriate tool directly.

```python
class KernelAgent:
    def __init__(self, retrieval_tool: RetrievalTool, summarization_tool: SummarizationTool,
                 question_answering_tool: QuestionAnsweringTool, client, model):
        self.retrieval_tool = retrieval_tool
        self.summarization_tool = summarization_tool
        self.question_answering_tool = question_answering_tool
        self.client = client
        self.model = model

    def classify_query(self, query: str) -> str:
        prompt = f"""
        Classify the following user prompt into one of the three categories:
        - retrieval: The user wants to find a list of papers based on some criteria (e.g., 'Find papers on AI ethics from 2020').
        - summarization: The user wants to summarize a list of papers (e.g., 'Summarize papers with entry_id 123, 456, 789').
        - question_answering: The user wants to ask a question about research topics and get an answer (e.g., 'What is the latest development in AI ethics?').

        User prompt: {query}

        Respond with only the category name (retrieval, summarization, question_answering).
        If unsure, respond with 'unknown'.
        """
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
            max_tokens=10
        )
        classification = response.choices[0].message.content.strip().lower()
        print(f"Query type: {classification}")
        return classification

    def process_query(self, query: str, params: Optional[Dict] = None) -> str:
        query_type = self.classify_query(query)

        if query_type == 'retrieval':
            df_result = self.retrieval_tool.use(query)
            response = "Here are the top papers:\n"
            for i, row in df_result.iterrows():
                # Ensure summary is a string and handle empty cases
                summary = str(row['summary']) if pd.notna(row['summary']) else ""
                response += f"{i+1}. {row['title']} \nSummary: {summary[:200]}...\n\n"
            return response

        elif query_type == 'summarization':
            if not params or 'paper_ids' not in params:
                return "Error: Summarization query requires a 'paper_ids' parameter with a list of entry_ids."
            return self.summarization_tool.use(query, params['paper_ids'])

        elif query_type == 'question_answering':
            return self.question_answering_tool.use(query)

        else:
            return "Error: Unable to classify query as 'retrieval', 'summarization', or 'question_answering'."
```

At this point, every component of the Research Agent System has been defined. The system can now be initialized by handing the Kernel Agent its tools, after which the Research Agent System is fully operational.

```python
retrieval_tool = RetrievalTool(df, app, knowledgebase_query, client, model)
summarization_tool = SummarizationTool(df, client, model)
question_answering_tool = QuestionAnsweringTool(retrieval_tool, client, model)

# Initialize KernelAgent
kernel_agent = KernelAgent(retrieval_tool, summarization_tool, question_answering_tool, client, model)
```

Now let's try it out.

```python
# Test query
print(kernel_agent.process_query("Find papers on quantum computing in last 10 years"))
```

This query triggers the RetrievalTool. It fetches papers ranked by both relevance and recency and returns the relevant ones. Because the retrieved results include paper summaries (showing that the papers exist in the dataset), those summaries are returned as well.

```
Query type: retrieval
Here are the top papers:
1. Quantum Computing and Phase Transitions in Combinatorial Search
Summary: We introduce an algorithm for combinatorial search on quantum computers that is capable of significantly concentrating amplitude into solutions for some NP search problems, on average. This is done by...

2. The Road to Quantum Artificial Intelligence
Summary: This paper overviews the basic principles and recent advances in the emerging field of Quantum Computation (QC), highlighting its potential application to Artificial Intelligence (AI). The paper provi...

3. Solving Highly Constrained Search Problems with Quantum Computers
Summary: A previously developed quantum search algorithm for solving 1-SAT problems in a single step is generalized to apply to a range of highly constrained k-SAT problems. We identify a bound on the number o...

4. The model of quantum evolution
Summary: This paper has been withdrawn by the author due to extremely unscientific errors....

5. Artificial and Biological Intelligence
Summary: This article considers evidence from physical and biological sciences to show machines are deficient compared to biological systems at incorporating intelligence. Machines fall short on two counts: fi...
```
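The question-answering route works the same way: the Kernel Agent classifies the prompt and dispatches it to the QuestionAnsweringTool, which grounds its answer in the retrieved summaries. A minimal sketch—the question is just an illustrative example, and the output will depend on your dataset and the LLM:

```python
# Illustrative question-answering query routed through the Kernel Agent.
print(kernel_agent.process_query("What approaches have been proposed for quantum combinatorial search?"))
```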
Let's try one more query, this time a summarization query:

```python
print(kernel_agent.process_query("Summarize this paper", params={"paper_ids": ["http://arxiv.org/abs/cs/9311101v1"]}))
```

```
Query type: summarization
This paper discusses the challenges of learning logic programs that contain the cut predicate (!). Traditional learning methods cannot handle clauses with cut because it has a procedural meaning. The proposed approach is to first generate a candidate base program that covers positive examples, and then make it consistent by inserting cut where needed. Learning programs with cut is difficult due to the need for intensional evaluation, and current induction techniques may need to be limited to purely declarative logic languages.
```

Hopefully this walkthrough makes it easier for you to build AI agents and agent-based systems. Much of the retrieval functionality shown here is powered by Superlinked, so bookmark this notebook for the next time you need accurate retrieval capabilities in your AI agents!

Resources: Notebook | Code

Key takeaways:

- Combining semantic and temporal relevance removes the need for complex re-ranking while preserving search accuracy for research papers.
- Time-based penalties (negative_filter=-0.25) down-rank older work, so newer research wins when papers are otherwise equally relevant.
- The modular, tool-based architecture keeps specialized tasks (retrieval, summarization, question answering) separate and the system maintainable.
- Loading data in small batches (batch_size=10) with progress tracking keeps the system stable when processing large collections of research papers.
- Adjustable query weights let users balance relevance (1.0) against recency (0.5) to match their specific research needs.
- The question-answering component falls back gracefully to general knowledge when paper-specific context is unavailable, avoiding dead ends for the user.

Keeping up with the ever-growing body of research papers can be challenging and time-consuming. An agentic AI assistant workflow that retrieves relevant research, summarizes key insights, and answers specific questions about papers can streamline this process significantly.

Contributors: Vipul Maheshwari, author; Filip Makraduli, author