Using MinIO to Build a Retrieval Augmented Generation Chat Application

5,680 reads


by MinIO · 21m · 2024/09/18

TL;DR

Building a production-grade RAG application requires a suitable data infrastructure to store, version, process, evaluate, and query the chunks of data that make up your proprietary corpus.


It has often been said that in the age of AI, data is your moat. To that end, building a production-grade RAG application demands a suitable data infrastructure to store, version, process, evaluate, and query the chunks of data that make up your proprietary corpus. Since MinIO takes a data-first approach to AI, our default initial infrastructure recommendation for a project of this type is to set up a Modern Data Lake (MinIO) and a vector database. While other ancillary tools may need to be plugged in along the way, these two infrastructure units are foundational. They will serve as the center of gravity for nearly every task you subsequently encounter in getting your RAG application into production.


But you are in a conundrum. You have heard of these terms LLM and RAG before, but beyond that you have not ventured much into the unknown. Wouldn't it be nice if there were a "Hello World" or boilerplate app that could help you get started?


Don't worry, I was in the same boat. So in this blog we will demonstrate how to use MinIO to build a Retrieval Augmented Generation (RAG) based chat application using commodity hardware.


  • Use MinIO to store all the documents, processed chunks, and the embeddings using the vector database.

  • Use MinIO's bucket notification feature to trigger events when documents are added to or removed from a bucket

  • A webhook that consumes the event, processes the documents with Langchain, and saves the metadata and chunked documents to a metadata bucket

  • Trigger MinIO bucket notification events for newly added or removed chunked documents

  • A webhook that consumes the events, generates embeddings, and saves them to the Vector Database (LanceDB) persisted in MinIO


Key Tools Used

  • MinIO - Object store to persist all the data
  • LanceDB - Serverless open-source vector database that persists data in object storage
  • Ollama - To run the LLM and embedding model locally (OpenAI API compatible)
  • Gradio - Interface through which to interact with the RAG application
  • FastAPI - Server for the webhooks that receive bucket notifications from MinIO and that exposes the Gradio app
  • LangChain & Unstructured - To extract useful text from our documents and chunk it for embedding


Models Used

  • LLM - Phi-3-128K (3.8B parameters)
  • Embeddings - Nomic Embed Text v1.5 (Matryoshka embeddings / 768 dim, 8K context)
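Nomic Embed Text v1.5 produces Matryoshka embeddings, meaning the full vector can be cut down to a leading prefix (768 dimensions in this post) and still remain usable for search. A minimal sketch of that truncation (`truncate_embedding` is an illustrative helper of mine, not part of the pipeline):

```python
import numpy as np

def truncate_embedding(vec, dim=768):
    # Matryoshka-style embeddings concentrate most of their signal in the
    # leading dimensions, so a simple prefix slice yields a smaller,
    # still-useful vector.
    return np.asarray(vec, dtype=np.float16)[:dim]

full = np.random.rand(1024)       # stand-in for a full-size embedding
small = truncate_embedding(full)  # the 768-dim prefix used in this post
print(small.shape)                # (768,)
```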

Start the MinIO Server

You can download the binary from here if you don't already have it


```shell
# Run MinIO detached
!minio server ~/dev/data --console-address :9090 &
```


Start the Ollama Server + Download the LLM & Embedding Model

Download Ollama from here


```shell
# Start the Server
!ollama serve
```


```shell
# Download Phi-3 LLM
!ollama pull phi3:3.8b-mini-128k-instruct-q8_0
```


```shell
# Download Nomic Embed Text v1.5
!ollama pull nomic-embed-text:v1.5
```


```shell
# List All the Models
!ollama ls
```


Create a Basic Gradio App Using FastAPI to Test the Model

```python
LLM_MODEL = "phi3:3.8b-mini-128k-instruct-q8_0"
EMBEDDING_MODEL = "nomic-embed-text:v1.5"
LLM_ENDPOINT = "http://localhost:11434/api/chat"
CHAT_API_PATH = "/chat"

def llm_chat(user_question, history):
    history = history or []
    user_message = f"**You**: {user_question}"
    llm_resp = requests.post(LLM_ENDPOINT,
                             json={"model": LLM_MODEL,
                                   "keep_alive": "48h",  # Keep the model in-memory for 48 hours
                                   "messages": [
                                       {"role": "user",
                                        "content": user_question}
                                   ]},
                             stream=True)
    bot_response = "**AI:** "
    for resp in llm_resp.iter_lines():
        json_data = json.loads(resp)
        bot_response += json_data["message"]["content"]
        yield bot_response
```


```python
import json
import gradio as gr
import requests
from fastapi import FastAPI, Request, BackgroundTasks
from pydantic import BaseModel
import uvicorn
import nest_asyncio

app = FastAPI()

with gr.Blocks(gr.themes.Soft()) as demo:
    gr.Markdown("## RAG with MinIO")
    ch_interface = gr.ChatInterface(llm_chat, undo_btn=None, clear_btn="Clear")
    ch_interface.chatbot.show_label = False
    ch_interface.chatbot.height = 600

demo.queue()

if __name__ == "__main__":
    nest_asyncio.apply()
    app = gr.mount_gradio_app(app, demo, path=CHAT_API_PATH)
    uvicorn.run(app, host="0.0.0.0", port=8808)
```

Test the Embedding Model

```python
import numpy as np

EMBEDDING_ENDPOINT = "http://localhost:11434/api/embeddings"
EMBEDDINGS_DIM = 768

def get_embedding(text):
    resp = requests.post(EMBEDDING_ENDPOINT,
                         json={"model": EMBEDDING_MODEL, "prompt": text})
    return np.array(resp.json()["embedding"][:EMBEDDINGS_DIM], dtype=np.float16)
```


```python
## Test with sample text
get_embedding("What is MinIO?")
```
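Later in this post LanceDB compares these vectors with the cosine metric. For intuition, here is a minimal numpy sketch of cosine similarity (illustrative only, not part of the pipeline):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity = dot product of the two vectors divided by the
    # product of their magnitudes; 1.0 means same direction, 0.0 orthogonal.
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```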


Ingestion Pipeline Overview

Create MinIO Buckets

Use the mc command or do it from the UI

  • custom-corpus - To store all the documents
  • warehouse - To store all the metadata, chunks, and vector embeddings


```shell
!mc alias set 'myminio' 'http://localhost:9000' 'minioadmin' 'minioadmin'
```


```shell
!mc mb myminio/custom-corpus
!mc mb myminio/warehouse
```

Create a Webhook that Consumes Bucket Notifications from the custom-corpus Bucket

```python
import json
import gradio as gr
import requests
from fastapi import FastAPI, Request
from pydantic import BaseModel
import uvicorn
import nest_asyncio

app = FastAPI()

@app.post("/api/v1/document/notification")
async def receive_webhook(request: Request):
    json_data = await request.json()
    print(json.dumps(json_data, indent=2))

with gr.Blocks(gr.themes.Soft()) as demo:
    gr.Markdown("## RAG with MinIO")
    ch_interface = gr.ChatInterface(llm_chat, undo_btn=None, clear_btn="Clear")
    ch_interface.chatbot.show_label = False

demo.queue()

if __name__ == "__main__":
    nest_asyncio.apply()
    app = gr.mount_gradio_app(app, demo, path=CHAT_API_PATH)
    uvicorn.run(app, host="0.0.0.0", port=8808)
```




Create MinIO Event Notifications and Link Them to the custom-corpus Bucket

Create a Webhook Event

In the console, go to Events -> Add Event Destination -> Webhook


Fill in the fields with the following values and hit Save


Identifier - doc-webhook


Endpoint - http://localhost:8808/api/v1/document/notification


Click Restart MinIO at the top when prompted


(Note: You can also use mc for this)
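For reference, the same webhook destination can be registered with mc instead of the console; a configuration sketch, assuming the myminio alias set up earlier:

```shell
# Register the webhook destination (id: doc-webhook) and restart MinIO
mc admin config set myminio notify_webhook:doc-webhook \
   endpoint="http://localhost:8808/api/v1/document/notification"
mc admin service restart myminio
```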

Link the Webhook Event to custom-corpus Bucket Events

In the console, go to Buckets (Administrator) -> custom-corpus -> Events


Fill in the fields with the following values and hit Save


ARN - Select doc-webhook from the dropdown


Select Events - Check PUT and DELETE


(Note: You can also use mc for this)
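The mc equivalent of this bucket-to-destination subscription is a one-liner; a configuration sketch, again assuming the myminio alias:

```shell
# Subscribe the custom-corpus bucket to the doc-webhook destination
mc event add myminio/custom-corpus arn:minio:sqs::doc-webhook:webhook \
   --event put,delete
```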


We have our first webhook set up

Now test it by adding and removing an object

Extract Data from the Documents and Chunk It

We will use Langchain and Unstructured to read an object from MinIO and split the documents into multiple chunks


```python
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import S3FileLoader

MINIO_ENDPOINT = "http://localhost:9000"
MINIO_ACCESS_KEY = "minioadmin"
MINIO_SECRET_KEY = "minioadmin"

# Split text from a given document using chunk_size number of characters
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1024, chunk_overlap=64,
                                               length_function=len)

def split_doc_by_chunks(bucket_name, object_key):
    loader = S3FileLoader(bucket_name,
                          object_key,
                          endpoint_url=MINIO_ENDPOINT,
                          aws_access_key_id=MINIO_ACCESS_KEY,
                          aws_secret_access_key=MINIO_SECRET_KEY)
    docs = loader.load()
    doc_splits = text_splitter.split_documents(docs)
    return doc_splits
```


```python
# test the chunking
split_doc_by_chunks("custom-corpus", "The-Enterprise-Object-Store-Feature-Set.pdf")
```
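Stripped of its separator logic (the real splitter prefers breaking on paragraphs and lines first), the sliding-window effect of chunk_size and chunk_overlap can be sketched in plain Python (`chunk_text` is an illustrative helper, not part of the pipeline):

```python
def chunk_text(text, chunk_size=1024, chunk_overlap=64):
    # Each chunk starts chunk_size - chunk_overlap characters after the
    # previous one, so consecutive chunks share chunk_overlap characters.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]

chunks = chunk_text("a" * 2500)
print([len(c) for c in chunks])  # [1024, 1024, 580]
```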

Add the Chunking Logic to the Webhook

Add the chunking logic to the webhook and save the metadata and chunks to the warehouse bucket


```python
import urllib.parse
import s3fs

METADATA_PREFIX = "metadata"

# Using s3fs to save and delete objects from MinIO
s3 = s3fs.S3FileSystem()

# Split the documents and save the metadata to the warehouse bucket
def create_object_task(json_data):
    for record in json_data["Records"]:
        bucket_name = record["s3"]["bucket"]["name"]
        object_key = urllib.parse.unquote(record["s3"]["object"]["key"])
        print(record["s3"]["bucket"]["name"], record["s3"]["object"]["key"])

        doc_splits = split_doc_by_chunks(bucket_name, object_key)

        for i, chunk in enumerate(doc_splits):
            source = f"warehouse/{METADATA_PREFIX}/{bucket_name}/{object_key}/chunk_{i:05d}.json"
            with s3.open(source, "w") as f:
                f.write(chunk.json())
    return "Task completed!"

def delete_object_task(json_data):
    for record in json_data["Records"]:
        bucket_name = record["s3"]["bucket"]["name"]
        object_key = urllib.parse.unquote(record["s3"]["object"]["key"])
        s3.delete(f"warehouse/{METADATA_PREFIX}/{bucket_name}/{object_key}", recursive=True)
    return "Task completed!"
```

Update the FastAPI Server with the New Logic

```python
import json
import gradio as gr
import requests
from fastapi import FastAPI, Request, BackgroundTasks
from pydantic import BaseModel
import uvicorn
import nest_asyncio

app = FastAPI()

@app.post("/api/v1/document/notification")
async def receive_webhook(request: Request, background_tasks: BackgroundTasks):
    json_data = await request.json()
    if json_data["EventName"] == "s3:ObjectCreated:Put":
        print("New object created!")
        background_tasks.add_task(create_object_task, json_data)
    if json_data["EventName"] == "s3:ObjectRemoved:Delete":
        print("Object deleted!")
        background_tasks.add_task(delete_object_task, json_data)
    return {"status": "success"}

with gr.Blocks(gr.themes.Soft()) as demo:
    gr.Markdown("## RAG with MinIO")
    ch_interface = gr.ChatInterface(llm_chat, undo_btn=None, clear_btn="Clear")
    ch_interface.chatbot.show_label = False

demo.queue()

if __name__ == "__main__":
    nest_asyncio.apply()
    app = gr.mount_gradio_app(app, demo, path=CHAT_API_PATH)
    uvicorn.run(app, host="0.0.0.0", port=8808)
```
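The handlers above rely on only a few fields of the S3-style event payload MinIO posts to the webhook. A trimmed sketch of the shape being parsed (real events carry many more fields, omitted here):

```python
import urllib.parse

# Trimmed example of a MinIO bucket-notification payload
sample_event = {
    "EventName": "s3:ObjectCreated:Put",
    "Records": [
        {"s3": {"bucket": {"name": "custom-corpus"},
                "object": {"key": "my%20doc.pdf"}}}
    ],
}

for record in sample_event["Records"]:
    bucket_name = record["s3"]["bucket"]["name"]
    # Object keys arrive URL-encoded, hence the unquote in the tasks above
    object_key = urllib.parse.unquote(record["s3"]["object"]["key"])
    print(bucket_name, object_key)  # custom-corpus my doc.pdf
```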

Add a New Webhook to Process Document Metadata/Chunks

Now that we have the first webhook working, the next step is to get all the chunks with metadata, generate the embeddings, and store them in the vector database



```python
import json
import gradio as gr
import requests
from fastapi import FastAPI, Request, BackgroundTasks
from pydantic import BaseModel
import uvicorn
import nest_asyncio

app = FastAPI()

@app.post("/api/v1/metadata/notification")
async def receive_metadata_webhook(request: Request, background_tasks: BackgroundTasks):
    json_data = await request.json()
    print(json.dumps(json_data, indent=2))

@app.post("/api/v1/document/notification")
async def receive_webhook(request: Request, background_tasks: BackgroundTasks):
    json_data = await request.json()
    if json_data["EventName"] == "s3:ObjectCreated:Put":
        print("New object created!")
        background_tasks.add_task(create_object_task, json_data)
    if json_data["EventName"] == "s3:ObjectRemoved:Delete":
        print("Object deleted!")
        background_tasks.add_task(delete_object_task, json_data)
    return {"status": "success"}

with gr.Blocks(gr.themes.Soft()) as demo:
    gr.Markdown("## RAG with MinIO")
    ch_interface = gr.ChatInterface(llm_chat, undo_btn=None, clear_btn="Clear")
    ch_interface.chatbot.show_label = False

demo.queue()

if __name__ == "__main__":
    nest_asyncio.apply()
    app = gr.mount_gradio_app(app, demo, path=CHAT_API_PATH)
    uvicorn.run(app, host="0.0.0.0", port=8808)
```


Create MinIO Event Notifications and Link Them to the warehouse Bucket

Create a Webhook Event

In the console, go to Events -> Add Event Destination -> Webhook


Fill in the fields with the following values and hit Save


Identifier - metadata-webhook


Endpoint - http://localhost:8808/api/v1/metadata/notification


Click Restart MinIO at the top when prompted


(Note: You can also use mc for this)

Link the Webhook Event to warehouse Bucket Events

In the console, go to Buckets (Administrator) -> warehouse -> Events


Fill in the fields with the following values and hit Save


ARN - Select metadata-webhook from the dropdown


Prefix - metadata/


Suffix - .json


Select Events - Check PUT and DELETE


(Note: You can also use mc for this)


We have our second webhook set up

Now test it by adding and removing an object in custom-corpus and check whether this webhook gets triggered

Create the LanceDB Vector Database in MinIO

Now that we have the basic webhook working, let's set up the lanceDB vector database in the MinIO warehouse bucket, in which we will store all the embeddings and additional metadata fields


```python
import os
import lancedb

# Set these environment variables for lanceDB to connect to MinIO
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"
os.environ["AWS_ACCESS_KEY_ID"] = MINIO_ACCESS_KEY
os.environ["AWS_SECRET_ACCESS_KEY"] = MINIO_SECRET_KEY
os.environ["AWS_ENDPOINT"] = MINIO_ENDPOINT
os.environ["ALLOW_HTTP"] = "True"

db = lancedb.connect("s3://warehouse/v-db/")
```


```python
# list existing tables
db.table_names()
```


```python
# Create a new table with a pydantic schema
from lancedb.pydantic import LanceModel, Vector
import pyarrow as pa

DOCS_TABLE = "docs"
EMBEDDINGS_DIM = 768

table = None

class DocsModel(LanceModel):
    parent_source: str  # Actual object/document source
    source: str  # Chunk/Metadata source
    text: str  # Chunked text
    vector: Vector(EMBEDDINGS_DIM, pa.float16())  # Vector to be stored

def get_or_create_table():
    global table
    if table is None and DOCS_TABLE not in list(db.table_names()):
        return db.create_table(DOCS_TABLE, schema=DocsModel)
    if table is None:
        table = db.open_table(DOCS_TABLE)
    return table
```


```python
# Check if that worked
get_or_create_table()
```


```python
# list existing tables
db.table_names()
```

Add Storing/Deleting Data from lanceDB to the Metadata Webhook

```python
import multiprocessing

EMBEDDING_DOCUMENT_PREFIX = "search_document"

# Add queues that keep the processed metadata in memory
add_data_queue = multiprocessing.Queue()
delete_data_queue = multiprocessing.Queue()

def create_metadata_task(json_data):
    for record in json_data["Records"]:
        bucket_name = record["s3"]["bucket"]["name"]
        object_key = urllib.parse.unquote(record["s3"]["object"]["key"])
        print(bucket_name, object_key)
        with s3.open(f"{bucket_name}/{object_key}", "r") as f:
            data = f.read()
            chunk_json = json.loads(data)
            embeddings = get_embedding(f"{EMBEDDING_DOCUMENT_PREFIX}: {chunk_json['page_content']}")
            add_data_queue.put({
                "text": chunk_json["page_content"],
                "parent_source": chunk_json.get("metadata", {}).get("source", ""),
                "source": f"{bucket_name}/{object_key}",
                "vector": embeddings
            })
    return "Metadata Create Task Completed!"

def delete_metadata_task(json_data):
    for record in json_data["Records"]:
        bucket_name = record["s3"]["bucket"]["name"]
        object_key = urllib.parse.unquote(record["s3"]["object"]["key"])
        delete_data_queue.put(f"{bucket_name}/{object_key}")
    return "Metadata Delete Task completed!"
```

Add a Scheduler that Processes Data from the Queues

```python
from apscheduler.schedulers.background import BackgroundScheduler
import pandas as pd

def add_vector_job():
    data = []
    table = get_or_create_table()

    while not add_data_queue.empty():
        item = add_data_queue.get()
        data.append(item)

    if len(data) > 0:
        df = pd.DataFrame(data)
        table.add(df)
        table.compact_files()
        print(len(table.to_pandas()))

def delete_vector_job():
    table = get_or_create_table()
    source_data = []
    while not delete_data_queue.empty():
        item = delete_data_queue.get()
        source_data.append(item)
    if len(source_data) > 0:
        filter_data = ", ".join([f'"{d}"' for d in source_data])
        table.delete(f'source IN ({filter_data})')
        table.compact_files()
        table.cleanup_old_versions()
        print(len(table.to_pandas()))

scheduler = BackgroundScheduler()
scheduler.add_job(add_vector_job, 'interval', seconds=10)
scheduler.add_job(delete_vector_job, 'interval', seconds=10)
```

Update FastAPI with the Vector Embedding Changes

```python
import json
import gradio as gr
import requests
from fastapi import FastAPI, Request, BackgroundTasks
from pydantic import BaseModel
import uvicorn
import nest_asyncio

app = FastAPI()

@app.on_event("startup")
async def startup_event():
    get_or_create_table()
    if not scheduler.running:
        scheduler.start()

@app.on_event("shutdown")
async def shutdown_event():
    scheduler.shutdown()

@app.post("/api/v1/metadata/notification")
async def receive_metadata_webhook(request: Request, background_tasks: BackgroundTasks):
    json_data = await request.json()
    if json_data["EventName"] == "s3:ObjectCreated:Put":
        print("New Metadata created!")
        background_tasks.add_task(create_metadata_task, json_data)
    if json_data["EventName"] == "s3:ObjectRemoved:Delete":
        print("Metadata deleted!")
        background_tasks.add_task(delete_metadata_task, json_data)
    return {"status": "success"}

@app.post("/api/v1/document/notification")
async def receive_webhook(request: Request, background_tasks: BackgroundTasks):
    json_data = await request.json()
    if json_data["EventName"] == "s3:ObjectCreated:Put":
        print("New object created!")
        background_tasks.add_task(create_object_task, json_data)
    if json_data["EventName"] == "s3:ObjectRemoved:Delete":
        print("Object deleted!")
        background_tasks.add_task(delete_object_task, json_data)
    return {"status": "success"}

with gr.Blocks(gr.themes.Soft()) as demo:
    gr.Markdown("## RAG with MinIO")
    ch_interface = gr.ChatInterface(llm_chat, undo_btn=None, clear_btn="Clear")
    ch_interface.chatbot.show_label = False
    ch_interface.chatbot.height = 600

demo.queue()

if __name__ == "__main__":
    nest_asyncio.apply()
    app = gr.mount_gradio_app(app, demo, path=CHAT_API_PATH)
    uvicorn.run(app, host="0.0.0.0", port=8808)
```




Now that we have the ingestion pipeline working, let's integrate the final RAG pipeline.

Add the Vector Search Capability

Now that we have the documents ingested into lanceDB, let's add the search capability


```python
EMBEDDING_QUERY_PREFIX = "search_query"

def search(query, limit=5):
    query_embedding = get_embedding(f"{EMBEDDING_QUERY_PREFIX}: {query}")
    res = get_or_create_table().search(query_embedding).metric("cosine").limit(limit)
    return res
```


```python
# Let's test to see if it works
res = search("What is MinIO Enterprise Object Store Lite?")
res.to_list()
```

Prompt the LLM to Use the Relevant Documents

```python
RAG_PROMPT = """
DOCUMENT:
{documents}

QUESTION:
{user_question}

INSTRUCTIONS:
Answer in detail the user's QUESTION using the DOCUMENT text above.
Keep your answer grounded in the facts of the DOCUMENT. Do not use sentences like "The document states" to cite the document.
If the DOCUMENT doesn't contain the facts to answer the QUESTION, only respond with "Sorry! I Don't know"
"""
```
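To make the template concrete, here is how retrieved chunk texts get stitched into the DOCUMENT slot (`build_rag_prompt` is an illustrative helper of mine; the chat function in this post does the same inline with the full template):

```python
# Shortened copy of the template, kept inline so the example is self-contained
RAG_PROMPT = """
DOCUMENT:
{documents}

QUESTION:
{user_question}

INSTRUCTIONS:
Answer in detail the user's QUESTION using the DOCUMENT text above.
"""

def build_rag_prompt(user_question, chunks):
    # Concatenate the retrieved chunk texts into a single DOCUMENT section
    documents = " ".join(chunk.strip() for chunk in chunks)
    return RAG_PROMPT.format(user_question=user_question, documents=documents)

prompt = build_rag_prompt("What is MinIO?",
                          ["MinIO is an object store. ", "It is S3 compatible."])
print(prompt)
```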


```python
context_df = []

def llm_chat(user_question, history):
    history = history or []
    global context_df
    # Search for relevant document chunks
    res = search(user_question)
    documents = " ".join([d["text"].strip() for d in res.to_list()])
    # Pass the chunks to the LLM for a grounded response
    llm_resp = requests.post(LLM_ENDPOINT,
                             json={"model": LLM_MODEL,
                                   "messages": [
                                       {"role": "user",
                                        "content": RAG_PROMPT.format(user_question=user_question, documents=documents)}
                                   ],
                                   "options": {
                                       # "temperature": 0,
                                       "top_p": 0.90,
                                   }},
                             stream=True)
    bot_response = "**AI:** "
    for resp in llm_resp.iter_lines():
        json_data = json.loads(resp)
        bot_response += json_data["message"]["content"]
        yield bot_response

    context_df = res.to_pandas()
    context_df = context_df.drop(columns=['source', 'vector'])

def clear_events():
    global context_df
    context_df = []
    return context_df
```

Update the FastAPI Chat Endpoint to Use RAG

```python
import json
import gradio as gr
import requests
from fastapi import FastAPI, Request, BackgroundTasks
from pydantic import BaseModel
import uvicorn
import nest_asyncio

app = FastAPI()

@app.on_event("startup")
async def startup_event():
    get_or_create_table()
    if not scheduler.running:
        scheduler.start()

@app.on_event("shutdown")
async def shutdown_event():
    scheduler.shutdown()

@app.post("/api/v1/metadata/notification")
async def receive_metadata_webhook(request: Request, background_tasks: BackgroundTasks):
    json_data = await request.json()
    if json_data["EventName"] == "s3:ObjectCreated:Put":
        print("New Metadata created!")
        background_tasks.add_task(create_metadata_task, json_data)
    if json_data["EventName"] == "s3:ObjectRemoved:Delete":
        print("Metadata deleted!")
        background_tasks.add_task(delete_metadata_task, json_data)
    return {"status": "success"}

@app.post("/api/v1/document/notification")
async def receive_webhook(request: Request, background_tasks: BackgroundTasks):
    json_data = await request.json()
    if json_data["EventName"] == "s3:ObjectCreated:Put":
        print("New object created!")
        background_tasks.add_task(create_object_task, json_data)
    if json_data["EventName"] == "s3:ObjectRemoved:Delete":
        print("Object deleted!")
        background_tasks.add_task(delete_object_task, json_data)
    return {"status": "success"}

with gr.Blocks(gr.themes.Soft()) as demo:
    gr.Markdown("## RAG with MinIO")
    ch_interface = gr.ChatInterface(llm_chat, undo_btn=None, clear_btn="Clear")
    ch_interface.chatbot.show_label = False
    ch_interface.chatbot.height = 600
    gr.Markdown("### Context Supplied")
    context_dataframe = gr.DataFrame(headers=["parent_source", "text", "_distance"], wrap=True)
    ch_interface.clear_btn.click(clear_events, [], context_dataframe)

    @gr.on(ch_interface.output_components, inputs=[ch_interface.chatbot], outputs=[context_dataframe])
    def update_chat_context_df(text):
        global context_df
        if context_df is not None:
            return context_df
        return ""

demo.queue()

if __name__ == "__main__":
    nest_asyncio.apply()
    app = gr.mount_gradio_app(app, demo, path=CHAT_API_PATH)
    uvicorn.run(app, host="0.0.0.0", port=8808)
```


Were you able to follow along and implement RAG-based chat with MinIO as the data lake backend? We will be doing a webinar on this same topic in the near future, where we'll give you a live demo as we build this RAG-based chat application.

RAG-R-Us

As a developer focused on AI integration at MinIO, I am constantly exploring how our tools can be seamlessly integrated into modern AI architectures to improve efficiency and scalability. In this article, we showed you how to integrate MinIO with Retrieval-Augmented Generation (RAG) to build a chat application. This is just the tip of the iceberg, meant to give you a boost in your quest to build more unique use cases for RAG and MinIO. Now you have the building blocks to do it. Let's do it!


If you have any questions about MinIO's RAG integration, feel free to reach out to us on Slack!