
Turning CSV Files Into Graphs With LLMs: A Step-by-Step Guide

by Neo4j · 2024/10/29

Too Long; Didn't Read

Discover how to use LLMs to turn CSV files into graph structures, improving data modeling in Neo4j with an iterative, prompt-based approach.


How do LLMs perform when asked to create graphs from flat CSV files?

A big part of my job is improving the user experience with Neo4j. Getting data into Neo4j and modeling it efficiently is often a major challenge for users, especially in the early days. Although the initial data model is important and deserves consideration, it can easily be refactored to improve performance as the data size or the number of users grows.


So, as a challenge to myself, I thought I'd see whether an LLM could help with the initial data model. If nothing else, it would show how things are connected and give the user some quick results they can show to others.


Intuitively, I know that data modeling is an iterative process and that certain LLMs can easily get distracted by large volumes of data. This made it a good opportunity to use LangGraph to work through the data cyclically.


Let's take a look at the reasoning that made this work.

Graph Modeling Fundamentals

The Graph Data Modeling Fundamentals course on GraphAcademy walks you through the basics of modeling data in a graph. As a first pass, I use the following rules of thumb:


  • Nouns become labels: they describe the thing the node represents.
  • Verbs become relationship types: they describe how things are connected.
  • Everything else becomes a property (particularly adverbs): you have a name and may drive a gray car.


Verbs can also be nodes; you may be happy to know that someone ordered a product, but that basic model doesn't tell you where or when the product was ordered. In this case, the order becomes a new node in the model.
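For illustration only (this sketch isn't from the article, and the labels, properties, and parameters are hypothetical), the difference between the two shapes can be written out as Cypher patterns held in Python strings:

# Verb kept as a relationship: we know who ordered what, but nothing about the order itself
order_as_relationship = """
MERGE (p:Person {id: $personId})
MERGE (prod:Product {id: $productId})
MERGE (p)-[:ORDERED]->(prod)
"""

# Verb promoted to a node: the Order can now carry where and when it was placed
order_as_node = """
MERGE (p:Person {id: $personId})
MERGE (prod:Product {id: $productId})
MERGE (o:Order {id: $orderId})
SET o.placedAt = datetime($placedAt), o.location = $location
MERGE (p)-[:PLACED]->(o)
MERGE (o)-[:CONTAINS]->(prod)
"""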


I was sure this could be summarized in a prompt to create a zero-shot approach to graph data modeling.

An Iterative Approach

I tried this briefly a few months ago and found that the model I was using quickly became distracted when working with larger schemas. On top of that, the prompts quickly hit the LLM's token limits.


This time, I thought I'd try an iterative approach, taking the keys one by one. That should avoid distraction because the LLM only has to consider one item at a time.


The final approach consisted of the following steps:


  1. Load the CSV file into a Pandas dataframe.
  2. Analyze each column in the CSV file and append it to a data model loosely based on JSON Schema.
  3. Identify and add missing unique IDs for each entity.
  4. Review the data model for accuracy.
  5. Generate Cypher statements to import the nodes and relationships.
  6. Generate the unique constraints that underpin the import statements.
  7. Create the constraints and run the import.

The Data

I had a quick look on Kaggle for an interesting dataset. The one that stood out was Spotify Most Streamed Songs.


import pandas as pd

csv_file = '/Users/adam/projects/datamodeller/data/spotify/spotify-most-streamed-songs.csv'

df = pd.read_csv(csv_file)
df.head()

  | track_name | artist(s)_name | artist_count | released_year | released_month | released_day | in_spotify_playlists | in_spotify_charts | streams | in_apple_playlists | … | key | mode | danceability_% | valence_% | energy_% | acousticness_% | instrumentalness_% | liveness_% | speechiness_% | cover_url
0 | Seven (feat. Latto) (Explicit Ver.) | Latto, Jung Kook | 2 | 2023 | 7 | 14 | 553 | 147 | 141381703 | 43 | … | B | Major | 80 | 89 | 83 | 31 | 0 | 8 | 4 | Not Found
1 | LALA | Myke Towers | 1 | 2023 | 3 | 23 | 1474 | 48 | 133716286 | 48 | … | C# | Major | 71 | 61 | 74 | 7 | 0 | 10 | 4 | https://i.scdn.co/image/ab67616d0000b2730656d5…
2 | vampire | Olivia Rodrigo | 1 | 2023 | 6 | 30 | 1397 | 113 | 140003974 | 94 | … | F | Major | 51 | 32 | 53 | 17 | 0 | 31 | 6 | https://i.scdn.co/image/ab67616d0000b273e85259…
3 | Cruel Summer | Taylor Swift | 1 | 2019 | 8 | 23 | 7858 | 100 | 800840817 | 116 | … | A | Major | 55 | 58 | 72 | 11 | 0 | 11 | 15 | https://i.scdn.co/image/ab67616d0000b273e787cf…
4 | WHERE SHE GOES | Bad Bunny | 1 | 2023 | 5 | 18 | 3133 | 50 | 303236322 | 84 | … | A | Minor | 65 | 23 | 80 | 14 | 63 | 11 | 6 | https://i.scdn.co/image/ab67616d0000b273ab5c9c…


5 rows × 25 columns


It's relatively simple, but I can see straight away that there needs to be a relationship between tracks and artists.


There are also data cleanliness challenges to overcome, both in the column names and in artists being stored as comma-separated values in the artist(s)_name column.
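To make that concrete, here is a small pandas sketch (my own illustration, not part of the article's pipeline, which leaves this work to the generated Cypher) showing how the artist column could be normalized:

# Illustrative only: rename the awkward column and split the comma-separated artists into lists
clean = df.rename(columns={"artist(s)_name": "artist_names"})
clean["artist_names"] = clean["artist_names"].str.split(",").apply(
    lambda names: [n.strip() for n in names] if isinstance(names, list) else names
)
clean[["track_name", "artist_names"]].head()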

Choosing an LLM

I really wanted to use a local LLM for this, but I quickly found that Llama 3 wouldn't cut it. When in doubt, fall back on OpenAI:


from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from typing import List
from langchain_core.output_parsers import JsonOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

Creating a Data Model

I used an abbreviated set of modeling instructions to create the data modeling prompt. It took a few rounds of prompt engineering to get consistent output.


The zero-shot example worked relatively well, but I found the output to be inconsistent. Defining a structured output to hold the JSON really helped:


class JSONSchemaSpecification(BaseModel):
    notes: str = Field(description="Any notes or comments about the schema")
    jsonschema: str = Field(description="A JSON array of JSON schema specifications that describe the entities in the data model")
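For illustration (not from the article), binding this schema to the model means the chain returns a parsed JSONSchemaSpecification instance rather than raw text; the prompt string below is made up:

structured_llm = llm.with_structured_output(JSONSchemaSpecification)

res = structured_llm.invoke(
    "Propose a graph data model for a CSV with columns track_name and artist(s)_name."
)
print(res.notes)       # free-text commentary from the model
print(res.jsonschema)  # the JSON schema as a string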

Few-Shot Example Output

The JSON itself was also inconsistent, so I ended up defining a schema based on the movie recommendations dataset.


Example output:


example_output = [
    dict(
        title="Person",
        type="object",
        description="Node",
        properties=[
            dict(name="name", column_name="person_name", type="string", description="The name of the person", examples=["Tom Hanks"]),
            dict(name="date_of_birth", column_name="person_dob", type="date", description="The date of birth for the person", examples=["1987-06-05"]),
            dict(name="id", column_name="person_name, date_of_birth", type="string", description="The ID is a combination of name and date of birth to ensure uniqueness", examples=["tom-hanks-1987-06-05"]),
        ],
    ),
    dict(
        title="Director",
        type="object",
        description="Node",
        properties=[
            dict(name="name", column_name="director_names", type="string", description="The name of the directors. Split values in column by a comma", examples=["Francis Ford Coppola"]),
        ],
    ),
    dict(
        title="Movie",
        type="object",
        description="Node",
        properties=[
            dict(name="title", column_name="title", type="string", description="The title of the movie", examples=["Toy Story"]),
            dict(name="released", column_name="released", type="integer", description="The year the movie was released", examples=["1990"]),
        ],
    ),
    dict(
        title="ACTED_IN",
        type="object",
        description="Relationship",
        properties=[
            dict(name="_from", column_name="od", type="string", description="Person found by the `id`. The ID is a combination of name and date of birth to ensure uniqueness", examples=["Person"]),
            dict(name="_to", column_name="title", type="string", description="The movie title", examples=["Movie"]),
            dict(name="roles", type="string", column_name="person_roles", description="The roles the person played in the movie", examples=["Woody"]),
        ],
    ),
    dict(
        title="DIRECTED",
        type="object",
        description="Relationship",
        properties=[
            dict(name="_from", type="string", column_name="director_names", description="Director names are comma separated", examples=["Director"]),
            dict(name="_to", type="string", column_name="title", description="The label of the node this relationship ends at", examples=["Movie"]),
        ],
    ),
]


I had to deviate from the strict JSON Schema and add the column_name field to the output to help the LLM generate the import script. Providing example descriptions also helped here; otherwise, the properties used in the MATCH clause were inconsistent.

The Chain

Here is the final prompt:


model_prompt = PromptTemplate.from_template("""
You are an expert Graph Database administrator.
Your task is to design a data model based on the information provided from an existing data source.

You must decide where the following column fits in with the existing data model.

Consider:
* Does the column represent an entity, for example a Person, Place, or Movie?  If so, this should be a node in its own right.
* Does the column represent a relationship between two entities?  If so, this should be a relationship between two nodes.
* Does the column represent an attribute of an entity or relationship?  If so, this should be a property of a node or relationship.
* Does the column represent a shared attribute that could be interesting to query through to find similar nodes, for example a Genre?  If so, this should be a node in its own right.

## Instructions for Nodes

* Node labels are generally nouns, for example Person, Place, or Movie
* Node titles should be in UpperCamelCase

## Instructions for Relationships

* Relationshops are generally verbs, for example ACTED_IN, DIRECTED, or PURCHASED
* Examples of good relationships are (:Person)-[:ACTED_IN]->(:Movie) or (:Person)-[:PURCHASED]->(:Product)
* Relationships should be in UPPER_SNAKE_CASE
* Provide any specific instructions for the field in the description.  For example, does the field contain a list of comma separated values or a single value?

## Instructions for Properties

* Relationships should be in lowerPascalCase
* Prefer the shorter name where possible, for example "person_id" and "personId" should simply be "id"
* If you are changing the property name from the original field name, mention the column name in the description
* Do not include examples for integer or date fields
* Always include instructions on data preparation for the field.  Does it need to be cast as a string or split into multiple fields on a delimiting value?
* Property keys should be letters only, no numbers or special characters.

## Important!

Consider the examples provided.  Does any data preparation need to be done to ensure the data is in the correct format?
You must include any information about data preparation in the description.

## Example Output

Here is an example of a good output:

{example_output}

## New Data:

Key: {key}
Data Type: {type}
Example Values: {examples}

## Existing Data Model

Here is the existing data model:

{existing_model}

## Keep Existing Data Model

Apply your changes to the existing data model but never remove any existing definitions.
""", partial_variables=dict(example_output=dumps(example_output)))

model_chain = model_prompt | llm.with_structured_output(JSONSchemaSpecification)


Running the Chain

To update the model iteratively, I iterated over the keys in the dataframe, passing each key, its data type, and the first five unique values to the prompt:


from json_repair import dumps, loads

existing_model = {}

for i, key in enumerate(df):
    print("\n", i, key)
    print("----------------")

    try:
        res = try_chain(model_chain, dict(
            existing_model=dumps(existing_model),
            key=key,
            type=df[key].dtype,
            examples=dumps(df[key].unique()[:5].tolist())
        ))

        print(res.notes)

        existing_model = loads(res.jsonschema)
        print([n['title'] for n in existing_model])
    except Exception as e:
        print(e)
        pass

existing_model


Console output:


 0 track_name
----------------
Adding 'track_name' to an existing data model. This represents a music track entity.
['Track']

 1 artist(s)_name
----------------
Adding a new column 'artist(s)_name' to the existing data model. This column represents multiple artists associated with tracks and should be modeled as a new node 'Artist' and a relationship 'PERFORMED_BY' from 'Track' to 'Artist'.
['Track', 'Artist', 'PERFORMED_BY']

 2 artist_count
----------------
Added artist_count as a property of Track node. This property indicates the number of artists performing in the track.
['Track', 'Artist', 'PERFORMED_BY']

 3 released_year
----------------
Add the released_year column to the existing data model as a property of the Track node.
['Track', 'Artist', 'PERFORMED_BY']

 4 released_month
----------------
Adding the 'released_month' column to the existing data model, considering it as an attribute of the Track node.
['Track', 'Artist', 'PERFORMED_BY']

 5 released_day
----------------
Added a new property 'released_day' to the 'Track' node to capture the day of the month a track was released.
['Track', 'Artist', 'PERFORMED_BY']

 6 in_spotify_playlists
----------------
Adding the new column 'in_spotify_playlists' to the existing data model as a property of the 'Track' node.
['Track', 'Artist', 'PERFORMED_BY']

 7 in_spotify_charts
----------------
Adding the 'in_spotify_charts' column to the existing data model as a property of the Track node.
['Track', 'Artist', 'PERFORMED_BY']

 8 streams
----------------
Adding a new column 'streams' to the existing data model, representing the number of streams for a track.
['Track', 'Artist', 'PERFORMED_BY']

 9 in_apple_playlists
----------------
Adding new column 'in_apple_playlists' to the existing data model
['Track', 'Artist', 'PERFORMED_BY']

 10 in_apple_charts
----------------
Adding 'in_apple_charts' as a property to the 'Track' node, representing the number of times the track appeared in the Apple charts.
['Track', 'Artist', 'PERFORMED_BY']

 11 in_deezer_playlists
----------------
Add 'in_deezer_playlists' to the existing data model for a music track database.
['Track', 'Artist', 'PERFORMED_BY']

 12 in_deezer_charts
----------------
Adding a new property 'inDeezerCharts' to the existing 'Track' node to represent the number of times the track appeared in Deezer charts.
['Track', 'Artist', 'PERFORMED_BY']

 13 in_shazam_charts
----------------
Adding new data 'in_shazam_charts' to the existing data model. This appears to be an attribute of the 'Track' node, indicating the number of times a track appeared in the Shazam charts.
['Track', 'Artist', 'PERFORMED_BY']

 14 bpm
----------------
Added bpm column as a property to the Track node as it represents a characteristic of the track.
['Track', 'Artist', 'PERFORMED_BY']

 15 key
----------------
Adding the 'key' column to the existing data model. The 'key' represents the musical key of a track, which is a shared attribute that can be interesting to query through to find similar tracks.
['Track', 'Artist', 'PERFORMED_BY']

 16 mode
----------------
Adding 'mode' to the existing data model. It represents a musical characteristic of a track, which is best captured as an attribute of the Track node.
['Track', 'Artist', 'PERFORMED_BY']

 17 danceability_%
----------------
Added 'danceability_%' to the existing data model as a property of the Track node. The field represents the danceability percentage of the track.
['Track', 'Artist', 'PERFORMED_BY']

 18 valence_%
----------------
Adding the valence percentage column to the existing data model as a property of the Track node.
['Track', 'Artist', 'PERFORMED_BY']

 19 energy_%
----------------
Integration of the new column 'energy_%' into the existing data model. This column represents an attribute of the Track entity and should be added as a property of the Track node.
['Track', 'Artist', 'PERFORMED_BY']

 20 acousticness_%
----------------
Adding acousticness_% to the existing data model as a property of the Track node.
['Track', 'Artist', 'PERFORMED_BY']

 21 instrumentalness_%
----------------
Adding the new column 'instrumentalness_%' to the existing Track node as it represents an attribute of the Track entity.
['Track', 'Artist', 'PERFORMED_BY']

 22 liveness_%
----------------
Adding the new column 'liveness_%' to the existing data model as an attribute of the Track node
['Track', 'Artist', 'PERFORMED_BY']

 23 speechiness_%
----------------
Adding the new column 'speechiness_%' to the existing data model as a property of the 'Track' node.
['Track', 'Artist', 'PERFORMED_BY']

 24 cover_url
----------------
Adding a new property 'cover_url' to the existing 'Track' node. This property represents the URL of the track's cover image.
['Track', 'Artist', 'PERFORMED_BY']
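The try_chain helper used in the loop isn't defined in the article's snippets; I assume it is a small retry wrapper around chain.invoke. A minimal sketch under that assumption:

def try_chain(chain, params: dict, attempts: int = 3):
    # Retry a few times, since structured output can occasionally fail to parse
    for attempt in range(attempts):
        try:
            return chain.invoke(params)
        except Exception:
            if attempt == attempts - 1:
                raise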


After a few tweaks to the prompt to handle these use cases, I ended up with a model I was very happy with. The LLM was able to determine that the dataset consisted of Track and Artist nodes, with a PERFORMED_BY relationship to connect the two:


[
  {
    "title": "Track",
    "type": "object",
    "description": "Node",
    "properties": [
      { "name": "name", "column_name": "track_name", "type": "string", "description": "The name of the track", "examples": ["Seven (feat. Latto) (Explicit Ver.)", "LALA", "vampire", "Cruel Summer", "WHERE SHE GOES"] },
      { "name": "artist_count", "column_name": "artist_count", "type": "integer", "description": "The number of artists performing in the track", "examples": [2, 1, 3, 8, 4] },
      { "name": "released_year", "column_name": "released_year", "type": "integer", "description": "The year the track was released", "examples": [2023, 2019, 2022, 2013, 2014] },
      { "name": "released_month", "column_name": "released_month", "type": "integer", "description": "The month the track was released", "examples": [7, 3, 6, 8, 5] },
      { "name": "released_day", "column_name": "released_day", "type": "integer", "description": "The day of the month the track was released", "examples": [14, 23, 30, 18, 1] },
      { "name": "inSpotifyPlaylists", "column_name": "in_spotify_playlists", "type": "integer", "description": "The number of Spotify playlists the track is in. Cast the value as an integer.", "examples": [553, 1474, 1397, 7858, 3133] },
      { "name": "inSpotifyCharts", "column_name": "in_spotify_charts", "type": "integer", "description": "The number of times the track appeared in the Spotify charts. Cast the value as an integer.", "examples": [147, 48, 113, 100, 50] },
      { "name": "streams", "column_name": "streams", "type": "array", "description": "The list of stream IDs for the track. Maintain the array format.", "examples": ["141381703", "133716286", "140003974", "800840817", "303236322"] },
      { "name": "inApplePlaylists", "column_name": "in_apple_playlists", "type": "integer", "description": "The number of Apple playlists the track is in. Cast the value as an integer.", "examples": [43, 48, 94, 116, 84] },
      { "name": "inAppleCharts", "column_name": "in_apple_charts", "type": "integer", "description": "The number of times the track appeared in the Apple charts. Cast the value as an integer.", "examples": [263, 126, 207, 133, 213] },
      { "name": "inDeezerPlaylists", "column_name": "in_deezer_playlists", "type": "array", "description": "The list of Deezer playlist IDs the track is in. Maintain the array format.", "examples": ["45", "58", "91", "125", "87"] },
      { "name": "inDeezerCharts", "column_name": "in_deezer_charts", "type": "integer", "description": "The number of times the track appeared in the Deezer charts. Cast the value as an integer.", "examples": [10, 14, 12, 15, 17] },
      { "name": "inShazamCharts", "column_name": "in_shazam_charts", "type": "array", "description": "The list of Shazam chart IDs the track is in. Maintain the array format.", "examples": ["826", "382", "949", "548", "425"] },
      { "name": "bpm", "column_name": "bpm", "type": "integer", "description": "The beats per minute of the track. Cast the value as an integer.", "examples": [125, 92, 138, 170, 144] },
      { "name": "key", "column_name": "key", "type": "string", "description": "The musical key of the track. Cast the value as a string.", "examples": ["B", "C#", "F", "A", "D"] },
      { "name": "mode", "column_name": "mode", "type": "string", "description": "The mode of the track (eg, Major, Minor). Cast the value as a string.", "examples": ["Major", "Minor"] },
      { "name": "danceability", "column_name": "danceability_%", "type": "integer", "description": "The danceability percentage of the track. Cast the value as an integer.", "examples": [80, 71, 51, 55, 65] },
      { "name": "valence", "column_name": "valence_%", "type": "integer", "description": "The valence percentage of the track. Cast the value as an integer.", "examples": [89, 61, 32, 58, 23] },
      { "name": "energy", "column_name": "energy_%", "type": "integer", "description": "The energy percentage of the track. Cast the value as an integer.", "examples": [83, 74, 53, 72, 80] },
      { "name": "acousticness", "column_name": "acousticness_%", "type": "integer", "description": "The acousticness percentage of the track. Cast the value as an integer.", "examples": [31, 7, 17, 11, 14] },
      { "name": "instrumentalness", "column_name": "instrumentalness_%", "type": "integer", "description": "The instrumentalness percentage of the track. Cast the value as an integer.", "examples": [0, 63, 17, 2, 19] },
      { "name": "liveness", "column_name": "liveness_%", "type": "integer", "description": "The liveness percentage of the track. Cast the value as an integer.", "examples": [8, 10, 31, 11, 28] },
      { "name": "speechiness", "column_name": "speechiness_%", "type": "integer", "description": "The speechiness percentage of the track. Cast the value as an integer.", "examples": [4, 6, 15, 24, 3] },
      { "name": "coverUrl", "column_name": "cover_url", "type": "string", "description": "The URL of the track's cover image. If the value is 'Not Found', it should be cast as an empty string.", "examples": ["https://i.scdn.co/image/ab67616d0000b2730656d5ce813ca3cc4b677e05", "https://i.scdn.co/image/ab67616d0000b273e85259a1cae29a8d91f2093d"] }
    ]
  },
  {
    "title": "Artist",
    "type": "object",
    "description": "Node",
    "properties": [
      { "name": "name", "column_name": "artist(s)_name", "type": "string", "description": "The name of the artist. Split values in column by a comma", "examples": ["Latto", "Jung Kook", "Myke Towers", "Olivia Rodrigo", "Taylor Swift", "Bad Bunny"] }
    ]
  },
  {
    "title": "PERFORMED_BY",
    "type": "object",
    "description": "Relationship",
    "properties": [
      { "name": "_from", "type": "string", "description": "The label of the node this relationship starts at", "examples": ["Track"] },
      { "name": "_to", "type": "string", "description": "The label of the node this relationship ends at", "examples": ["Artist"] }
    ]
  }
]

Adding Unique Identifiers

I noticed that the schema didn't contain any unique identifiers, and this can become a problem when importing relationships. It stands to reason that different artists can release tracks with the same name, and two artists can share the same name.
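The compound ID that the import later builds with apoc.text.slug can be sketched in plain Python. This is an illustration of the idea rather than code from the article, and apoc's slug output differs slightly (for example, it can produce double hyphens):

import re

def slugify(value: str) -> str:
    # Lowercase and replace runs of non-alphanumeric characters with hyphens
    return re.sub(r"[^a-z0-9]+", "-", value.lower()).strip("-")

track_id = f"{slugify('Seven (feat. Latto) (Explicit Ver.)')}-2023"
print(track_id)  # seven-feat-latto-explicit-ver-2023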


For this reason, it was important to create an identifier for Tracks so they could be distinguished within a larger dataset:


# Add primary key/unique identifiers
uid_prompt = PromptTemplate.from_template("""
You are a graph database expert reviewing a single entity from a data model generated by a colleague.
You want to ensure that all of the nodes imported into the database are unique.

## Example

A schema contains Actors with a number of properties including name, date of birth.
Two actors may have the same name then add a new compound property combining the name and date of birth.
If combining values, include the instruction to convert the value to slug case.
Call the new property 'id'.

If you have identified a new property, add it to the list of properties leaving the rest intact.
Include in the description the fields that are to be concatenated.

## Example Output

Here is an example of a good output:

{example_output}

## Current Entity Schema

{entity}
""", partial_variables=dict(example_output=dumps(example_output)))

uid_chain = uid_prompt | llm.with_structured_output(JSONSchemaSpecification)


This step is really only needed for nodes, so I extracted the nodes from the schema, ran the chain for each one, and then combined the relationships with the updated definitions:


# extract nodes and relationships
nodes = [n for n in existing_model if "node" in n["description"].lower()]
rels = [n for n in existing_model if "node" not in n["description"].lower()]

# generate a unique id for nodes
with_uids = []

for entity in nodes:
    res = uid_chain.invoke(dict(entity=dumps(entity)))
    json = loads(res.jsonschema)
    with_uids = with_uids + json if type(json) == list else with_uids + [json]

# combine nodes and relationships
with_uids = with_uids + rels

Data Model Review

For peace of mind, it's worth reviewing the model for optimizations. The model_prompt did a good job of identifying the nouns and verbs, but on a more complex model it can get carried away.


One iteration treated the *_playlists and *_charts columns as IDs and attempted to create Stream nodes and IN_PLAYLIST relationships. I assume this was because counts above 1,000 are formatted with a comma (e.g., 1,001).


Nice idea, but perhaps a little too clever. Still, this shows how important it is to keep a human in the loop who understands the data structure.


# Add primary key/unique identifiers
review_prompt = PromptTemplate.from_template("""
You are a graph database expert reviewing a data model generated by a colleague.
Your task is to review the data model and ensure that it is fit for purpose.

Check for:

## Check for nested objects

Remember that Neo4j cannot store arrays of objects or nested objects.
These must be converted into into separate nodes with relationships between them.
You must include the new node and a reference to the relationship to the output schema.

## Check for Entities in properties

If there is a property that represents an array of IDs, a new node should be created for that entity.
You must include the new node and a reference to the relationship to the output schema.

# Keep Instructions

Ensure that the instructions for the nodes, relationships, and properties are clear and concise.
You may improve them but the detail must not be removed in any circumstances.

## Current Entity Schema

{entity}
""")

review_chain = review_prompt | llm.with_structured_output(JSONSchemaSpecification)

review_nodes = [n for n in with_uids if "node" in n["description"].lower()]
review_rels = [n for n in with_uids if "node" not in n["description"].lower()]

reviewed = []

for entity in review_nodes:
    res = review_chain.invoke(dict(entity=dumps(entity)))
    json = loads(res.jsonschema)
    reviewed = reviewed + json

# add relationships back in
reviewed = reviewed + review_rels

len(reviewed)

reviewed = with_uids


In a real-world scenario, I would want to run this step a few times to iteratively improve the data model. I would set a maximum limit and iterate up to that point, or until the data model object no longer changes.
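A rough sketch of what that loop could look like, reviewing the whole model in a single call for brevity (the article reviews one entity at a time, so treat this as an assumption about the shape of the loop rather than working code from the post):

max_iterations = 5
current = with_uids

for _ in range(max_iterations):
    res = review_chain.invoke(dict(entity=dumps(current)))
    updated = loads(res.jsonschema)

    # stop once the review no longer changes the data model
    if updated == current:
        break

    current = updated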

Generating Import Statements

At this point, the schema should be robust enough, and contain as much information as possible, for an LLM to generate a set of import scripts.


In line with the Neo4j data import recommendations, the file should be processed multiple times, with each pass importing a single node or relationship type, to avoid eager operations and locking.


import_prompt = PromptTemplate.from_template("""
Based on the data model, write a Cypher statement to import the following data from a CSV file into Neo4j.

Do not use LOAD CSV as this data will be imported using the Neo4j Python Driver, use UNWIND on the $rows parameter instead.

You are writing a multi-pass import process, so concentrate on the entity mentioned.

When importing data, you must use the following guidelines:
* follow the instructions in the description when identifying primary keys.
* Use the instructions in the description to determine the format of properties when a finding.
* When combining fields into an ID, use the apoc.text.slug function to convert any text to slug case and toLower to convert the string to lowercase - apoc.text.slug(toLower(row.`name`))
* If you split a property, convert it to a string and use the trim function to remove any whitespace - trim(toString(row.`name`))
* When combining properties, wrap each property in the coalesce function so the property is not null if one of the values is not set - coalesce(row.`id`, '') + '--'+ coalsece(row.`title`)
* Use the `column_name` field to map the CSV column to the property in the data model.
* Wrap all column names from the CSV in backticks - for example row.`column_name`.
* When you merge nodes, merge on the unique identifier and nothing else. All other properties should be set using `SET`.
* Do not use apoc.periodic.iterate, the files will be batched in the application.

Data Model:

{data_model}

Current Entity:

{entity}
""")


This chain requires a different output object from the previous steps. In this case, the cypher member is the most important, but I also wanted to include a chain_of_thought key to encourage chain-of-thought reasoning:


from typing import Optional

class CypherOutputSpecification(BaseModel):
    chain_of_thought: str = Field(description="Any reasoning used to write the Cypher statement")
    cypher: str = Field(description="The Cypher statement to import the data")
    notes: Optional[str] = Field(description="Any notes or closing remarks about the Cypher statement")

import_chain = import_prompt | llm.with_structured_output(CypherOutputSpecification)


The same process is then applied, iterating over each of the reviewed definitions and generating the Cypher:


import_cypher = []

for n in reviewed:
    print('\n\n------', n['title'])

    res = import_chain.invoke(dict(
        data_model=dumps(reviewed),
        entity=n
    ))

    import_cypher.append(res.cypher)
    print(res.cypher)


Console output:


------ Track
UNWIND $rows AS row
MERGE (t:Track {id: apoc.text.slug(toLower(coalesce(row.`track_name`, '') + '-' + coalesce(row.`released_year`, '')))})
SET t.name = trim(toString(row.`track_name`)),
    t.artist_count = toInteger(row.`artist_count`),
    t.released_year = toInteger(row.`released_year`),
    t.released_month = toInteger(row.`released_month`),
    t.released_day = toInteger(row.`released_day`),
    t.inSpotifyPlaylists = toInteger(row.`in_spotify_playlists`),
    t.inSpotifyCharts = toInteger(row.`in_spotify_charts`),
    t.streams = row.`streams`,
    t.inApplePlaylists = toInteger(row.`in_apple_playlists`),
    t.inAppleCharts = toInteger(row.`in_apple_charts`),
    t.inDeezerPlaylists = row.`in_deezer_playlists`,
    t.inDeezerCharts = toInteger(row.`in_deezer_charts`),
    t.inShazamCharts = row.`in_shazam_charts`,
    t.bpm = toInteger(row.`bpm`),
    t.key = trim(toString(row.`key`)),
    t.mode = trim(toString(row.`mode`)),
    t.danceability = toInteger(row.`danceability_%`),
    t.valence = toInteger(row.`valence_%`),
    t.energy = toInteger(row.`energy_%`),
    t.acousticness = toInteger(row.`acousticness_%`),
    t.instrumentalness = toInteger(row.`instrumentalness_%`),
    t.liveness = toInteger(row.`liveness_%`),
    t.speechiness = toInteger(row.`speechiness_%`),
    t.coverUrl = CASE row.`cover_url` WHEN 'Not Found' THEN '' ELSE trim(toString(row.`cover_url`)) END


------ Artist
UNWIND $rows AS row
WITH row, split(row.`artist(s)_name`, ',') AS artistNames
UNWIND artistNames AS artistName
MERGE (a:Artist {id: apoc.text.slug(toLower(trim(artistName)))})
SET a.name = trim(artistName)


------ PERFORMED_BY
UNWIND $rows AS row
UNWIND split(row.`artist(s)_name`, ',') AS artist_name
MERGE (t:Track {id: apoc.text.slug(toLower(row.`track_name`)) + '-' + trim(toString(row.`released_year`))})
MERGE (a:Artist {id: apoc.text.slug(toLower(trim(artist_name)))})
MERGE (t)-[:PERFORMED_BY]->(a)


This prompt took some engineering to get consistent results:


  • Sometimes the Cypher would contain a MERGE statement with multiple fields defined, which is suboptimal at best. If one of the columns is null, the entire import fails.
  • Sometimes the result contained apoc.periodic.iterate, which is no longer needed. I wanted code I could execute with the Python driver.
  • I had to reiterate that the specified column name should be used when creating relationships.
  • The LLM just wouldn't follow the instructions when using the unique identifier on the nodes at either end of the relationship, so it took a few attempts to get it to follow the instructions in the description. There was some back and forth between this prompt and the model_prompt.
  • Backticks were needed around column names that contain special characters (e.g., energy_%).


It would also be useful to split this into two prompts: one for nodes and one for relationships. But that is a task for another day.

Create the Unique Constraints

Next, the import scripts can be used as a basis for creating unique constraints in the database:


constraint_prompt = PromptTemplate.from_template("""
You are an expert graph database administrator.
Use the following Cypher statement to write a Cypher statement to create unique constraints on any properties used in a MERGE statement.

The correct syntax for a unique constraint is:

CREATE CONSTRAINT movie_title_id IF NOT EXISTS FOR (m:Movie) REQUIRE m.title IS UNIQUE;

Cypher:

{cypher}
""")

constraint_chain = constraint_prompt | llm.with_structured_output(CypherOutputSpecification)

constraint_queries = []

for statement in import_cypher:
    res = constraint_chain.invoke(dict(cypher=statement))
    statements = res.cypher.split(";")

    for cypher in statements:
        constraint_queries.append(cypher)


Console output:


CREATE CONSTRAINT track_id_unique IF NOT EXISTS FOR (t:Track) REQUIRE t.id IS UNIQUE
CREATE CONSTRAINT stream_id IF NOT EXISTS FOR (s:Stream) REQUIRE s.id IS UNIQUE
CREATE CONSTRAINT playlist_id IF NOT EXISTS FOR (p:Playlist) REQUIRE p.id IS UNIQUE
CREATE CONSTRAINT chart_id IF NOT EXISTS FOR (c:Chart) REQUIRE c.id IS UNIQUE
CREATE CONSTRAINT track_id_unique IF NOT EXISTS FOR (t:Track) REQUIRE t.id IS UNIQUE
CREATE CONSTRAINT stream_id_unique IF NOT EXISTS FOR (s:Stream) REQUIRE s.id IS UNIQUE
CREATE CONSTRAINT track_id_unique IF NOT EXISTS FOR (t:Track) REQUIRE t.id IS UNIQUE
CREATE CONSTRAINT playlist_id_unique IF NOT EXISTS FOR (p:Playlist) REQUIRE p.id IS UNIQUE
CREATE CONSTRAINT track_id_unique IF NOT EXISTS FOR (track:Track) REQUIRE track.id IS UNIQUE
CREATE CONSTRAINT chart_id_unique IF NOT EXISTS FOR (chart:Chart) REQUIRE chart.id IS UNIQUE


Sometimes this prompt would return statements for indexes as well as constraints, hence the split on the semicolon.
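The output above also contains duplicate statements. They are harmless because of IF NOT EXISTS, but if you prefer, a one-liner can strip empty and repeated entries before execution (my addition, not in the article):

# Drop empty strings and duplicates while preserving order
constraint_queries = list(dict.fromkeys(q.strip() for q in constraint_queries if q.strip()))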

Run the Import

With everything in place, it was time to execute the Cypher statements:


from os import getenv
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    getenv("NEO4J_URI"),
    auth=(
        getenv("NEO4J_USERNAME"),
        getenv("NEO4J_PASSWORD")
    )
)

with driver.session() as session:
    # truncate the db
    session.run("MATCH (n) DETACH DELETE n")

    # create constraints
    for q in constraint_queries:
        if q.strip() != "":
            session.run(q)

    # import the data
    for q in import_cypher:
        if q.strip() != "":
            res = session.run(q, rows=rows).consume()
            print(q)
            print(res.counters)
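One thing the snippet glosses over is that rows is never shown being built, and the import prompt promises that "the files will be batched in the application". A minimal sketch of that batching, assuming rows comes from df.to_dict('records'):

rows = df.to_dict("records")  # assumption: not shown in the article

def batches(items, size=1000):
    # Yield successive chunks so each transaction stays reasonably small
    for i in range(0, len(items), size):
        yield items[i:i + size]

with driver.session() as session:
    for q in import_cypher:
        if q.strip() != "":
            for chunk in batches(rows):
                session.run(q, rows=chunk)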

QA on the Dataset

This post wouldn't be complete without some QA on the dataset using the GraphCypherQAChain:


from langchain.chains import GraphCypherQAChain
from langchain_community.graphs import Neo4jGraph

graph = Neo4jGraph(
    url=getenv("NEO4J_URI"),
    username=getenv("NEO4J_USERNAME"),
    password=getenv("NEO4J_PASSWORD"),
    enhanced_schema=True
)

qa = GraphCypherQAChain.from_llm(
    llm,
    graph=graph,
    allow_dangerous_requests=True,
    verbose=True
)

Most Popular Artists

Who are the most popular artists in the database?


qa.invoke({"query": "Who are the most popular artists?"})


> Entering new GraphCypherQAChain chain...
Generated Cypher:
cypher
MATCH (:Track)-[:PERFORMED_BY]->(a:Artist)
RETURN a.name, COUNT(*) AS popularity
ORDER BY popularity DESC
LIMIT 10

Full Context:
[{'a.name': 'Bad Bunny', 'popularity': 40}, {'a.name': 'Taylor Swift', 'popularity': 38}, {'a.name': 'The Weeknd', 'popularity': 36}, {'a.name': 'SZA', 'popularity': 23}, {'a.name': 'Kendrick Lamar', 'popularity': 23}, {'a.name': 'Feid', 'popularity': 21}, {'a.name': 'Drake', 'popularity': 19}, {'a.name': 'Harry Styles', 'popularity': 17}, {'a.name': 'Peso Pluma', 'popularity': 16}, {'a.name': '21 Savage', 'popularity': 14}]

> Finished chain.
{
  "query": "Who are the most popular artists?",
  "result": "Bad Bunny, Taylor Swift, and The Weeknd are the most popular artists."
}


It looks like the LLM judged popularity by the number of tracks an artist has performed on rather than by total streams.
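Ranking by total streams instead is a straightforward Cypher query. Here is a sketch using the driver directly, assuming the streams property can be cast to an integer:

with driver.session() as session:
    result = session.run("""
        MATCH (t:Track)-[:PERFORMED_BY]->(a:Artist)
        WHERE toInteger(t.streams) IS NOT NULL
        RETURN a.name AS artist_name, sum(toInteger(t.streams)) AS total_streams
        ORDER BY total_streams DESC
        LIMIT 10
    """)

    for record in result:
        print(record["artist_name"], record["total_streams"])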

Beats Per Minute

Which track has the highest BPM?


qa.invoke({"query": "Which track has the highest BPM?"})


> Entering new GraphCypherQAChain chain...
Generated Cypher:
cypher
MATCH (t:Track)
RETURN t
ORDER BY t.bpm DESC
LIMIT 1

Full Context:
[{'t': {'id': 'seven-feat-latto-explicit-ver--2023'}}]

> Finished chain.
{
  "query": "Which track has the highest BPM?",
  "result": "I don't know the answer."
}

Improving the Cypher Generation Prompt

In this case, the Cypher looks fine and the correct result was included in the prompt, but gpt-4o couldn't interpret the answer. It looks like the CYPHER_GENERATION_PROMPT passed to the GraphCypherQAChain needs an additional instruction to make the column names more verbose.


Always use verbose column names in the Cypher statement using the label and property names. For example, use 'person_name' instead of 'name'.


GraphCypherQAChain with a custom prompt:


CYPHER_GENERATION_TEMPLATE = """Task: Generate Cypher statement to query a graph database.
Instructions:
Use only the provided relationship types and properties in the schema.
Do not use any other relationship types or properties that are not provided.
Schema:
{schema}
Note: Do not include any explanations or apologies in your responses.
Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.
Do not include any text except the generated Cypher statement.

Always use verbose column names in the Cypher statement using the label and property names. For example, use 'person_name' instead of 'name'.

Include data from the immediate network around the node in the result to provide extra context. For example, include the Movie release year, a list of actors and their roles, or the director of a movie.

When ordering by a property, add an `IS NOT NULL` check to ensure that only nodes with that property are returned.

Examples: Here are a few examples of generated Cypher statements for particular questions:

# How many people acted in Top Gun?
MATCH (m:Movie {{name:"Top Gun"}})
RETURN COUNT {{ (m)<-[:ACTED_IN]-() }} AS numberOfActors

The question is:
{question}"""

CYPHER_GENERATION_PROMPT = PromptTemplate(
    input_variables=["schema", "question"],
    template=CYPHER_GENERATION_TEMPLATE
)

qa = GraphCypherQAChain.from_llm(
    llm,
    graph=graph,
    allow_dangerous_requests=True,
    verbose=True,
    cypher_prompt=CYPHER_GENERATION_PROMPT,
)

Tracks Performed by the Most Artists

Graphs excel at counting the number of relationships by type and direction.


qa.invoke({"query": "Which tracks are performed by the most artists?"})


> Entering new GraphCypherQAChain chain...
Generated Cypher:
cypher
MATCH (t:Track)
WITH t, COUNT { (t)-[:PERFORMED_BY]->(:Artist) } as artist_count
WHERE artist_count IS NOT NULL
RETURN t.id AS track_id, t.name AS track_name, artist_count
ORDER BY artist_count DESC

Full Context:
[{'track_id': 'los-del-espacio-2023', 'track_name': 'Los del Espacio', 'artist_count': 8}, {'track_id': 'se-le-ve-2021', 'track_name': 'Se Le Ve', 'artist_count': 8}, {'track_id': 'we-don-t-talk-about-bruno-2021', 'track_name': "We Don't Talk About Bruno", 'artist_count': 7}, {'track_id': 'cayï-ï-la-noche-feat-cruz-cafunï-ï-abhir-hathi-bejo-el-ima--2022', 'track_name': None, 'artist_count': 6}, {'track_id': 'jhoome-jo-pathaan-2022', 'track_name': 'Jhoome Jo Pathaan', 'artist_count': 6}, {'track_id': 'besharam-rang-from-pathaan--2022', 'track_name': None, 'artist_count': 6}, {'track_id': 'nobody-like-u-from-turning-red--2022', 'track_name': None, 'artist_count': 6}, {'track_id': 'ultra-solo-remix-2022', 'track_name': 'ULTRA SOLO REMIX', 'artist_count': 5}, {'track_id': 'angel-pt-1-feat-jimin-of-bts-jvke-muni-long--2023', 'track_name': None, 'artist_count': 5}, {'track_id': 'link-up-metro-boomin-don-toliver-wizkid-feat-beam-toian-spider-verse-remix-spider-man-across-the-spider-verse--2023', 'track_name': None, 'artist_count': 5}]

> Finished chain.
{
  "query": "Which tracks are performed by the most artists?",
  "result": "The tracks \"Los del Espacio\" and \"Se Le Ve\" are performed by the most artists, with each track having 8 artists."
}

Summary

The CSV analysis and modeling is the most time-consuming part; it can take more than five minutes to generate.


The cost itself was quite low. In eight hours of experimenting, I certainly sent hundreds of requests and ended up spending a dollar or so.


There were a few challenges in getting to this point:


  • The prompts needed several iterations to get right. This could be addressed by fine-tuning the model or by providing few-shot examples.
  • JSON responses from GPT-4o can be inconsistent. I was recommended json-repair, which worked better than trying to get the LLM to validate its own JSON output (see the short example below).
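For those unfamiliar with json-repair, here is a toy illustration of the kind of fix it applies; the broken string is my own example:

from json_repair import loads

# Single quotes and a trailing comma, the sort of thing an LLM occasionally emits
broken = "{'title': 'Track', 'type': 'object',}"
print(loads(broken))  # {'title': 'Track', 'type': 'object'}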


I can see this approach working well in a LangGraph implementation where the operations run in sequence, allowing an LLM to build and refine the model. As new models are released, they may also benefit from fine-tuning.
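As a rough illustration of what that cyclic structure could look like, here is a minimal LangGraph sketch; the node body is a placeholder rather than the real model_chain call, and the state fields are my own naming:

from typing import List, TypedDict
from langgraph.graph import StateGraph, END


class ModelingState(TypedDict):
    columns: List[str]  # columns still to be processed
    data_model: list    # the evolving schema


def model_next_column(state: ModelingState) -> dict:
    # Placeholder: the real node would call model_chain for the next column
    # and merge the result into the data model
    column, rest = state["columns"][0], state["columns"][1:]
    return {"columns": rest, "data_model": state["data_model"] + [{"column": column}]}


def should_continue(state: ModelingState) -> str:
    return "model_column" if state["columns"] else END


workflow = StateGraph(ModelingState)
workflow.add_node("model_column", model_next_column)
workflow.set_entry_point("model_column")
workflow.add_conditional_edges("model_column", should_continue)

app = workflow.compile()
final_state = app.invoke({"columns": ["track_name", "artist(s)_name"], "data_model": []})
print(final_state["data_model"])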

Learn More

To learn more about streamlining the knowledge graph creation process with LLMs, check out Harnessing Large Language Models With Neo4j. To learn more about LangGraph and Neo4j, read Create a Neo4j GraphRAG Workflow Using LangChain and LangGraph. And to learn more about fine-tuning, take a look at Knowledge Graphs and LLMs: Fine-Tuning vs. Retrieval-Augmented Generation.


Feature image: graph model showing tracks with PERFORMED_BY relationships to artists. Image by the author.


To learn more about this topic, join us at NODES 2024 on November 7, our free virtual developer conference on intelligent apps, knowledge graphs, and AI. Register now!

