
Getting the Most out of a Large Language Model

by Pranav Chaudhary, February 1st, 2024

Too Long; Didn't Read

A Large Language Model (LLM) is an ML model trained on a huge text dataset. Once an LLM is trained, it can be adapted to perform a variety of tasks such as chatbots, text generation, Q&A, etc. There are multiple avenues that can be combined to get the most out of an LLM. One such way is using prompts to extract responses from the model.

A Large Language Model is a type of foundational model, which is an ML model trained on a huge text dataset. These models range in size from a few billion to a few hundred billion parameters. Training a large language model can cost millions of dollars, consume thousands of resource and computation hours, and require complex algorithms. Once an LLM is trained, it can be adapted to perform a variety of tasks such as text generation, Q&A, etc. Multiple avenues can be combined to get the most out of an LLM.


Let’s get into them.


Prompting

A prompt is the input given to a model to elicit a response from it.


A prompt may contain:

  • Context: additional information that guides the model’s response.
  • Instruction: the specific task the model is supposed to perform.
  • Input Data: the input or question posed to the model.
  • Output Indicator: the type or format of the required output.


The LLM receives the input as a prompt and, based on the various prompt parameters, produces an output. For example:



The following is a historical stock price.
Predict the stock price for the next 30 days in a bulleted list

1/1: $300, 1/2: $300 .... 1/26: $350


In the above example:


  • Context: The following is a historical stock price.
  • Instruction: Predict the stock price for the next 30 days
  • Input Data: 1/1: $300, 1/2: $300 .... 1/26: $350
  • Output Indicator: in a bulleted list
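To make the structure concrete, here is a minimal sketch in Python (purely illustrative) that assembles these four components into the final prompt string:

    # A minimal sketch: assembling a prompt from its four components.
    # The strings are taken from the example above; the "...." stands in
    # for the elided price history.
    context = "The following is a historical stock price."
    instruction = "Predict the stock price for the next 30 days"
    output_indicator = "in a bulleted list"
    input_data = "1/1: $300, 1/2: $300 .... 1/26: $350"

    prompt = f"{context}\n{instruction} {output_indicator}\n\n{input_data}"
    print(prompt)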


Inference Parameters

Inference parameters are settings provided to an LLM to control its randomness, token selection, probabilities, etc. They let users influence and shape the LLM's output.


  • Max Length: This parameter controls the maximum length of the output generated by the LLM.
  • Top-k Sampling: This parameter helps control the randomness and diversity of the generated text. It constrains the next-token selection to the k most likely tokens at each step.
  • Top-p Sampling: Nucleus, or top-p, sampling helps control the diversity of the generated output by choosing the next token from the smallest set of tokens whose probabilities sum to a given value p.
  • Temperature: This controls the randomness and diversity of the output. When set to 0, the model produces more deterministic output; the higher the temperature, the more random the output will be.
  • Repetition Penalty: This reduces the repetition of tokens that already appear in the output, letting the model generate more diverse output.
  • Stop Sequences: These are sequences that direct the LLM to stop generating further text when encountered.


These inference parameters are numerical values that can be tuned by analyzing the output. Combined with advanced prompting, they lead to more efficient output generation.
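As a concrete illustration, here is a minimal sketch using the Hugging Face transformers library; the model choice and the parameter values are assumptions for demonstration, not recommendations:

    # A minimal sketch: passing inference parameters to a text-generation model.
    # Values are illustrative; gpt2 is used only so the snippet runs.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    output = generator(
        "The following is a historical stock price.",
        max_new_tokens=100,      # Max Length: cap on the number of generated tokens
        do_sample=True,          # enable sampling so top-k/top-p/temperature apply
        top_k=50,                # Top-k Sampling: keep the 50 most likely tokens
        top_p=0.9,               # Top-p Sampling: nucleus of cumulative probability 0.9
        temperature=0.7,         # Temperature: lower values are more deterministic
        repetition_penalty=1.2,  # Repetition Penalty: discourage repeated tokens
    )
    print(output[0]["generated_text"])

Stop sequences are handled differently across libraries (recent versions of transformers, for instance, accept a stop_strings generation argument), so check your library's documentation.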


Advanced Prompt Engineering

An LLM can produce the desired output from basic prompts, which is useful for simple tasks like writing a poem or story, or summarizing a given text. For more context-specific and ambiguous inputs and outputs, where several rounds of dialogue are needed to understand the task and produce the artifact, advanced prompt techniques are useful.


There are several types of advanced prompt engineering:


  • Zero-Shot Prompting: This is a way to prompt an LLM to produce output for new data that was not part of the model’s training data, without providing any examples in the prompt. Use cases for such prompting include classification, sentiment analysis, summarization, etc. A minimal sketch of sending such a prompt to a model follows the example below.


    Classify the text into positive, neutral or negative:
    Text: That shot selection was awesome.
    Classification:
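A minimal sketch of sending this zero-shot prompt to a model; gpt2 is used only so the snippet runs, and an instruction-tuned model would be needed for sensible results:

    # A minimal sketch: a zero-shot classification prompt sent to an LLM.
    from transformers import pipeline

    llm = pipeline("text-generation", model="gpt2")
    prompt = (
        "Classify the text into positive, neutral or negative:\n"
        "Text: That shot selection was awesome.\n"
        "Classification:"
    )
    result = llm(prompt, max_new_tokens=5, do_sample=False)  # deterministic label
    print(result[0]["generated_text"])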
    


  • One-Shot Prompting: This prompting provides one clear and descriptive example, and the model produces output by imitating it.


    Prompt: "Generate a recipe for chocolate chip cookies."
    Example Recipe: "Ingredients: butter, sugar, eggs, flour, chocolate chips. Instructions: Preheat oven to 350°F. Mix butter and sugar..."
    Generated Recipe: 
    


  • Few-Shot Prompting: This is a prompting method where multiple sample input/output pairs are provided as part of the prompt. The model then predicts the output for a new input based on those examples, as in the example and sketch below.


    Text: The new design looks great!
    Classification: Pos
    Text: The furniture is small.
    Classification: Neu
    Text: I don't like your attitude
    Classification: Neg
    Text: That shot selection was awful
    Classification:
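A minimal sketch (illustrative) that builds the few-shot prompt above from labeled examples:

    # A minimal sketch: building a few-shot sentiment prompt from labeled pairs.
    examples = [
        ("The new design looks great!", "Pos"),
        ("The furniture is small.", "Neu"),
        ("I don't like your attitude", "Neg"),
    ]
    query = "That shot selection was awful"

    lines = [f"Text: {text}\nClassification: {label}" for text, label in examples]
    lines.append(f"Text: {query}\nClassification:")
    few_shot_prompt = "\n".join(lines)
    print(few_shot_prompt)  # ready to send to the model; expected label: Neg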
    


  • Chain-of-Thought Prompting: For complex tasks that require reasoning, this prompting technique helps the LLM work through intermediate reasoning steps to produce the output. Chain-of-thought prompting can be combined with one-shot or few-shot prompts to yield better output, as sketched after the example below.


    Prompt: The odd numbers in this group add up to an even number: 4, 8, 9, 15, 12, 2, 1.
    Answer: Adding all the odd numbers (9, 15, 1) gives 25. The answer is False.
    Prompt: The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.
    Answer:
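A minimal sketch that combines the worked example above (one-shot chain-of-thought) with the new question in a single prompt; a common zero-shot variant instead appends "Let's think step by step." to the question:

    # A minimal sketch: a one-shot chain-of-thought prompt built from the example above.
    worked_example = (
        "Prompt: The odd numbers in this group add up to an even number: "
        "4, 8, 9, 15, 12, 2, 1.\n"
        "Answer: Adding all the odd numbers (9, 15, 1) gives 25. The answer is False."
    )
    new_question = (
        "Prompt: The odd numbers in this group add up to an even number: "
        "15, 32, 5, 13, 82, 7, 1.\n"
        "Answer:"
    )
    cot_prompt = worked_example + "\n" + new_question  # model imitates the reasoning
    print(cot_prompt)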
    


  • Meta-Prompting: This is a technique of leveraging the LLM itself to generate a prompt for the required output. The AI generates the prompt dynamically, and it can then be refined in real time; a two-step sketch follows the example below.


    Prompt: Generate a prompt to write a blog in different styles and tones.
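A minimal two-step sketch of this flow; the model choice and settings are illustrative assumptions:

    # A minimal sketch of meta-prompting: the model first writes a prompt,
    # then that generated prompt is fed back in for the actual task.
    # gpt2 is used only so the snippet runs; use an instruction-tuned model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    meta_prompt = "Generate a prompt to write a blog in different styles and tones."
    generated = generator(meta_prompt, max_new_tokens=60)[0]["generated_text"]

    # Step two: use the model-written prompt (optionally refined by hand first).
    blog_post = generator(generated, max_new_tokens=200)[0]["generated_text"]
    print(blog_post)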
    


  • Constraint-Based Prompting: This is another prompting technique that introduces constraints to focus the LLM on specific aspects of the response, such as limiting the response length or dictating its structure.


    Prompt: Generate a title for the given article in 10 words.
    


Conclusion

In conclusion, an LLM is a potent AI model capable of delivering effective output for a variety of tasks, thanks to extensive training on vast datasets. Prompt engineering emerges as a valuable tool for guiding an LLM to generate the desired outputs; when prompted well, an LLM can even produce output for data it has never encountered. Crafting a good, effective prompt is an iterative process, especially for complex tasks like reasoning, where advanced and mixed prompts are often required. Tuning the various inference parameters further enhances an LLM's effectiveness. The combination of prompt engineering and well-tuned inference parameters can yield efficient outputs from an LLM.