Too Long; Didn't Read
A brief document, inspired by OpenAI’s latest drama, that captures the main techniques for interacting with a large language model (LLM) to make it more relevant to a given use case.
I did not detail some of them, such as fine-tuning or parameter tuning (e.g., model temperature, a parameter that reduces output variability and mainly affects the model’s creativity), but I mentioned them at the end. The goal was an incremental approach that captures all this drama QUICKLY!
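To make the temperature remark concrete, here is a minimal sketch of how that parameter is typically passed to an LLM API. It assumes the official `openai` Python client; the helper function `build_request` and the model name are hypothetical choices for illustration.

```python
def build_request(prompt: str, temperature: float = 0.2) -> dict:
    """Build keyword arguments for a chat-completion call.

    Lower temperature -> more deterministic, less "creative" output;
    higher temperature -> more varied output.
    """
    # Clamp to the range commonly accepted by chat APIs ([0.0, 2.0] for OpenAI).
    temperature = max(0.0, min(2.0, temperature))
    return {
        "model": "gpt-4o-mini",  # hypothetical model choice
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

# The dict can then be unpacked into the client call, e.g.:
# client.chat.completions.create(**build_request("Summarize the drama."))
```

A low value such as 0.2 is a reasonable default when you want reproducible, focused answers; raise it toward 1.0 or above when you want more variety.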