Developing Function Calling Models: Comparing Full Training and LoRA on Gemma-2B

by Language Models (dot tech), April 8th, 2025

Too Long; Didn't Read

We employ the Google Gemma-2B model as the pretrained model in our framework. Our approach incorporates two distinct training methodologies: full model training and LoRA model training.

Abstract and 1. Introduction

2 Related works

3 Methodology and 3.1 Causal language model as a classification model

3.2 Functional token

3.3 Dataset collection

3.4 Model development and training

4 Experiments and 4.1 Android function calls

4.2 Extension to Vehicle, Yelp, and DoorDash function sets

4.3 Full and partial training datasets and 4.4 Full training and LoRA training

4.5 Parallel and nested function call and 4.6 Weighted loss function for special tokens

5 Discussion and future works and References


Appendix

A.1 Android function examples

A.2 Vehicle function examples

3.4 Model development and training

We employ the Google Gemma-2B model as the pretrained model in our framework. Our approach incorporates two distinct training methodologies: full model training and LoRA model training.


Figure 3: The Process of Generating the Dataset: This involves two critical stages: (1) creation of solvable queries specific to certain APIs and generation of the appropriate function calls for them, and (2) creation of unsolvable queries, complemented by unrelated function bodies. A binary validation mechanism provides rigorous verification, ensuring the collection of an optimized training dataset poised to significantly improve model functionality.
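The binary validation step in the caption above acts as an accept/reject filter over generated (query, function call) pairs. The sketch below is a minimal illustration of that idea, assuming a hypothetical verifier callable (for example, a larger LLM wrapped in a text-in/text-out function); the exact verifier model and prompt wording used by the authors are not specified here.

```python
# Minimal sketch of a binary validation filter over generated training examples.
# The `verifier` callable and the prompt wording are illustrative assumptions,
# not the authors' implementation.
from typing import Callable, Dict, List


def is_valid_example(
    query: str,
    function_call: str,
    api_doc: str,
    verifier: Callable[[str], str],
) -> bool:
    """Ask a verifier model (any text-in/text-out callable) whether the
    proposed function call solves the query; accept only on an explicit YES."""
    prompt = (
        "You are validating training data for a function-calling model.\n"
        f"API documentation:\n{api_doc}\n\n"
        f"User query: {query}\n"
        f"Proposed function call: {function_call}\n\n"
        "Does the function call correctly solve the query? Answer YES or NO."
    )
    return verifier(prompt).strip().upper().startswith("YES")


def filter_dataset(
    candidates: List[Dict[str, str]],
    api_doc: str,
    verifier: Callable[[str], str],
) -> List[Dict[str, str]]:
    """Keep only candidate examples that pass the binary check."""
    return [
        ex for ex in candidates
        if is_valid_example(ex["query"], ex["call"], api_doc, verifier)
    ]
```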


Figure 4: Accuracy Plot for Benchmark: This analysis includes Llama-7B with RAG, GPT-3.5 with RAG, GPT-3.5, GPT-4, and the Octopus series models, labeled Octopus-0, Octopus-1, Octopus-2, and Octopus-3. The distinction among the Octopus models arises from dataset size and training methodology. The original Octopus-0 model was trained with the full model approach on 1K data points per API. Octopus-1, while also using 1K data points per API, was trained with the LoRA method. Octopus-2 and Octopus-3 followed full model training but with reduced data points of 500 and 100, respectively. For the comprehensive differences among these models, refer to Table 1.


For full model training, we utilize an AdamW optimizer with a learning rate of 5e-5, 10 warm-up steps, and a linear learning-rate scheduler. The same optimizer and learning-rate configuration are applied to LoRA training. We set the LoRA rank to 16 and apply LoRA to the following modules: q_proj, k_proj, v_proj, o_proj, up_proj, and down_proj. The LoRA alpha parameter is set to 32. For both training methods, full model and LoRA, we set the number of epochs to 3.
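These hyperparameters map directly onto a standard fine-tuning configuration. Below is a minimal sketch assuming the Hugging Face transformers and peft libraries and a caller-supplied, pre-tokenized train_dataset; the batch size and the training framework are not reported in the paper, so those parts are placeholders rather than the authors' exact setup. Setting use_lora=False reproduces the full model training path.

```python
# Illustrative fine-tuning setup mirroring the hyperparameters reported above.
# Assumptions: Hugging Face `transformers` and `peft`; `train_dataset` is a
# pre-tokenized dataset supplied by the caller; batch size is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model


def build_trainer(train_dataset, use_lora: bool = True) -> Trainer:
    model_name = "google/gemma-2b"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    if use_lora:
        lora_config = LoraConfig(
            r=16,             # LoRA rank
            lora_alpha=32,    # LoRA alpha
            target_modules=["q_proj", "k_proj", "v_proj",
                            "o_proj", "up_proj", "down_proj"],
            task_type="CAUSAL_LM",
        )
        model = get_peft_model(model, lora_config)

    args = TrainingArguments(
        output_dir="gemma-2b-function-calling",
        learning_rate=5e-5,              # same LR for full and LoRA training
        warmup_steps=10,
        lr_scheduler_type="linear",
        num_train_epochs=3,
        optim="adamw_torch",             # AdamW optimizer
        per_device_train_batch_size=1,   # batch size not reported; placeholder
    )
    return Trainer(model=model, args=args,
                   train_dataset=train_dataset, tokenizer=tokenizer)


# Usage: build_trainer(my_tokenized_dataset, use_lora=True).train()
```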


This paper is available on arXiv under a CC BY-NC-SA 4.0 DEED license.

Authors:

(1) Wei Chen, Stanford University, with equal contribution, corresponding author ({weichen6}@stanford.edu);

(2) Zhiyuan Li, Stanford University, corresponding author ({zhiyuan8}@stanford.edu).

