
Bioinformatics and biomedical informatics with ChatGPT: year one review: Instruction Finetuning


Authors:

(1) Jinge Wang, Department of Microbiology, Immunology & Cell Biology, West Virginia University, Morgantown, WV 26506, USA;

(2) Zien Cheng, Department of Microbiology, Immunology & Cell Biology, West Virginia University, Morgantown, WV 26506, USA;

(3) Qiuming Yao, School of Computing, University of Nebraska-Lincoln, Lincoln, NE 68588, USA;

(4) Li Liu, College of Health Solutions, Arizona State University, Phoenix, AZ 85004, USA and Biodesign Institute, Arizona State University, Tempe, AZ 85281, USA;

(5) Dong Xu, Department of Electrical Engineering and Computer Science, Christopher S. Bond Life Sciences Center, University of Missouri, Columbia, MO 65211, USA;

(6) Gangqing Hu, Department of Microbiology, Immunology & Cell Biology, West Virginia University, Morgantown, WV 26506, USA ([email protected]).

Table of Links

Abstract and 1. Introduction

2. Omics

3. Genetics

4. Biomedical Text Mining and 4.1. Performance Assessments across typical tasks

4.2. Biological pathway mining

5. Drug Discovery

5.1. Human-in-the-Loop and 5.2. In-context Learning

5.3. Instruction Finetuning

6. Biomedical Image Understanding

7. Bioinformatics Programming

7.1 Application in Applied Bioinformatics

7.2. Biomedical Database Access

7.3. Online Tools for Coding with ChatGPT

7.4 Benchmarks for Bioinformatics Coding

8. Chatbots in Bioinformatics Education

9. Discussion and Future Perspectives

Author Contributions, Acknowledgements, Conflict of Interest Statement, Ethics Statement, and References

5.3. INSTRUCTION FINETUNING

Task-tuning language models for specific tasks within drug discovery has shown considerable promise, as evidenced by several recent projects. ChatMol[69] is a chatbot based on the T5 model[70], finetuned with experimental property data and molecular spatial knowledge to improve its ability to describe and edit target molecules. Task-tuning GPT-3 has demonstrated notable advantages over traditional machine learning approaches, particularly in tasks where training data are limited[66]. Task-tuning also substantially improves GPT-3 at extracting drug-drug interaction (DDI) triplets, with a marked F1 score gain over GPT-4 with few-shot prompting[71]. These projects demonstrate that task-tuning of foundation models can effectively capture complex molecule-level knowledge relevant to drug discovery.
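In practice, task-tuning means supervised finetuning on a narrow set of input-output examples. As a minimal illustration only (the cited studies do not publish this exact format, and the SMILES strings and labels below are hypothetical), a small molecular-property dataset might be serialized as JSONL prompt-completion records for a finetuning pipeline:

```python
import json

# Hypothetical molecular-property examples (SMILES -> solubility label);
# the real task-tuning datasets in the cited work differ in size and content.
examples = [
    {"smiles": "CCO", "label": "soluble"},
    {"smiles": "c1ccccc1", "label": "insoluble"},
]

def to_finetune_record(example):
    """Serialize one example as a prompt/completion pair (JSONL style)."""
    prompt = (
        f"Predict the aqueous solubility of the molecule {example['smiles']}.\n"
        "Answer:"
    )
    return {"prompt": prompt, "completion": " " + example["label"]}

records = [to_finetune_record(e) for e in examples]
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

Each JSONL line then serves as one training example for supervised finetuning; because the target task is fixed, even a small dataset like this can specialize the model, which is consistent with the small-data advantage reported above.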


Instruction tuning diverges from task tuning by training an LLM across a spectrum of tasks using instruction-output pairs, enabling the model to address new, unseen tasks[72]. DrugAssist[63], a Llama2-7B-based model, although instruction-tuned on data covering individual molecular properties, achieved competitive results when simultaneously optimizing multiple properties. Similarly, DrugChat[61], a Vicuna-13B-based model instruction-tuned with examples from databases such as ChEMBL and PubChem, effectively answered open-ended questions about graph-represented drug compounds. Mol-Instructions[73], a large-scale instruction dataset tailored for the biomolecular domain, demonstrated its effectiveness in finetuning models such as Llama-7B on a variety of tasks, including molecular property prediction and biomedical text mining.
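The instruction-output pairs mentioned above are typically rendered into a fixed prompt template before training. Below is a sketch using the widely adopted Alpaca-style template; the exact templates and example content used by DrugAssist, DrugChat, and Mol-Instructions differ, and the molecule example here is purely illustrative:

```python
# Alpaca-style instruction template; the field values below are illustrative
# and are not drawn from the cited drug-discovery datasets.
TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def format_pair(instruction, inp, output):
    """Render one instruction-output pair into a single training string."""
    return TEMPLATE.format(instruction=instruction, input=inp, output=output)

text = format_pair(
    "Describe the key functional groups of the molecule.",
    "SMILES: CC(=O)Oc1ccccc1C(=O)O",
    "The molecule contains an ester group and a carboxylic acid group.",
)
print(text)
```

Because many different tasks share the same template, the model learns to follow the instruction field rather than a single task format, which is what allows generalization to unseen tasks.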


Task-tuning may be combined with instruction tuning to combine the strengths of each. ChemDFM[74], pre-trained on LLaMa-13B with a chemically rich corpus and further enhanced through instruction tuning, excelled in a range of chemical tasks, particularly molecular property prediction and reaction prediction, outperforming models like GPT-4 with in-context learning. InstructMol[75] is a multi-modality instruction-tuning-based LLM. It uses a two-stage tuning process: first, instruction tuning with molecule graph-text caption pairs to integrate molecular knowledge, and then task-specific tuning for three drug discovery-related molecular tasks. Applied to Vicuna-7B, InstructMol surpassed other leading open-source LLMs and narrowed the performance gap with specialized models[75]. These developments underscore the effectiveness of both task and instruction tuning as strategies for enhancing generalized foundation models with domain-specific knowledge to address specific challenges in drug discovery.
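The two-stage recipe described for InstructMol can be sketched abstractly as an alignment stage followed by per-task tuning. All function and dataset names below are hypothetical placeholders for illustrating the stage ordering, not the authors' API:

```python
# Abstract sketch of a two-stage tuning pipeline (the stage names follow the
# description in the text; every function here is a hypothetical placeholder).
def run_two_stage_tuning(model, caption_pairs, task_datasets, tune_fn):
    """Stage 1: align on molecule graph-text caption pairs.
    Stage 2: task-specific tuning on each downstream task."""
    model = tune_fn(model, caption_pairs, stage="alignment")
    for task in task_datasets:
        model = tune_fn(model, task, stage=f"task:{task['name']}")
    return model

# Toy stand-in: the "model" is just a log of the tuning stages applied to it.
def toy_tune(model, data, stage):
    return model + [stage]

stages = run_two_stage_tuning(
    [],
    caption_pairs=["graph-caption pair"],
    task_datasets=[{"name": "property"}, {"name": "reaction"}],
    tune_fn=toy_tune,
)
print(stages)  # ['alignment', 'task:property', 'task:reaction']
```

The ordering matters: the alignment stage injects broad molecule-text knowledge once, so each subsequent task-specific stage starts from a domain-aware checkpoint rather than the base model.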


This paper is available on arxiv under CC BY 4.0 DEED license.

