Ensuring Robustness in LLM-based Biomedical Annotation

Edited by:

Jin-Dong Kim, PhD, Database Center for Life Science - Research Organization of Information and Systems, Japan
Zhiyong Lu, PhD, National Library of Medicine - National Institutes of Health, United States 
Fabio Rinaldi, PhD, Dalle Molle Institute for Artificial Intelligence - University of Italian Switzerland, Switzerland

Submission Status: Open   |   Submission Deadline: 29 August 2025


Please note that this Collection accepts submissions ONLY from participating authors at the 9th edition of the Biomedical Linked Annotation Hackathon (BLAH), held in January 2025.

Image credit: © vectorfusionart / stock.adobe.com

Meet the Guest Editors

Jin-Dong Kim, PhD, Database Center for Life Science - Research Organization of Information and Systems, Japan

Dr. Kim serves as a Project Associate Professor at the Database Center for Life Science (DBCLS) within the Research Organization of Information and Systems (ROIS). His work focuses on natural language processing (NLP) and text mining tailored to the life sciences. Previously, he held the position of project lecturer at the University of Tokyo until 2010. Dr. Kim has authored over 100 peer-reviewed scientific papers, accumulating more than 7,000 citations. His efforts are directed towards improving the accessibility and interoperability of text mining resources in the life sciences through long-term initiatives such as PubAnnotation and PubDictionaries. He has also been instrumental in organizing key events, including the Biomedical Linked Annotation Hackathon (BLAH) series and the BioNLP Open Shared Task (BioNLP-OST) series.

Zhiyong Lu, PhD, National Library of Medicine - National Institutes of Health, United States of America

Dr. Zhiyong Lu is a tenured Senior Investigator in the NIH/NLM Intramural Research Program, leading research in biomedical text and image processing, information retrieval, and AI/machine learning. In his role as Deputy Director for Literature Search at NCBI, Dr. Lu oversees the overall R&D efforts to improve literature search and information access in resources such as PubMed and LitCovid, which are used by millions worldwide each day. Additionally, Dr. Lu is Adjunct Professor of Computer Science at the University of Illinois Urbana-Champaign (UIUC). Dr. Lu serves as an Associate Editor of Bioinformatics, Organizer of the BioCreative NLP challenge, and Chair of the ISCB Text Mining COSI. With over 350 peer-reviewed publications, Dr. Lu is a highly cited author and a Fellow of the American College of Medical Informatics (ACMI) and the International Academy of Health Sciences Informatics (IAHSI).

Fabio Rinaldi, PhD, Dalle Molle Institute for Artificial Intelligence - University of Italian Switzerland, Switzerland

Dr. Rinaldi leads NLP research at IDSIA (Dalle Molle Institute for Artificial Intelligence) in southern Switzerland and is a group leader at the Swiss Institute of Bioinformatics. Until 2019 he was a lecturer and senior researcher at the University of Zurich, as well as principal investigator on a number of research projects. Dr. Rinaldi has co-authored more than 100 scientific publications (including more than 40 journal papers) on topics such as ontologies, entity extraction, answer extraction, text classification, document and knowledge management, language resources, and terminology.

About the Collection

Genomics & Informatics invites participating authors to submit to our Collection on Ensuring Robustness in LLM-based Biomedical Annotation. 

The field of biomedical literature annotation plays an essential role in extracting pivotal knowledge from expansive biomedical texts while preserving the link between individual insights and their supporting evidence. Recent advances, notably the integration of Large Language Models (LLMs), have significantly transformed the landscape. However, these breakthroughs have also introduced challenges in reproducibility and reliable evaluation, raising concerns about the overall robustness of biomedical research.

The 9th edition of the Biomedical Linked Annotation Hackathon (BLAH), held in January 2025, centered on the theme “Ensuring Robustness in LLM-based Research.” This focus underscores the community’s commitment to harnessing cutting-edge technologies while addressing emerging challenges. At BLAH9, participants explored the intricate balance between leveraging LLMs and maintaining the stability of literature annotation, investigating novel approaches to enhance consistency and evaluation reliability.

This collection, titled ‘Ensuring Robustness in LLM-based Biomedical Annotation,’ invites contributions that showcase insights and findings from BLAH9. We welcome papers addressing methodological innovations, evaluation strategies, and practical solutions aimed at bolstering the reliability of LLM-based biomedical annotation systems.

Submission Guidelines

This Collection welcomes submissions of original research, review articles, methodologies, and brief reports. Should you wish to submit a different article type, please read our submission guidelines to confirm that the type is accepted by the journal.

Articles for this Collection should be submitted via our submission system, Snapp. Please select the appropriate Collection title, “Ensuring Robustness in LLM-based Biomedical Annotation,” under the “Details” tab during the submission stage.

Articles will undergo the journal’s standard peer-review process and are subject to all the journal’s standard policies. Articles will be added to the Collection as they are published.

The Editors have no competing interests with the submissions they handle through the peer-review process. Peer review of any submission for which an Editor has a competing interest is handled by another Editorial Board Member who has no competing interests.