The Ghost at LLMs4OL 2024 Task A: Prompt-Tuning-Based Large Language Models for Term Typing

Authors

T. Phuttaamart, N. Kertkeidkachorn, and A. Trongratsameethong

DOI:

https://doi.org/10.52825/ocp.v4i.2486

Keywords:

Large Language Models, Ontology Learning, Prompt Tuning

Abstract

The LLMs4OL Challenge @ ISWC 2024 explores the intersection of Large Language Models (LLMs) and Ontology Learning (OL) through three tasks: 1) Term Typing, 2) Taxonomy Discovery, and 3) Non-Taxonomic Relation Extraction. In this paper, we present our system design for the Term Typing task. Our approach applies prompt tuning, in which task-specific soft prompts are learned automatically, to improve term-typing accuracy and efficiency. We conducted experiments on several datasets, including WordNet, UMLS, GeoNames, NCI, MEDCIN, and SNOMEDCT_US. Our approach outperformed the baselines on most datasets; the exception was GeoNames, where the complexity and specificity of the geographical domain led to substantially lower scores. We also report our overall results in the challenge, which highlight the promise of the approach while indicating areas for further improvement.
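
Prompt tuning, as described in the abstract, freezes the base language model and learns only a small set of continuous "soft prompt" (virtual token) embeddings that are prepended to every input. As a minimal sketch of this setup for term typing, the snippet below uses the Hugging Face PEFT library; the backbone model (flan-t5-base), the prompt length, the initialization text, and the "term in, type out" format are illustrative assumptions rather than the authors' reported configuration.

```python
# Minimal sketch of prompt tuning for term typing with Hugging Face PEFT.
# Backbone model and all hyperparameters are illustrative assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_name = "google/flan-t5-base"  # assumed backbone, not the paper's
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Soft prompts: 20 trainable virtual-token embeddings are prepended to the
# input; the base model's weights stay frozen during training.
peft_config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="What is the type of the given term?",
    num_virtual_tokens=20,
    tokenizer_name_or_path=model_name,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the prompt embeddings are trainable

# Term typing framed as text-to-text: the model generates the type label.
inputs = tokenizer("term: aspirin", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Only the virtual-token embeddings (here, 20 times the model's hidden size) receive gradients, which is what makes the method parameter-efficient compared with full fine-tuning.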


Published

2024-10-02

How to Cite

Phuttaamart, T., Kertkeidkachorn, N., & Trongratsameethong, A. (2024). The Ghost at LLMs4OL 2024 Task A: Prompt-Tuning-Based Large Language Models for Term Typing. Open Conference Proceedings, 4, 85–91. https://doi.org/10.52825/ocp.v4i.2486

Conference Proceedings Volume

Open Conference Proceedings, Vol. 4 (2024)

Section

LLMs4OL 2024 Task Participant Papers
