RWTH-DBIS at LLMs4OL 2024 Tasks A and B

Knowledge-Enhanced Domain-Specific Continual Learning and Prompt-Tuning of Large Language Models for Ontology Learning

Authors

Peng, Y., Mou, Y., Zhu, B., Sowe, S., & Decker, S.

DOI:

https://doi.org/10.52825/ocp.v4i.2491

Keywords:

Ontology Learning, Large Language Models, Domain-specific Continual Learning, Knowledge-enhanced Prompt-tuning, Hierarchical Text Classification

Abstract

The increasing capabilities of Large Language Models (LLMs) have opened new opportunities for enhancing Ontology Learning (OL), a process crucial for structuring domain knowledge in a machine-readable format. This paper reports on the participation of the RWTH-DBIS team in the LLMs4OL Challenge at ISWC 2024, addressing two primary tasks: term typing and taxonomy discovery. We used LLaMA-3-8B and GPT-3.5-Turbo to assess the performance gap between open-source and commercial LLMs. For open-source LLMs, our methods included domain-specific continual training, task-specific fine-tuning, and knowledge-enhanced prompt-tuning. These approaches were evaluated on the challenge's benchmark datasets, including GeoNames, UMLS, Schema.org, and the Gene Ontology (GO). The results indicate that domain-specific continual training followed by task-specific fine-tuning improves the performance of open-source LLMs on these tasks, although a performance gap to commercial LLMs remains. The prompting strategies we developed also demonstrated substantial utility. This research highlights the potential of LLMs to automate and improve the OL process, offering insights into effective methodologies for future developments in this field.
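To give a concrete, hedged illustration of the knowledge-enhanced prompt-tuning mentioned above, the short Python sketch below assembles a term-typing (Task A) prompt that combines a term, optional retrieved background knowledge (e.g., a short definition fetched from an external source such as Wikipedia), and a list of candidate types. The function name, prompt wording, and example inputs are illustrative assumptions, not the authors' actual implementation.

from __future__ import annotations

# Illustrative sketch (not the authors' code): building a knowledge-enhanced
# prompt for term typing (LLMs4OL Task A). `retrieved_context` stands in for
# external knowledge, e.g. a definition retrieved from Wikipedia.

def build_term_typing_prompt(term: str,
                             candidate_types: list[str],
                             retrieved_context: str | None = None) -> str:
    """Return a prompt asking an LLM to choose one type for `term`."""
    lines = [
        "You are an ontology expert performing term typing.",
        f"Term: {term}",
    ]
    if retrieved_context:
        # Knowledge enhancement: prepend retrieved background text about the term.
        lines.append(f"Background knowledge: {retrieved_context}")
    lines.append("Candidate types: " + ", ".join(candidate_types))
    lines.append("Answer with exactly one type from the candidate list.")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical UMLS-style example; the inputs are made up for illustration.
    print(build_term_typing_prompt(
        term="aspirin",
        candidate_types=["Pharmacologic Substance", "Disease or Syndrome", "Gene or Genome"],
        retrieved_context="Aspirin is a medication used to reduce pain, fever, and inflammation.",
    ))

The resulting prompt string would then be passed to a fine-tuned LLaMA-3-8B or to GPT-3.5-Turbo; prompts for taxonomy discovery (Task B) can be assembled analogously for candidate parent-child type pairs.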

Published

2024-10-02

How to Cite

Peng, Y., Mou, Y., Zhu, B., Sowe, S., & Decker, S. (2024). RWTH-DBIS at LLMs4OL 2024 Tasks A and B: Knowledge-Enhanced Domain-Specific Continual Learning and Prompt-Tuning of Large Language Models for Ontology Learning. Open Conference Proceedings, 4, 49–63. https://doi.org/10.52825/ocp.v4i.2491

Section

LLMs4OL 2024 Task Participant Papers