LLMs4OL 2024 Overview: The 1st Large Language Models for Ontology Learning Challenge
DOI: https://doi.org/10.52825/ocp.v4i.2473
Keywords: LLMs4OL Challenge, Ontology Learning, Large Language Models
Abstract
This paper outlines LLMs4OL 2024, the first edition of the Large Language Models for Ontology Learning challenge. LLMs4OL is a community development initiative co-located with the 23rd International Semantic Web Conference (ISWC) that explores the potential of Large Language Models (LLMs) for Ontology Learning (OL), a process vital for enriching the web with structured knowledge and improving interoperability. By leveraging LLMs, the challenge aims to advance understanding and innovation in OL, in line with the Semantic Web's goal of a more intelligent and user-friendly web. In this paper, we give an overview of the 2024 edition of the LLMs4OL challenge and summarize the participants' contributions.
License
Copyright (c) 2024 Hamed Babaei Giglou, Jennifer D’Souza, Sören Auer
This work is licensed under a Creative Commons Attribution 4.0 International License.
Funding data
- Deutsche Forschungsgemeinschaft, grant number 460234259
- Bundesministerium für Bildung und Forschung, grant number 01IS22070