LABKAG at LLMs4OL 2025 Tasks A and C: Context-Rich Prompting for Ontology Construction

Authors

X. Zhao, K. Drake, C. Watanabe, Y. Sasaki, H. Hando

DOI:

https://doi.org/10.52825/ocp.v6i.2891

Keywords:

Ontology Learning, Large Language Model, Prompt Engineering, In-Context Learning, Entity Extraction, Hierarchical Text Classification

Abstract

This paper presents LABKAG's submission to the LLMs4OL 2025 Challenge, focusing on ontology construction from domain-specific text using large language models (LLMs). Our core methodology prioritizes prompt design over fine-tuning or external knowledge, demonstrating its effectiveness in generating structured knowledge. For Task A (Text2Onto: extracting ontological terms and types), we utilized a locally deployed Qwen3-8B model, while for Task C (Taxonomy Discovery: identifying taxonomic hierarchies), we evaluated the performance of GPT-4o-mini and Gemini 2.5 Pro. Our experiments consistently show that incorporating in-domain examples and providing richer context within prompts significantly enhances performance. These results confirm that well-engineered prompts enable LLMs to effectively extract entities and their hierarchical relationships, offering a lightweight, adaptable, and generalizable approach to structured knowledge extraction.
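As a rough illustration of the approach the abstract describes, the Python sketch below assembles a Task A term-extraction prompt from a task instruction, in-domain few-shot examples, and the target document. It is not the authors' released code; the example documents, terms, and instruction wording are hypothetical placeholders.

# Illustrative sketch (not the authors' actual prompts) of context-rich,
# few-shot prompting for ontological term extraction (Task A).

FEW_SHOT_EXAMPLES = [
    {
        "text": "Aspirin is a nonsteroidal anti-inflammatory drug used to reduce fever.",
        "terms": ["aspirin", "nonsteroidal anti-inflammatory drug", "fever"],
    },
    {
        "text": "Penicillin is an antibiotic that targets bacterial cell walls.",
        "terms": ["penicillin", "antibiotic", "bacterial cell wall"],
    },
]

def build_term_extraction_prompt(document: str) -> str:
    """Combine an instruction, in-domain examples, and the target document
    into a single prompt string for an LLM such as Qwen3-8B."""
    parts = [
        "You are an ontology engineer. Extract the ontological terms",
        "mentioned in the document. Return the terms separated by semicolons.",
        "",
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Document: {ex['text']}")
        parts.append("Terms: " + "; ".join(ex["terms"]))
        parts.append("")
    parts.append(f"Document: {document}")
    parts.append("Terms:")
    return "\n".join(parts)

if __name__ == "__main__":
    # In the paper's setting this prompt would be sent to a locally deployed
    # Qwen3-8B model; here we only print the assembled prompt.
    print(build_term_extraction_prompt("Ibuprofen relieves pain and inflammation."))

In this style, adding more in-domain examples or richer surrounding context amounts to appending entries to FEW_SHOT_EXAMPLES or extending the instruction, which matches the abstract's finding that such context significantly improves extraction performance.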

Published

2025-10-01

How to Cite

Zhao, X., Drake, K., Watanabe, C., Sasaki, Y., & Hando, H. (2025). LABKAG at LLMs4OL 2025 Tasks A and C: Context-Rich Prompting for Ontology Construction. Open Conference Proceedings, 6. https://doi.org/10.52825/ocp.v6i.2891

Conference Proceedings Volume

6

Section

LLMs4OL 2025 Task Participant Long Papers