SEMA at LLMs4OL 2025 Task C: Prompt-Decoupled Fine-Tuning on MatOnto with LLaMA

Authors

M. Canal, J. I. Abreu, Y. Gutiérrez

DOI:

https://doi.org/10.52825/ocp.v6i.2901

Keywords:

Ontology Relation Extraction, Prompt Generalization, LoRA Fine-Tuning

Abstract

This paper presents our submission to Task C (Relation Extraction) of the LLMs4OL 2025 Challenge, which investigates the ability of Large Language Models (LLMs) to identify semantic and taxonomic relations between ontology types. Focusing on the MatOnto subtask—selected for its manageable size—we explore the performance of open-source models under resource constraints. We fine-tune LLaMA 3.1–8B using LoRA adapters and evaluate various strategies including contrastive negative sampling, prompt inversion, and system prompt variation. Inspired by recent findings on prompt sensitivity, we adopt a cross-template setup where the model is trained with one prompt format and tested with another semantically equivalent variant. Our experiments suggest that prompt-decoupling can improve generalization and mitigate overfitting to specific phrasings. While our results are modest, they offer insights into the challenges of adapting LLMs to structured relation extraction tasks and highlight practical considerations for tuning under constrained resources.
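The cross-template setup described above can be illustrated with a minimal sketch: the model is trained on prompts rendered from one template and evaluated on a semantically equivalent but differently worded variant, with corrupted triples providing contrastive negatives. The template wording, function names, and sampling scheme below are illustrative assumptions, not the authors' exact prompts or pipeline.

```python
# Sketch of prompt-decoupled relation-extraction data preparation.
# Assumed names: TRAIN_TEMPLATE, EVAL_TEMPLATE, build_example,
# contrastive_negatives -- all hypothetical, for illustration only.
import random

# Two semantically equivalent prompt templates: one used for fine-tuning,
# the other held out for evaluation (the "prompt-decoupling" idea).
TRAIN_TEMPLATE = (
    "Given the ontology types '{head}' and '{tail}', does the relation "
    "'{relation}' hold between them? Answer yes or no."
)
EVAL_TEMPLATE = (
    "Question: is '{head}' linked to '{tail}' by the relation "
    "'{relation}'? Reply with yes or no."
)

def build_example(head: str, relation: str, tail: str, template: str) -> str:
    """Render one relation-classification prompt from a (head, relation, tail) triple."""
    return template.format(head=head, relation=relation, tail=tail)

def contrastive_negatives(triple, tail_pool, k=2, seed=0):
    """Corrupt a positive triple by swapping in wrong tail types --
    a simple form of contrastive negative sampling."""
    rng = random.Random(seed)
    head, relation, tail = triple
    candidates = [t for t in tail_pool if t != tail]
    return [(head, relation, rng.choice(candidates)) for _ in range(k)]
```

In this setup the LoRA adapters only ever see prompts rendered with `TRAIN_TEMPLATE`, so evaluating with `EVAL_TEMPLATE` probes whether the model learned the relation itself rather than the surface phrasing.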


References

H. Babaei Giglou, J. D’Souza, and S. Auer, "LLMs4OL: Large language models for ontology learning", in International Semantic Web Conference, 2023, pp. 408–427.

E. J. Hu, Y. Shen, P. Wallis et al., "LoRA: Low-rank adaptation of large language models", in ICLR, 2022.

Y. Peng, Y. Mou, B. Zhu, S. Sowe, and S. Decker, "RWTH-DBIS at LLMs4OL 2024 Tasks A and B: Knowledge-Enhanced Domain-Specific Continual Learning and Prompt-Tuning of Large Language Models for Ontology Learning", in Open Conference Proceedings, vol. 4, 2024, pp. 49–63.

S. M. H. Hashemi, M. K. Manesh, and M. Shamsfard, "SKH-NLP at LLMs4OL 2024 Task B: Taxonomy discovery in ontologies using BERT and LLaMA 3", in Open Conference Proceedings, vol. 4, 2024, pp. 103–111.

K. Lyu, H. Zhao, X. Gu, D. Yu, A. Goyal, and S. Arora, "Keeping llms aligned after fine-tuning: The crucial role of prompt templates", arXiv preprint arXiv:2402.18540, 2024.

M. Sclar, Y. Choi, Y. Tsvetkov, and A. Suhr, "Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting", arXiv preprint arXiv:2310.11324, 2023.


Published

2025-10-01

How to Cite

Canal, M., Abreu, J. I., & Gutiérrez, Y. (2025). SEMA at LLMs4OL 2025 Task C: Prompt-Decoupled Fine-Tuning on MatOnto with LLaMA. Open Conference Proceedings, 6. https://doi.org/10.52825/ocp.v6i.2901

Conference Proceedings Volume

Open Conference Proceedings, Vol. 6 (2025)

Section

LLMs4OL 2025 Task Participant Short Papers