Explainability and Human Supervision in AI Systems in Education

Regulatory Framework

Authors

  • André Bonne, Olymp.Services Bonne GmbH
  • Uwe Schaffranke, Landkreis Oder-Spree

DOI:

https://doi.org/10.52825/th-wildau-ensp.v2i.2942

Keywords:

AI, Vocational Training, EU AI Act, Ethical Aspects, Didactic Innovation, Bias in Training Data, Didactic Teaching Concept, AI Language Models, Trust Authority, Human AI Supervision

Abstract

The use of artificial intelligence (AI) in vocational education and training offers considerable potential, but is associated with ethical and regulatory challenges. This article analyses the ethical and regulatory implications of AI in vocational education and training and presents strategies for minimising potential risks. The implementation of an AI system in a vocational training centre is illustrated using the practical example of an electric motor for motor vehicles. Concrete solutions are presented to address data biases, data protection violations and other ethical concerns. The careful selection and preparation of training data and the use of explainable AI models promote the development of fair, transparent and reliable AI systems in education. The article emphasises the need for compliance with ethical principles in the development and implementation of AI systems. In the context of the EU AI Act, particular attention is paid to the category of high-risk AI systems, which may include AI systems used in vocational education. Despite the regulatory challenges, stakeholders are encouraged to use AI to create educational content.
The implemented solution provides for human oversight with appropriate IT systems and guidelines that act as middleware and a trusted authority.


References

KI-Observatorium. "Gute KI braucht hochwertige Daten – ein Modell und Arbeitshilfen zur Bewertung und Verbesserung von KI-Datenqualität." ki-observatorium.de, n.d., https://www.ki-observatorium.de/rubriken/wissen/gute-ki-braucht-hochwertige-daten-ein-modell-und-arbeitshilfen-zur-bewertung-und-verbesserung-von-ki-datenqualitaet.

"Introducing Meta Llama 3: The Most Capable Openly Available LLM to Date." Meta, https://ai.meta.com/blog/meta-llama-3/.

"Introducing Phi-4: Microsoft's Newest Small Language Model Specializing in Complex Reasoning." techcommunity.microsoft.com, https://techcommunity.microsoft.com/blog/aiplatformblog/introducing-phi-4-microsoft%E2%80%99s-newest-small-language-model-specializing-in-comple/4357090.

"deepseek-ai/DeepSeek-R1-Distill-Llama-70B." deepinfra.com, https://deepinfra.com/deepseek-ai/DeepSeek-R1-Distill-Llama-70B.

"Choose a Method for Building Generative AI Models." Oracle, https://docs.oracle.com/en-us/iaas/Content/generative-ai/choose-method.htm.

"Artikel 6: Einstufungsvorschriften für Hochrisiko-KI-Systeme." AI Act Law EU, https://ai-act-law.eu/de/artikel/6/.

"Anhang 3: Hochrisiko-KI-Systeme gemäß Artikel 6 Absatz 2." AI Act Law EU, https://ai-act-law.eu/de/anhang/3/.

"Artikel 50: Transparenzpflichten für Anbieter und Betreiber bestimmter KI-Systeme." AI Act Law EU, https://ai-act-law.eu/de/artikel/50/.

Published

2025-09-12

How to Cite

Bonne, A., & Schaffranke, U. (2025). Explainability and Human Supervision in AI Systems in Education: Regulatory Framework. TH Wildau Engineering and Natural Sciences Proceedings, 2. https://doi.org/10.52825/th-wildau-ensp.v2i.2942

Section

Contributions to the Wildau Conference on Artificial Intelligence 2025