Threat Modeling in Machine Learning




Keywords: Machine Learning, Security, Threat Modeling


Owing to increasing globalization, technological progress, and the growing degree of networking, the number of threats is rising constantly, while security requirements often play only a minor role. The new version of the German IT Security Act has extended its scope, and affected companies must take action to increase IT security. Threat modeling is a structured process that is already established in secure hardware and software development. In machine learning, however, both the nature of the attacks and the lifecycle differ from traditional software development. Starting from the structure of machine learning systems, this article offers a top-down approach to a system-oriented perspective on threat modeling.
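As an illustration of such a system-oriented view, the skeleton below maps ML lifecycle stages to attack classes commonly discussed in the ML security literature (data poisoning, evasion, model inversion, membership inference, model stealing). This is a minimal hypothetical sketch, not the article's actual method; stage names and the mapping are illustrative assumptions.

```python
# Hypothetical sketch: per-stage threat enumeration for an ML pipeline.
# Stage names and threat assignments are illustrative, not a standard taxonomy.

ML_THREATS = {
    "data_collection": ["data poisoning"],
    "training": ["data poisoning"],
    "inference": [
        "evasion / adversarial examples",
        "model inversion",
        "membership inference",
        "model stealing",
    ],
}


def threats_for(stage: str) -> list[str]:
    """Return the threat classes associated with a given lifecycle stage.

    Unknown stages yield an empty list rather than raising, so the helper
    can be used while iterating over a partially modeled system.
    """
    return ML_THREATS.get(stage, [])
```

Used during a threat-modeling workshop, such a table lets each data-flow element of the ML system be checked against the threat classes relevant to its lifecycle stage.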







How to Cite

Raddatz, M. (2022). Threat Modeling in Machine Learning. Open Conference Proceedings, 2, 173–179.

Conference Proceedings Volume


Contributions to the 22nd Nachwuchswissenschaftler*innenkonferenz (NWK)