Testing of Machine Learning Systems – The Importance of “Lagom” Surprising Inputs

Abstract

Testing machine learning (ML) components, such as deep neural nets, is not only about correctness and accuracy; we must ensure many quality properties. While research on how to perform these different forms of testing is still immature, it is growing tremendously. This talk will give an overview of recent results on testing ML models and discuss how it differs from testing standard software. I will exemplify with recent work on finding adequately ("lagom", Swedish for "just the right amount") surprising test inputs: inputs that are neither random nor noise-like, but somewhat realistic. While current research focuses on neural nets, I will also discuss whether and how this might generalize to other types of machine learning models.
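
As a rough illustration of the kind of "surprise" scoring the abstract alludes to, the sketch below implements a distance-based surprise measure in the spirit of surprise adequacy (Kim, Feldt & Yoo, ICSE 2019). It is not the talk's actual method: the activation arrays, the predicted-class lookup, and the "lagom" thresholds (lo, hi) are all illustrative assumptions.

```python
import numpy as np

def distance_surprise(test_act, train_acts, train_labels, pred_label):
    """Distance-based surprise of one test input's activation trace.

    Ratio of (a) the distance to the nearest training trace of the
    predicted class to (b) that neighbour's distance to the nearest
    trace of any other class. Near 0 = familiar; large = surprising.
    """
    same = train_acts[train_labels == pred_label]
    other = train_acts[train_labels != pred_label]
    # Nearest same-class neighbour of the test trace.
    d_same = np.linalg.norm(same - test_act, axis=1)
    nearest = same[np.argmin(d_same)]
    dist_a = d_same.min()
    # That neighbour's distance to the closest other-class trace.
    dist_b = np.linalg.norm(other - nearest, axis=1).min()
    return dist_a / dist_b

def select_lagom(inputs, scores, lo=0.5, hi=2.0):
    """Keep inputs that are surprising enough to be informative but
    not so surprising that they are effectively noise. The lo/hi
    band is an illustrative assumption, not a published setting."""
    lagom = (scores >= lo) & (scores <= hi)
    return inputs[lagom]
```

In this framing, inputs with very low scores add little new information (the model has seen many similar examples), while inputs with very high scores tend to be unrealistic or noise-like; the "lagom" band in between is where test inputs are most likely to be both realistic and revealing.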

Robert Feldt

Professor of Software Engineering @ Chalmers

Robert Feldt is a researcher and teacher with a passion for software and for augmenting humans with AI and artificial creativity/innovation. He is a professor at Chalmers University of Technology in Gothenburg and frequently consults for companies in both Europe and Asia. His interests span from human factors to hardcore automation, applied AI, and statistics; he works on software testing and quality as well as human-centered (behavioral) software engineering.