Are Your Models Resistant to Adversarial Attacks?
The past decade has been marked by incredible advances in machine learning, especially in the area of computer vision. Growing interest in applying machine learning to safety-critical systems (such as self-driving cars) has raised the question of how secure these models really are, especially when it comes to adversarial attacks.

An adversarial attack is one in which an attacker slightly modifies the input data in a way that deliberately causes the machine learning classifier to misclassify it, while the input still appears unmodified to human observers. This poses a huge risk: attackers could, for example, target autonomous vehicles by placing stickers on stop signs so that they are no longer recognized in traffic.

This talk gives an introduction to adversarial attacks, providing historical and theoretical background. Different methods of both generating adversarial examples and defending your model against them will be discussed, as well as the current state of research in this field.
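As a taste of the kind of attack the talk covers, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the best-known ways to generate adversarial examples. The specific model, loss, and epsilon value are illustrative assumptions, not material from the talk itself.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb `image` by epsilon in the direction of the loss gradient's sign."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss,
    # producing an image that looks unchanged to a human but can flip the
    # classifier's prediction.
    perturbed = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return perturbed.clamp(0.0, 1.0).detach()
```

Even with a tiny epsilon, such perturbations are often enough to change the predicted class, which is exactly why defenses against them matter for safety-critical systems.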
Marko is a team leader and deep learning consultant at Berge. He has worked in the field of machine learning for two years. His main areas of interest include target tracking and vision-based object detection.