Adversarial Machine Learning (AML) is concerned with identifying potential security gaps in machine learning (ML) processes and with developing suitable countermeasures. Since ML methods, and in particular so-called deep learning, are largely responsible for the current successes in artificial intelligence (AI), developments in AML pose a potentially significant threat to many current AI systems. This explains the increasing activity (also) on the part of ML developers, who aim to use knowledge about potential attack mechanisms to protect their systems where possible.
In general, ML encompasses methods that allow computers to learn certain facts independently from sample data, e.g. whether a particular object is shown in an image. AML studies so-called adversarial examples: input data specially crafted by an attacker with the intention of causing errors in the corresponding ML method. For example, subtle changes to the pixels of an image can cause some ML methods to recognize a completely different object instead of the one actually depicted, even though a person cannot distinguish the altered image from the original.
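The idea behind such pixel-level attacks can be illustrated with a minimal sketch. The following example (a hypothetical illustration, not any specific attack described in the article) applies the well-known Fast Gradient Sign Method (FGSM) to a toy logistic-regression classifier with hand-picked weights: each input feature is nudged by a small amount epsilon in the direction that increases the classifier's loss, which is enough to flip the prediction even though the input barely changes.

```python
import numpy as np

# Hypothetical toy classifier: logistic regression with fixed weights.
# p(class 1 | x) = sigmoid(w . x + b)
w = np.array([2.0, -1.0])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Return the predicted class (0 or 1) for input x."""
    return int(sigmoid(w @ x + b) >= 0.5)

def fgsm(x, true_label, epsilon):
    """Fast Gradient Sign Method: perturb x by at most epsilon per
    feature in the direction that increases the loss for true_label."""
    p = sigmoid(w @ x + b)
    # Gradient of the cross-entropy loss w.r.t. x is (p - y) * w.
    grad = (p - true_label) * w
    return x + epsilon * np.sign(grad)

x = np.array([0.3, 0.1])                     # original input, class 1
x_adv = fgsm(x, true_label=1, epsilon=0.4)   # small bounded perturbation
print(predict(x), predict(x_adv))            # prediction flips: 1 0
```

For image classifiers the principle is the same, only in far more dimensions: a perturbation bounded by a small epsilon per pixel is imperceptible to a human but can shift the model's decision across a class boundary.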