Adversarial Machine Learning (AML) is concerned with identifying potential security gaps in machine learning (ML) methods and with developing suitable countermeasures against them. Since ML methods, and in particular so-called deep learning, are largely responsible for the current successes in the field of artificial intelligence (AI), developments in AML pose a potentially significant threat to many current AI systems. This explains the growing activity on the part of ML developers as well, who aim to use knowledge about potential attack mechanisms to protect their systems wherever possible.

In general, ML encompasses methods that allow computers to learn certain facts independently from sample data, e.g. whether a particular object is shown in an image. AML examines input data, so-called adversarial examples, that an attacker has specifically crafted with the intention of causing errors in the corresponding ML method. For example, subtle changes to the pixels of an image can cause some ML methods to recognize a completely different object instead of the one actually depicted, even though a person cannot distinguish the altered image from the original.
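One classic technique for constructing such adversarial examples, not named in the original text but widely used as the standard illustration, is the Fast Gradient Sign Method (FGSM): each pixel is shifted by a tiny amount in the direction that most increases the classifier's loss. The following is a minimal sketch, assuming a PyTorch image classifier; the names `model`, `image`, `label`, and the step size `epsilon` are illustrative placeholders, not part of any particular library.

```python
import torch


def fgsm_example(model, image, label, epsilon=0.01):
    """Generate an adversarial example via the Fast Gradient Sign Method.

    Assumes `image` is a batched tensor with pixel values in [0, 1] and
    `label` holds the true class indices. A perturbation of magnitude
    `epsilon` is added to each pixel in the direction that increases the
    classification loss, so the change stays visually imperceptible
    while the model's prediction may flip.
    """
    image = image.clone().detach().requires_grad_(True)

    # Compute the classification loss and its gradient w.r.t. the pixels.
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()

    # Shift every pixel by +/- epsilon along the sign of the gradient.
    adversarial = image + epsilon * image.grad.sign()

    # Clamp back to the valid pixel range before returning.
    return adversarial.clamp(0.0, 1.0).detach()
```

Because each pixel changes by at most `epsilon`, the perturbed image typically looks identical to a human observer, yet the model's output can change entirely, which is exactly the effect described above.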