Attacking AI systems through hardware performance

Neural network training usually requires a lot of computing power and often runs on specialized acceleration hardware, such as GPUs and TPUs. GPUs, or Graphics Processing Units, are specialized electronic circuits designed to efficiently manipulate computer graphics and accelerate image processing, but their parallel structure also makes them efficient for algorithms that process large blocks of data in parallel. TPUs, or Tensor Processing Units, are AI application-specific integrated circuits used to accelerate machine learning workloads.

These devices are often used to train large-scale neural networks in data centers. They can consume a great deal of energy, yet their designers usually optimize for average-case performance rather than worst-case behavior, and it is precisely this gap that an attacker can exploit.

A group of researchers from the University of Cambridge, the University of Toronto, and the Vector Institute found that an attacker can jam AI systems with maliciously crafted inputs that increase processing time and power consumption. The researchers also showed that an attacker can degrade performance and drive up power consumption to the point where the hardware overheats and eventually shuts down.

Such inputs are known as sponge examples. They can dramatically increase the latency of AI systems and effectively mount a denial-of-service attack against the machine learning components of such systems.
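The paper crafts sponge examples with genetic algorithms (and, in white-box settings, gradient-based search). The following is a minimal black-box sketch of that idea, not the researchers' actual code: every name here (measure_latency, sponge_search, the toy_infer stand-in model) is an illustrative assumption, and a real attack would call the victim model and would ideally measure energy rather than wall-clock time.

    import random
    import string
    import time

    def measure_latency(infer, text, repeats=3):
        # Fitness of a candidate input: median wall-clock time of one
        # inference call. (The paper targets energy; latency is a cheap
        # black-box proxy for it.)
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            infer(text)
            times.append(time.perf_counter() - start)
        return sorted(times)[len(times) // 2]

    def mutate(text):
        # Replace one random character of the candidate input.
        i = random.randrange(len(text))
        return text[:i] + random.choice(string.printable) + text[i + 1:]

    def sponge_search(infer, length=50, population=20, generations=10):
        # Black-box genetic search: keep the slowest half of the pool
        # and refill it with mutated copies of the survivors.
        pool = ["".join(random.choices(string.printable, k=length))
                for _ in range(population)]
        for _ in range(generations):
            ranked = sorted(pool,
                            key=lambda t: measure_latency(infer, t),
                            reverse=True)
            survivors = ranked[:population // 2]
            pool = survivors + [mutate(random.choice(survivors))
                                for _ in range(population - len(survivors))]
        return max(pool, key=lambda t: measure_latency(infer, t))

    # Toy stand-in for a victim model (an assumption for this sketch):
    # its runtime grows with the number of uncommon characters, loosely
    # mimicking how rare tokens inflate the internal sequence length of
    # a translation model.
    def toy_infer(text):
        rare = sum(1 for c in text if c not in string.ascii_letters + " ")
        time.sleep(0.0001 * rare)

    if __name__ == "__main__":
        worst = sponge_search(toy_infer)
        print("Sponge candidate:", repr(worst))

The attacker needs no access to the model's internals; timing the responses of a public API is enough to steer the search toward ever more expensive inputs.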

As a proof of concept, the researchers showed how a Microsoft Azure translation service can be severely slowed down: a response that normally took about 1 millisecond took up to 6 seconds after the attack.

The researchers proposed some possible countermeasures against such attacks, for example shifting from average-case to worst-case cost analysis and cutting off requests that exceed a fixed budget (a minimal sketch follows below). Beyond that, their work reminds us that AI is not just about new algorithms and more data. As our society becomes increasingly dependent on AI technology, investment in research into the security of these systems is becoming ever more important.
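A minimal sketch of such a cut-off defense, assuming a picklable infer callable; bounded_infer and its time budget are illustrative assumptions of this sketch, not an API from the paper.

    import multiprocessing as mp

    def _worker(infer, text, queue):
        queue.put(infer(text))

    def bounded_infer(infer, text, budget_s=0.1):
        # Run one inference in a separate process and kill it if it
        # exceeds a fixed time budget, so a sponge input cannot occupy
        # the hardware indefinitely. `infer` must be a module-level
        # (picklable) function for this to work on all platforms.
        queue = mp.Queue()
        proc = mp.Process(target=_worker, args=(infer, text, queue))
        proc.start()
        proc.join(budget_s)
        if proc.is_alive():
            proc.terminate()
            proc.join()
            raise TimeoutError("request exceeded its compute budget")
        return queue.get(timeout=1.0)

In production one would bound energy or accelerator time rather than the wall-clock time of a whole process, but the principle is the same: commit to a worst-case cost per request and enforce it.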

Author: Matej Kovačič 

Links

Sponge Examples: Energy-Latency Attacks on Neural Networks, https://ieeexplore.ieee.org/document/9581273