Artificial intelligence understands the sound of healthy machines

Noises can reveal how well a machine is working. ETH researchers have developed a new machine learning method that automatically determines whether a machine is “healthy” or in need of maintenance.
Whether railway wheels or generators in a power station, whether pumps or valves – they all make noises. To trained ears, these noises even have a meaning: components, machines, plants or rolling stock sound different when they are functioning properly than when they have a defect or fault.

The sounds they make thus give professionals useful clues as to whether a machine is in good – or “healthy” – condition, or whether it will soon need maintenance or urgent repair. If you recognise in time that a machine is sounding faulty, you can pre-empt a costly defect and intervene before it breaks down. Consequently, the monitoring and investigation of sounds are gaining in importance for the operation and maintenance of technical infrastructure – especially since recording sounds, noises and acoustic signals with modern microphones is comparatively inexpensive.

In order to extract the required information from such sounds, proven methods of signal processing and data analysis have been established. One of them is the so-called wavelet transform. Mathematically, tones, sounds and noises can be represented as waves. With the wavelet transform, a signal is decomposed into a set of wavelets – wave-like oscillations that are localised in time. The underlying idea is to determine how much of each wavelet is contained in the signal. Although such methods are quite successful, they often require a great deal of experience and manual parameter tuning.
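
As a rough illustration of that idea – not taken from the paper – the following Python sketch uses the PyWavelets library to decompose a synthetic signal into wavelet coefficients. The sampling rate, the “db4” wavelet, the decomposition level and the fabricated “fault burst” are arbitrary choices for demonstration only.

```python
# Minimal sketch of a classical (fixed, non-learned) wavelet decomposition
# using PyWavelets. All signal parameters below are illustrative assumptions.
import numpy as np
import pywt

# Synthetic "machine sound": a steady hum plus a short high-frequency burst.
fs = 8000                                    # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t)          # 50 Hz hum
signal[4000:4100] += 0.5 * np.sin(2 * np.pi * 2000 * t[4000:4100])  # brief burst

# Discrete wavelet decomposition: the signal is split into approximation
# (low-frequency) and detail (high-frequency) coefficients at several scales.
coeffs = pywt.wavedec(signal, wavelet="db4", level=4)

# "How much of a wavelet is contained in the signal": the energy per level
# shows which time scales carry most of the signal's content.
for i, c in enumerate(coeffs):
    label = "approximation" if i == 0 else f"detail level {len(coeffs) - i}"
    print(f"{label:>16}: energy = {np.sum(c ** 2):.2f}")
```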

Detect defects at an early stage

ETH researchers have developed a machine learning method that can fully learn the wavelet transform. The new approach is particularly suitable for high-frequency signals such as sound and vibration signals. It makes it possible to automatically recognise whether a machine sounds “healthy” or not.
The approach, developed by postdoctoral researchers Gabriel Michau and Gaëtan Frusque together with Olga Fink, Professor of Intelligent Maintenance Systems, and now published in the Proceedings of the National Academy of Sciences (PNAS), combines signal processing and machine learning in a new way. With it, an intelligent algorithm can automatically perform acoustic monitoring and sound analysis of machines. And because of its close relationship to the proven wavelet transform, the results of the machine learning approach remain readily interpretable.
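
To make the general idea more concrete, here is a loose Python/PyTorch sketch of what “learning” a wavelet transform can look like in principle: the fixed low- and high-pass wavelet filters are replaced by trainable 1-D convolutions that downsample the signal level by level. The kernel length, number of levels and all other details are illustrative assumptions and do not reproduce the architecture published in PNAS.

```python
# Loose sketch: a wavelet-like filter bank whose filters are learned from data.
# Kernel size, depth and initialisation are assumptions for illustration only.
import torch
import torch.nn as nn

class LearnableWaveletLevel(nn.Module):
    """One decomposition level: learnable low- and high-pass filters with
    stride 2, mimicking the downsampling of a discrete wavelet transform."""
    def __init__(self, kernel_size: int = 8):
        super().__init__()
        pad = kernel_size // 2
        self.low = nn.Conv1d(1, 1, kernel_size, stride=2, padding=pad, bias=False)
        self.high = nn.Conv1d(1, 1, kernel_size, stride=2, padding=pad, bias=False)

    def forward(self, x):
        return self.low(x), self.high(x)     # approximation, detail

class LearnableWaveletTransform(nn.Module):
    """Cascade of levels: the approximation is decomposed again, as in wavedec."""
    def __init__(self, levels: int = 3):
        super().__init__()
        self.levels = nn.ModuleList([LearnableWaveletLevel() for _ in range(levels)])

    def forward(self, x):
        details = []
        for level in self.levels:
            x, d = level(x)
            details.append(d)
        return x, details                    # final approximation + per-level details

# Example: decompose a batch of one-second, 8 kHz signals.
signals = torch.randn(4, 1, 8000)            # (batch, channels, samples)
approx, details = LearnableWaveletTransform()(signals)
print(approx.shape, [d.shape for d in details])
```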

The researchers’ goal is that, in the near future, professionals who operate machines in industry will be able to use a tool that automatically monitors the equipment and warns them in good time – without requiring special prior knowledge – when conspicuous, abnormal or “unhealthy” noises occur. The new machine learning procedure can be applied not only to different types of machines, but also to different types of signals, noises or vibrations. For example, it also recognises sound frequencies that humans cannot naturally hear, such as high-frequency signals or ultrasound.

However, the learning procedure does not lump the different types of signals together. Rather, the researchers designed it to detect the subtle differences between the various sounds and produce machine-specific findings. This is not trivial, as the algorithm has no examples of defective signals to learn from.

Focused on “healthy” sounds

In real industrial applications, it is usually not possible to collect many meaningful noise examples of defective machines, because defects occur only rarely. It is therefore also not easy to teach an algorithm what fault noises sound like and how they differ from healthy ones. The ETH researchers instead trained their algorithm to learn how a machine normally sounds when it is running properly, and to recognise when a noise deviates from that normal case.
In doing so, they used a variety of noise data from pumps, fans, valves and slide rails and chose an “unsupervised learning” approach: rather than “telling” the algorithm what to learn, they let it discover the relevant patterns on its own. In this way, Olga Fink and her team enabled the learning procedure to recognise related sounds within a certain type of machine and to distinguish between certain types of faults on that basis.
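
The following Python/PyTorch sketch illustrates only this general “train on healthy data, flag deviations” principle, using a simple reconstruction-error autoencoder. The feature representation, network size and threshold rule are placeholder assumptions and not the procedure used by the ETH team.

```python
# Generic sketch of anomaly detection trained only on healthy examples:
# an autoencoder learns to reconstruct healthy feature vectors; a large
# reconstruction error at monitoring time flags a suspicious sound.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(64, 16), nn.ReLU(),            # compress 64 (assumed) spectral features
    nn.Linear(16, 64),                       # reconstruct them
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

healthy_features = torch.randn(512, 64)      # placeholder for healthy-sound features

# Training: only healthy examples are ever shown to the model.
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(autoencoder(healthy_features), healthy_features)
    loss.backward()
    optimizer.step()

# Calibration: anything reconstructed much worse than the healthy data is flagged.
with torch.no_grad():
    errors = ((autoencoder(healthy_features) - healthy_features) ** 2).mean(dim=1)
    threshold = errors.mean() + 3 * errors.std()   # simple "3 sigma" rule (assumption)

def is_abnormal(features: torch.Tensor) -> bool:
    """Flag a new recording's feature vector if its reconstruction error is high."""
    with torch.no_grad():
        err = ((autoencoder(features) - features) ** 2).mean()
    return bool(err > threshold)
```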

Even if the researchers had a dataset of defect noise data at their disposal, and were thus able to train their algorithms with both healthy and defective samples, they could never be sure that such a labelled dataset actually contained all healthy and defective variants. Their sample might have been incomplete and their learning procedure might have missed important error sounds. Moreover, the same type of machine can produce very different sounds depending on the intensity of use or the local climate, so that sometimes even technically almost identical defects sound very different depending on the machine.

Learning bird calls

The algorithm is by no means limited to the sounds of machines. The researchers also tested it on distinguishing between different bird calls, using recordings made by bird lovers. The algorithm had to learn to tell apart the different calls of a given bird species – in such a way that the type of microphone used did not matter: “Machine learning should recognise the bird calls, not evaluate the recording technique,” says Gabriel Michau.

This learning effect is also important for technical infrastructure: with machines, too, the algorithms must filter out mere background noise and the influence of the recording equipment in order to capture the relevant sounds. For industrial applications, it is important that machine learning can detect the subtle differences between sounds. To be useful and trustworthy to professionals in the field, it must neither sound the alarm too often nor miss relevant sounds.

“With our research, we were able to show that our machine learning approach detects the anomalies among the sounds, and that it is flexible enough to be applied to different signals and different tasks,” concludes Olga Fink. An important feature of her learning method is that it can also monitor how the sounds evolve, so that it can detect clues to possible faults from the way the sounds change over time. This opens up several interesting application possibilities.

Source: https://www.maschinenmarkt.vogel.de/kuenstliche-intelligenz-versteht-den-klang-gesunder-maschinen-a-1101146/

References: Michau G, Frusque G, Fink O. Fully learnable deep wavelet transform for unsupervised monitoring of high-frequency time series. Proceedings of the National Academy of Sciences (PNAS), Feb 2022, 119(8): e2106598119. https://www.pnas.org/doi/10.1073/pnas.2106598119