A hoverfly's vision system serves as the basis for this acoustic drone detection and tracking approach

Having proven that a biologically-inspired model based on a hoverfly's vision system can boost the signal-to-noise ratio of imagery, a team of researchers has now applied it to sound — as a means of detecting and tracking small drones earlier and more accurately.


27 Apr, 2022

A model based entirely on the vision system of a hoverfly, and suitable for use alongside convolutional neural network approaches, has been shown to boost audio-based drone detection ranges by up to 50 per cent.



The idea of taking something nature has spent millions of years perfecting and using it as the basis for man-made technology is far from new, and it continues to deliver breakthroughs in everything from computer vision to materials science.

The humble hoverfly, for instance, has had its vision system mapped out and used as the basis for improved visual detection algorithms — but a team of researchers from the University of South Australia, Flinders University, and Midspar Systems have taken it in an unusual new direction, in an effort to develop a system capable of offering detection and tracking of unauthorized drones in protected airspace.

Seeing sound

“Bio-vision processing has been shown to greatly increase the detection range of drones in both visual and infrared data,” explains Anthony Finn, professor of autonomous systems at the University of South Australia and corresponding author, of the inspiration behind the project. “However, we have now shown we can pick up clear and crisp acoustic signatures of drones, including very small and quiet ones, using an algorithm based on the hoverfly’s visual system.”

It may seem counter-intuitive to use the model of a hoverfly’s vision system to detect sound, given that the hoverfly itself uses it to see rather than hear. In science, though, it’s entirely possible — normal, even — to represent audio as an image, using spectrograms and correlograms to create a picture of sound over time. These pictures, the team proposed, could be used as input to a model based on the hoverfly’s photoreceptor system already proven to enhance the signal-to-noise ratio of complex, cluttered, and low-light imagery.
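As a loose illustration of turning sound into a picture (not the team's own pipeline), a magnitude spectrogram can be built with a short-time Fourier transform. The window length, hop size, and the synthetic 400 Hz "motor hum" below are all illustrative assumptions:

```python
import numpy as np

def spectrogram(signal, fs, win_len=256, hop=128):
    """Magnitude spectrogram: a picture of sound over time, built by
    sliding a windowed FFT along the signal."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop:i * hop + win_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1)).T   # (freq bins, time frames)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    return spec, freqs

# Synthetic one-second recording: a 400 Hz tone standing in for a drone's
# motor hum, buried in broadband noise (all values are illustrative).
fs = 8000
rng = np.random.default_rng(0)
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 400 * t) + 0.5 * rng.standard_normal(fs)
spec, freqs = spectrogram(audio, fs)
peak_bin = spec.mean(axis=1).argmax()
print(f"strongest frequency near {freqs[peak_bin]:.0f} Hz")
```

The tone stands out as a bright horizontal stripe in the resulting image, which is exactly the kind of structure an image-domain signal-to-noise enhancer can work on.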

The core concept is simple: Where the hoverfly model has already been proven to boost the signal-to-noise ratio in photographic imagery, it should be possible for it to do the same for images created from sound rather than light. The task at hand: Detect and track unauthorized drones, providing an early warning when they are likely to breach protected airspace and projecting their likely trajectory to allow for potential interception.

Image caption: Based entirely on the photoreceptor cells of a hoverfly's early vision system, this mathematical model has proven its worth in boosting visual signals — and now audio signals, too.

Unlike many applications of biologically-inspired technology in computing, the team’s work does not rely on convolutional neural networks (CNNs). “This means,” the researchers point out, “the two approaches are not mutually exclusive. In fact, there is reason to believe a CNN trained on outputs from the BIV [Biologically Inspired Vision] model could be smaller and more accurate than one trained on raw data, due to enhancement of the signals relative to the noise.”

Droning on

To prove the concept, the team carried out field tests at the Woomera Test Range in South Australia. They deployed an acoustic array of 49 microphones in a fractal pattern: seven sub-arrays of seven microphones each. Each individual sub-array has a central microphone plus two sets of three microphones at 1m and 5m radii (around 3.3 and 16.4 feet respectively).
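The geometry described above can be sketched in a few lines. Note that the 20m and 100m spacings between sub-array centres below are placeholders of my own, since the article does not give the overall footprint:

```python
import numpy as np

def mic_array(centre, r_inner=1.0, r_outer=5.0):
    """One seven-microphone sub-array: a central microphone plus two
    equilateral triangles of three microphones at the inner and outer radii."""
    centre = np.asarray(centre, dtype=float)
    mics = [centre]
    for r in (r_inner, r_outer):
        for k in range(3):
            angle = 2 * np.pi * k / 3
            mics.append(centre + r * np.array([np.cos(angle), np.sin(angle)]))
    return np.stack(mics)

# Hypothetical overall layout: the seven sub-array centres repeat the same
# pattern at a larger scale (the 20 m / 100 m radii are placeholder values).
centres = mic_array(np.zeros(2), r_inner=20.0, r_outer=100.0)
array_49 = np.concatenate([mic_array(c) for c in centres])
print(array_49.shape)  # (49, 2): 49 microphones, each with x/y coordinates
```

Repeating the same pattern at two scales is what makes the layout fractal, giving the array sensitivity across a wide span of wavelengths.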

The experiment: To use the biologically-inspired vision system to detect and track fixed- and rotary-wing drones — a Skywalker X-8 petrol-driven drone, a Mavic Air electric drone, and a Matrice 600 fitted with an acoustic payload system designed to mimic an ideal signal from a petrol engine under steady load — based on sound alone, while demonstrating improved detection range and accuracy over rival approaches.

Image caption: Testing was carried out in the field using an array of 49 microphones, signals from which were processed and fed through the biologically-inspired vision system.

Applying narrowband processing to the ideal-signal Matrice 600 and broadband processing to all three drones, the team found notable improvements from their approach. Compared to a traditional processing approach, the bio-inspired processing system extended the range at which the small- and medium-sized drones could be detected by emitted sound by between 30 and 50 per cent. At the same time, the accuracy of flight parameter and trajectory estimation was also boosted, providing a clearer picture of where the drones are and in what direction they are traveling.
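As a back-of-the-envelope check of what that range extension implies: under simple spherical spreading, received level falls by 20·log10 of range, so hearing a drone 30 to 50 per cent further away means detecting a signal a few decibels weaker at the array (this ignores atmospheric absorption and wind, which the real system must contend with):

```python
import math

# Spherical spreading: received level drops by 20*log10(range). Extending
# detection range by a factor g therefore means detecting signals that are
# 20*log10(g) dB weaker at the microphones.
for gain in (1.3, 1.5):
    print(f"{gain:.1f}x range -> {20 * math.log10(gain):.1f} dB weaker signal")
# 1.3x range -> 2.3 dB weaker signal
# 1.5x range -> 3.5 dB weaker signal
```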

“Unauthorized drones pose distinctive threats to airports, individuals, and military bases,” claims Finn. “It is therefore becoming ever-more critical for us to be able to detect specific locations of drones at long distances, using techniques that can pick up even the weakest signals. Our trials using the hoverfly-based algorithms show we can now do this.”

The team’s work has been published in The Journal of the Acoustical Society of America under open-access terms.


Jian Fang, Anthony Finn, Ron Wyber, and Russell S. A. Brinkworth: Acoustic detection of unmanned aerial vehicles using biologically inspired vision processing, The Journal of the Acoustical Society of America, Vol. 151, 968. DOI: 10.1121/10.0009350.

Steven D. Wiederman, Russell S. A. Brinkworth, and David C. O'Carroll: Performance of a Bio-Inspired Model for the Robust Detection of Moving Targets in High Dynamic Range Natural Scenes, Journal of Computational and Theoretical Nanoscience, Vol. 7, No. 5. DOI: 10.1166/jctn.2010.1438.


A freelance technology and science journalist and author of best-selling books on the Raspberry Pi, MicroPython, and the BBC micro:bit, Gareth is a passionate technologist with a love for both the cutting edge and more vintage topics.
