beeps who?: A new software tool analyzes rodent sounds based on the sources of the calls and their intended targets.
People speak; rats squeak. But the rodents also communicate through inaudible ultrasonic vocalizations, which provide insight into their social behavior. A new software tool called TrackUSF could help researchers collect ultrasound data by recording the vocalizations and detecting differences between them, according to a new study.
Whether ultrasonic vocalizations are solid analogs for humans’ communication skills is up for debate, but the tool could help investigate animal models of autism; such vocalizations are altered in animals with some autism-related gene mutations, previous research has shown.
TrackUSF uses an approach commonly used by human speech recognition software: identifying and comparing signatures, said lead researcher Shlomo Wagner, an associate professor of neurobiology and social behavior at the University of Haifa in Israel.
“The idea of TrackUSF is to get the signature of [ultrasonic vocalizations] emitted by different groups of animals, such as [autism] models and their wild-type littermates, to compare them and identify the differences between them,” he says.
Wagner and his team classified the signatures using a measure called mel-frequency cepstral coefficients (MFCCs). The coefficients are intended to represent what a listener actually hears: they describe a sound's energy across frequency bands spaced to mimic auditory perception.
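The study does not publish TrackUSF's internal code here, but the general MFCC recipe is standard in speech processing: window a frame of audio, take its power spectrum, pool it through triangular mel-spaced filters, take the log, and decorrelate with a discrete cosine transform. The sketch below is a minimal, from-scratch illustration of that recipe (all parameter values, such as the 250 kHz sampling rate typical of ultrasonic microphones and the synthetic 50 kHz test tone, are illustrative assumptions, not details from the paper):

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale mapping: compresses high frequencies,
    # mimicking how listeners perceive pitch spacing.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters with centers evenly spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / (right - center)
    return fb

def mfcc_frame(frame, sr, n_mels=20, n_coeffs=12):
    # 1) Windowed power spectrum of one short frame of audio.
    n_fft = len(frame)
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    # 2) Pool into mel bands and take the log (perceptual loudness).
    log_energies = np.log(mel_filterbank(sr, n_fft, n_mels) @ spectrum + 1e-10)
    # 3) DCT-II across bands: the first few coefficients summarize
    #    the spectral envelope and serve as the frame's "signature".
    n = np.arange(n_mels)
    k = np.arange(n_coeffs)
    basis = np.cos(np.pi / n_mels * (n[None, :] + 0.5) * k[:, None])
    return basis @ log_energies

# Illustrative use: one frame of a synthetic 50 kHz tone, a frequency
# in the range of rat ultrasonic calls, sampled at 250 kHz.
sr = 250_000
t = np.arange(512) / sr
frame = np.sin(2 * np.pi * 50_000 * t)
coeffs = mfcc_frame(frame, sr)
```

Each recorded call then reduces to a short vector of coefficients, which is what makes large-scale comparison between animal groups tractable.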
The benefits of TrackUSF, Wagner says, are that it can be used for any condition, on many animals at once, and users generally don’t need explicit training. “In addition, unlike previous tools, TrackUSF can be used for animals other than rats and mice, such as bats.”
Whereas some vocalization analysis tools require experts to pre-train them on reference or baseline audio samples, TrackUSF is designed to identify changes in vocalizations between groups without any prior training, Wagner says.

vocal clusters: Some types of rat sounds showed clear patterns based on the sources and the intended recipients, while others were less consistent.
To that end, he and his team tested the tool on wild-type rats and rats that lacked one or both copies of the autism-linked gene SHANK3. During 109 recording sessions, TrackUSF classified the animals’ calls into different clusters, grouped by both the animal making the vocalization and the intended listener. For example, wild-type and SHANK3 rats tended to make similar sounds when communicating with their cage mates, but not when communicating with unfamiliar rats.
Adapting human vocal analysis techniques for use with rodent squeaks introduces problems when it comes to processing the sounds, says Ryosuke Tachibana, an associate professor of behavioral neuroscience at the University of Tokyo in Japan, who was not involved in the work. “MFCC emphasizes frequency resonance patterns in the vocal tract using harmonic signal structures,” making it a good unit of measure for analyzing the human voice, he says. But for mouse vocalizations, which usually lack harmonics, MFCC may not be the best benchmark.
TrackUSF is not designed to characterize ultrasonic vocalizations in detail, Wagner notes. But even high-level screening may be useful for researchers looking for a change in baseline characteristics in rats.
“For example, a pharmaceutical company could use it to screen its drug library and discover which of the drugs restore typical vocalizations in SHANK3 rats, pointing to potential drug treatments for Phelan-McDermid syndrome,” he says. “All they have to do is put ultrasonic microphones in the animal cages and use TrackUSF for analysis.”
Cite this article: https://doi.org/10.53053/QROX5528