Cutaneous Senses for Detection and Localization of Environmental Sound Sources: A Review and Tutorial

1997 ◽  
Vol 26 (4) ◽  
pp. 195-206 ◽  
Author(s):  
Erik Borg


2021 ◽  
Author(s):  
Matthew Kamrath ◽  
Vladimir Ostashev ◽  
D. Wilson ◽  
Michael White ◽  
Carl Hart ◽  
...  

Sound propagation along vertical and slanted paths through the near-ground atmosphere impacts detection and localization of low-altitude sound sources, such as small unmanned aerial vehicles, from ground-based microphone arrays. This article experimentally investigates the amplitude and phase fluctuations of acoustic signals propagating along such paths. The experiment involved nine microphones on three horizontal booms mounted at different heights to a 135-m meteorological tower at the National Wind Technology Center (Boulder, CO). A ground-based loudspeaker was placed at the base of the tower for vertical propagation or 56 m from the base of the tower for slanted propagation. Phasor scatterplots qualitatively characterize the amplitude and phase fluctuations of the received signals during different meteorological regimes. The measurements are also compared to a theory describing the log-amplitude and phase variances based on the spectrum of shear- and buoyancy-driven turbulence near the ground. Generally, the theory correctly predicts the measured log-amplitude variances, which are affected primarily by small-scale, isotropic turbulent eddies. However, the theory overpredicts the measured phase variances, which are affected primarily by large-scale, anisotropic, buoyantly driven eddies. Ground blocking of these large eddies likely explains the overprediction.
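The log-amplitude and phase variances the abstract compares against theory can be estimated directly from demodulated complex pressure samples (the "phasors" behind the scatterplots). The sketch below is illustrative only and synthesizes its own phasors; in the experiment these would come from the recorded tone transmissions.

```python
import numpy as np

# Synthesize hypothetical phasors A * exp(i*phi) for one microphone.
# The fluctuation magnitudes here are made up for illustration.
rng = np.random.default_rng(0)
amplitude = np.exp(rng.normal(0.0, 0.1, size=1000))  # log-normal amplitude fading
phase = rng.normal(0.0, 0.3, size=1000)              # phase fluctuations (radians)
phasors = amplitude * np.exp(1j * phase)

# Log-amplitude chi = ln A; phase from the argument of each phasor.
chi = np.log(np.abs(phasors))
phi = np.angle(phasors)

log_amp_variance = np.var(chi)  # per the abstract, driven mainly by small-scale eddies
phase_variance = np.var(phi)    # driven mainly by large-scale, buoyant eddies
```

Plotting the real against the imaginary part of `phasors` reproduces the kind of phasor scatterplot the study uses to characterize meteorological regimes qualitatively.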


1997 ◽  
Vol 6 (4) ◽  
pp. 482-503 ◽  
Author(s):  
Jeff Pressing

The great variety of functions possible for sound in virtual environments is surveyed in relation to the traditions that primarily inform them. These traditions are examined, classifying sound into the three categories of artistic expression, information transfer, and environmental sound. The potentials of and relations between sonification, algorithmic composition, musicogenic and sonigenic displays, virtual musical instruments and virtual sound sources are examined, as well as the practical technical limitations that govern performance control of MIDI and real-time DSP sound synthesis in coordination with visual display. The importance of music-theoretic and psychological research is emphasized. The issues and developed categorizations are then applied to a case study: the examination of a specific virtual environment performance by a team of workers in Australia in which the author worked as composer/performer/programmer.


2020 ◽  
Vol 16 (1) ◽  
pp. 77-83
Author(s):  
Limsoo Shin

The environmental sound at the festival tourist attractions of Haeundae and Gwangalli in Korea was measured, and exposure to constant noise was ascertained by comparison with precedent research, the relevant enforcement regulations, and the conditions of the festival sites. Noise from the array speakers for the main stage of the Ocean Festival in Busan was measured at distances of 0, 1, 5, 10, 15, and 20 meters from the speakers, and the station coordinates and altitude were recorded with an application program. The measured noise before and after sunset differed significantly (<i>p</i> < 0.05), and in the Haeundae case the station 20 meters away reported the loudest level. Before sunset, the noise level differed significantly with distance from the sound sources (<i>p</i> < 0.05). In the Gwangalli tourist attraction, noise levels before and after sunset likewise differed significantly (<i>p</i> < 0.05); there, the level at the farthest station (20 m) was the lowest before sunset, and the variation of level with distance was statistically significant (<i>p</i> < 0.05). In conclusion, the Gwangalli tourist attraction generally had louder noise than Haeundae. Flown (suspended) array speakers may alter the measured sound, as suggested by prior research, and temperature may also influence the difference in noise level between day and night.
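For context on the distance-dependent levels the study measures, the idealized free-field point-source prediction is a 6 dB drop per doubling of distance. The function and reference values below are illustrative, not the study's measurements; a flown line-array speaker at a crowded beach venue can deviate substantially from this model, which is consistent with the study finding the loudest level at the farthest station in one case.

```python
import math

def predicted_spl(spl_ref_db, d_ref_m, d_m):
    """Free-field point-source prediction: level falls 20*log10(d/d_ref) dB."""
    return spl_ref_db - 20.0 * math.log10(d_m / d_ref_m)

# Hypothetical 100 dB reference level measured 1 m from the speakers:
levels = {d: round(predicted_spl(100.0, 1.0, d), 1) for d in (1, 5, 10, 20)}
# levels → {1: 100.0, 5: 86.0, 10: 80.0, 20: 74.0}
```

Comparing measured station levels against this curve is one simple way to quantify how far the venue's array behavior departs from point-source spreading.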


2018 ◽  
Vol 8 (2) ◽  
Author(s):  
Micha Lundbeck ◽  
Giso Grimm ◽  
Volker Hohmann ◽  
Lars Bramsløw ◽  
Tobias Neher

Hearing loss can negatively influence the spatial hearing abilities of hearing-impaired listeners, not only in static but also in dynamic auditory environments. Therefore, ways of addressing these deficits with advanced hearing aid algorithms need to be investigated. In a previous study based on virtual acoustics and a computer simulation of different bilateral hearing aid fittings, we investigated auditory source movement detectability in older hearing-impaired (OHI) listeners. We found that two directional processing algorithms could substantially improve the detectability of left-right and near-far source movements in the presence of reverberation and multiple interfering sounds. In the current study, we carried out similar measurements with a loudspeaker-based setup and wearable hearing aids. We fitted a group of 15 OHI listeners with bilateral behind-the-ear devices that were programmed to have three different directional processing settings. Apart from source movement detectability, we assessed two other aspects of spatial awareness perception. Using a street scene with up to five environmental sound sources, the participants had to count the number of presented sources or to indicate the movement direction of a single target signal. The data analyses showed a clear influence of the number of concurrent sound sources and the starting position of the moving target signal on the participants’ performance, but no influence of the different hearing aid settings. Complementary artificial head recordings showed that the acoustic differences between the three hearing aid settings were rather small. Another explanation for the lack of effects of the tested hearing aid settings could be that the simulated street scenario was not sufficiently sensitive. Possible ways of improving the sensitivity of the laboratory measures while maintaining high ecological validity and complexity are discussed.


2020 ◽  
Author(s):  
James Traer ◽  
Sam V. Norman-Haignere ◽  
Josh H. McDermott

Sound is caused by physical events in the world. Do humans infer these causes when recognizing sound sources? We tested whether the recognition of common environmental sounds depends on the inference of a basic physical variable – the source intensity (i.e. the power that produces a sound). A source’s intensity can be inferred from the intensity it produces at the ear and its distance, which is normally conveyed by reverberation. Listeners could thus use intensity at the ear and reverberation to constrain recognition by inferring the underlying source intensity. Alternatively, listeners might separate these acoustic cues from their representation of a sound’s identity in the interest of invariant recognition. We compared these two hypotheses by measuring recognition accuracy for sounds with typically low or high source intensity (e.g. pepper grinders vs. trucks) that were presented across a range of intensities at the ear or with reverberation cues to distance. The recognition of low-intensity sources (e.g. pepper grinders) was impaired by high presentation intensities or reverberation that conveyed distance, either of which imply high source intensity. Neither effect occurred for high-intensity sources. The results suggest that listeners implicitly use the intensity at the ear along with distance cues to infer a source’s power and constrain its identity. The recognition of real-world sounds thus appears to depend upon the inference of their physical generative parameters, even generative parameters whose cues might otherwise be separated from the representation of a sound’s identity.
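The inference the study tests can be sketched with a minimal acoustics calculation: the level a source produces at the ear, combined with its distance (conveyed by reverberation), jointly implies the source's own power. The function name and numbers below are illustrative assumptions, not from the paper, and the model idealizes propagation as spherical spreading.

```python
import math

def inferred_source_level_db(received_db, distance_m, ref_m=1.0):
    """Level the source would produce at ref_m, inferred from the level at the
    ear, assuming idealized spherical spreading (+6 dB per halving of distance)."""
    return received_db + 20.0 * math.log10(distance_m / ref_m)

# The same 60 dB at the ear implies very different sources at different distances:
nearby = inferred_source_level_db(60.0, 1.0)     # 60.0 dB at 1 m: a quiet source
distant = inferred_source_level_db(60.0, 100.0)  # 100.0 dB at 1 m: a powerful source
```

This is why, per the abstract, a pepper-grinder sound presented loudly or with strong distance-conveying reverberation becomes harder to recognize: under this inference it implies an implausibly powerful pepper grinder.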


1999 ◽  
Vol 58 (3) ◽  
pp. 170-179 ◽  
Author(s):  
Barbara S. Muller ◽  
Pierre Bovet

Twelve blindfolded subjects localized two different pure tones, randomly played by eight sound sources in the horizontal plane. Subjects either could or could not use information supplied by their pinnae (external ears) and by their head movements. We found that the pinnae, as well as head movements, had a marked influence on auditory localization performance with this type of sound. The effects of pinnae and head movements appeared to be additive; the absence of either factor produced the same loss of localization accuracy and even much the same error pattern. Head movement analysis showed that subjects turned their face towards the emitting sound source, except for sources exactly in the front or exactly in the rear, which were identified by turning the head to both sides. The head movement amplitude increased smoothly as the sound source moved from the anterior to the posterior quadrant.


2006 ◽  
Author(s):  
Elizabeth T. Davis ◽  
Kenneth Hailston ◽  
Eileen Kraemer ◽  
Ashley Hamilton-Taylor ◽  
Philippa Rhodes ◽  
...  
