SPATIAL SEPARATION OF INFORMATION IN AN AIRCRAFT COMMUNICATION DEVICE
Abstract
The article presents results of research on the design of human-machine interfaces for aircraft on-board equipment that take into account the multimodal nature of human perception. The focus is on the possibility of wider use of audio channels for the input and output of information. The advantages of auditory interfaces over visual and tactile ones are, above all, that they do not require the pilot's directed attention, that they can create auditory objects in three-dimensional space, and that they can indicate directions to several different objects simultaneously. The experiments tested the possibilities of spatially separating speech information flows in an aircraft intercom in situations where the interference level significantly exceeded the level of the target speech message. Recognition of the target message was evaluated in the presence of two types of acoustic interference: a competing speech message and aircraft engine noise. The results showed that spatial separation of audio messages significantly improves the operator's ability to recognize their content, regardless of the type of interference. The largest number of recognition errors occurs when the target message arrives from the same direction as the interference. Recognition is also significantly better when the message is spoken in a female voice. A spatial asymmetry of correct recognition was revealed as well: messages arriving from the right are recognized better than messages arriving from the left. The practical significance of the research lies in the possibility of creating intercoms with increased robustness both to conflicts between different information flows and to external acoustic noise. A further prospect is the use of three-dimensional audio interfaces not only as part of an intercom, but also in systems for aircraft navigation, control, and state monitoring.
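The article reports experimental results only and contains no implementation detail. Purely as an illustration of the technique under discussion, the sketch below renders a target speech stream and an interfering stream at different azimuths and mixes them so that the interference level exceeds the target level, as in the experiments described above. This is a minimal sketch under stated assumptions: the simplified interaural time/level difference (ITD/ILD) model, the parameter values, and the white-noise placeholders are all illustrative; a real intercom would instead convolve each stream with measured head-related impulse responses, for example from the CIPIC HRTF database cited in the references.

```python
import numpy as np

FS = 44_100             # sample rate, Hz (assumed)
HEAD_RADIUS = 0.0875    # approximate human head radius, m
SPEED_OF_SOUND = 343.0  # m/s

def spatialize(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Crude binaural rendering of a mono signal at a given azimuth.

    Positive azimuth places the source to the right. Uses the Woodworth
    ITD approximation plus a simple broadband ILD; measured HRIRs would
    replace this in a real system.
    """
    az = np.deg2rad(azimuth_deg)
    # Woodworth interaural time difference, seconds
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * FS))
    # Simple broadband level difference: attenuate the far ear by up to ~6 dB
    ild_gain = 10 ** (-6.0 * abs(np.sin(az)) / 20)
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * ild_gain
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)

def mix_at_snr(target: np.ndarray, interferer: np.ndarray,
               snr_db: float) -> np.ndarray:
    """Scale the interferer so the target-to-interference ratio equals snr_db."""
    n = min(len(target), len(interferer))
    t, i = target[:n], interferer[:n]
    scale = np.sqrt(np.mean(t ** 2) / (np.mean(i ** 2) * 10 ** (snr_db / 10)))
    return t + i * scale

# Example: target message at +45 deg, engine-like interference at -45 deg,
# with the interference 10 dB above the target (SNR = -10 dB).
speech = np.random.randn(FS)   # placeholder for a recorded speech signal
engine = np.random.randn(FS)   # placeholder for recorded engine noise
binaural_mix = mix_at_snr(spatialize(speech, +45.0),
                          spatialize(engine, -45.0), -10.0)
```

Rendering each channel at a distinct azimuth is what allows the listener to exploit the "cocktail party" effect discussed in the cited literature; collapsing both streams to the same direction reproduces the worst case reported in the abstract, where the target and the interference coincide spatially.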
References
zvuka [Binaural synthesis in the art of sound recording and reproduction], Sovremennye problemy
nauki i obrazovaniya [Modern problems of science and education], 2015, No. 1-1. Available at:
http://www.science-education.ru/ru/article/view?id=17467 (accessed 26 July 2020).
2. Basyul I.A., Obelets V.S. Opyt registratsii HRTF v reverberatsionnykh usloviyakh [Experience of HRTF recording in reverberation conditions], XIV Vserossiyskaya mul'tikonferentsiya po problemam upravleniya (MKPU-2021): Mater. XIV mul'tikonferentsii (Divnomorskoe, Gelendzhik, 27 sentyabrya – 2 oktyabrya 2021 g.) [XIV All-Russian Multi-conference on Management Problems (MCPU-2021): materials of the XIV multi-conference (Divnomorskoe, Gelendzhik, September 27 – October 2, 2021)]: In 4 vol. Vol. 3. Rostov-on-Don; Taganrog: Izd-vo YuFU, 2021, pp. 26-28.
3. Blauert Y. Prostranstvennyy slukh [Spatial hearing]. Moscow: Svyaz', 1979, 220 p.
4. Danilenko I.A., Nosulenko V.N. Prostranstvennaya asimmetriya slukhovogo vospriyatiya [Spatial
asymmetry of auditory perception], Problemy ekologicheskoy psikhoakustiki [Problems of
ecological psychoacoustics]. Moscow: IPAN, 1991, pp. 117-138.
5. Dvorkovich V.P., Dvorkovich A.V. Teoriya, praktika i metrologiya audiovizual'nykh sistem [Theory, practice and metrology of audiovisual systems]: In 2 books. Book 2. Moscow: Tekhnosfera, 2019.
6. Nosulenko V.N. Psikhologiya slukhovogo vospriyatiya [Psychology of auditory perception].
Moscow: Nauka, 1988, 216 p.
7. Nosulenko V.N. Psikhofizika vospriyatiya estestvennoy sredy. Problema vosprinimaemogo
kachestva [Psychophysics of perception of the natural environment. The problem of perceived
quality]. Moscow: Izd-vo «Institut psikhologii RAN», 2007, 399 p.
8. Nosulenko V.N. Zvuk v interfeysakh vzaimodeystviya cheloveka i tekhniki [Sound in human-technology interaction interfaces], Ekopsikhologicheskie issledovaniya-6: ekologiya detstva i psikhologiya ustoychivogo razvitiya [Ecopsychological research-6: Ecology of childhood and psychology of sustainable development], ed. by V.I. Panov. Moscow: FGBNU «Psikhologicheskiy institut RAO»; Kursk: Universitetskaya kniga, 2020, pp. 155-159.
9. Nosulenko V.N., Basyul I.A., Zybin E.Yu., Lelikov M.A. Prostranstvennoe razdelenie
informatsionnykh potokov v samoletnom peregovornom ustroystve [Spatial separation of information
flows in an airplane intercom device], XIV Vserossiyskaya mul'tikonferentsiya po
problemam upravleniya (MKPU-2021): Mater. XIV mul'tikonferentsii (Divnomorskoe,
Gelendzhik, 27 sentyabrya – 2 oktyabrya 2021 g.) [XIV All-Russian Multi-conference on
Management Problems (MCPU-2021): materials of the XIV multi-conference (Divnomorskoe,
Gelendzhik, September 27 – October 2, 2021)]: In 4 vol. Vol. 3. Rostov-on-Don; Taganrog:
Izd-vo YuFU, 2021, pp. 55-57.
10. Algazi V.R., Duda R.O., Thompson D.M. The CIPIC HRTF database, IEEE Workshop on the
Applications of Signal Processing to Audio and Acoustics, 2001, New Paltz, NY, pp. 99-102.
11. Begault D.R., Wenzel E.M. Techniques and Applications for Binaural Sound Manipulation in
Human-Machine Interfaces, NASA Technical Memorandum 102279, 1990.
12. Brungart D.S., Simpson B.D. Cocktail party listening in a dynamic multitalker environment,
Perception & Psychophysics, 2007, Vol. 69, No. 1, pp. 71-99.
13. Bronkhorst A.W. The Cocktail Party Phenomenon: A Review of Research on Speech Intelligibility
in Multiple-Talker Conditions, Acustica – Acta Acustica, 2000, Vol. 86, pp. 119-128.
14. Cherry E.C. Some Experiments on the Recognition of Speech, with One and with Two Ears,
The Journal of the Acoustical Society of America, 1953, Vol. 25, No. 5, pp. 975-979.
15. Cunio R.J., Dommett D., Houpt J. Spatial Auditory Cueing for a Dynamic Three-Dimensional
Virtual Reality Visual Search Task, Proceedings of the Human Factors and Ergonomics Society
2019 Annual Meeting, 2019, pp. 1766-177.
16. Deleforge A., Horaud R. The Cocktail Party Robot: Sound Source Separation and Localization
with an Active Binaural Head, HRI 2012 – 7th ACM/IEEE International Conference on Human-Robot Interaction, March 2012, Boston, United States, pp. 431-438.
17. Drullman R., Bronkhorst A.W. Multichannel speech intelligibility and talker recognition using
monaural, binaural, and three-dimensional auditory presentation, J. Acoust. Soc. Am., 2000,
Vol. 107, No. 4, pp. 2224-2235.
18. Ephrat A., Mosseri I., Lang O., Dekel T., Wilson K., Hassidim A., Freeman W.T., Rubinstein
M. Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for
Speech Separation, ACM Trans. Graph., 2018, Vol. 37, No. 4, Article 112.
19. Hawley M.L., Litovsky R.Y., Culling J.F. The benefit of binaural hearing in a cocktail party:
Effect of location and type of interferer, The Journal of the Acoustical Society of America,
2004, Vol. 115, No. 2, pp. 833-843.
20. Larsen C.H., Lauritsen D.S., Larsen J.J., Pilgaard M., Madsen J.B. Differences in Human
Audio Localization Performance between a HRTF- and a non-HRTF Audio System, Proceedings
of the AM’13, September 18-20, 2013, Piteå, Sweden.
21. Lim Y., Gardi A., Sabatini R., Ramasamy S., Kistan T., Ezer N., Vince J., Bolia R.
Avionics Human-Machine Interfaces and Interactions for Manned and Unmanned Aircraft,
Progress in Aerospace Sciences, 2018.
22. MacDonald J.A., Tran P.K. The Effect of Head-Related Transfer Function Measurement
Methodology on Localization Performance in Spatial Audio Interfaces, Human Factors: The
Journal of the Human Factors and Ergonomics Society, 2008, Vol. 50, No. 2, pp. 256-263.
23. Romigh G.D., Brungart D.S., Simpson B.D. Free-Field Localization Performance with a Head-
Tracked Virtual Auditory Display, IEEE Journal of Selected Topics in Signal Processing, 2015,
Vol. 9, No. 5, pp. 943-954.
24. Saito K.Y., Iwaya Y., Suzuki Y. The Technique of Choosing the Individualized Head-Related
Transfer Function Based on Localization, Technical Report of IEICE, 2004, Vol. 104, pp. 1-6.
25. Zhang W., Samarasinghe P.N., Chen H., Abhayapala T.D. Surround by Sound: A Review of
Spatial Audio Recording and Reproduction, Appl. Sci., 2017, No. 7, pp. 532-539.
26. Zhong X., Yost W. How many images are in an auditory scene?, The Journal of the Acoustical
Society of America, 2017, Vol. 141, No. 4, pp. 2882-2892. DOI: 10.1121/1.4981118.
27. Ziemer T., Schultheis H. Psychoacoustical signal processing for three-dimensional
sonification, Proceedings of the 25th International Conference on Auditory Display (ICAD 2019), Northumbria University, June 23-27, 2019.