Virtual reality (VR) systems are used in engineering, architecture, design, and biomedical research. The acoustic component of such VR systems enables the creation of audio-visual stimuli for applications in room acoustics, building acoustics, automotive acoustics, environmental noise control, machinery noise control, and hearing research. The basis is an appropriate acoustic simulation and auralization technique together with signal processing tools. Auralization is based on time-domain modelling of sound source characterization and sound propagation, and on spatial audio technology. Whether the virtual environment is considered sufficiently accurate depends on many perceptual factors, and on the pre-conditioning and immersion of the user in the virtual environment. In this paper, the processing steps for the creation of Virtual Acoustic Environments and the achievable degree of realism are briefly reviewed. Applications are discussed with examples from room acoustics, archeological acoustics, aircraft noise, and audiology.
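At its core, the time-domain auralization mentioned above renders the sound reaching the listener as the dry source signal convolved with a simulated impulse response of the propagation path. A minimal sketch of this principle (the two-tap "room" below is a toy illustration, not a simulated room response):

```python
import numpy as np

def auralize(dry_signal, impulse_response):
    """Time-domain auralization sketch: the sound at the listener is the
    dry source signal convolved with the propagation impulse response."""
    return np.convolve(dry_signal, impulse_response)

# Toy example: a unit-impulse source and a "room" with a direct path
# plus one attenuated, delayed reflection (illustrative values only).
rir = np.array([1.0, 0.0, 0.5])
wet = auralize(np.array([1.0]), rir)  # reproduces the impulse response
```

In a real auralization chain, the impulse response would come from an acoustic simulation (e.g. geometrical acoustics or wave-based methods) and would be combined with source directivity and spatial audio reproduction.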
Virtual reality (VR) technology now provides players with immersive and realistic experiences as never before. Spatial presence, a distinct feeling of personal and physical presence in the displayed environment, plays a crucial role in inducing immersive experience in a VR environment. In this study, we found that a first-person perspective (1PP) was more effective than a third-person perspective (3PP) in raising the sense of spatial presence that induces immersive experience in a VR shooting game. Moreover, eye blink rate was significantly higher in the 1PP than in the 3PP. The 1PP game setting was more realistic than the 3PP setting, and may have raised participants’ sense of immersion and facilitated eye blinking. These results indicate that eye blink rate increases with the sense of spatial presence and can serve as a good measure of subjective immersive experience in a VR environment. Neuroscientific evidence suggests that the dopaminergic system is involved in such emotional experiences and physiological responses.
Rapid development of computing and visualisation systems has resulted in an unprecedented capability to display, in real time, realistic computer-generated worlds. Advanced techniques, including three-dimensional (3D) projection supplemented by multi-channel surround sound, create immersive environments whose applications range from entertainment to military and scientific use. Among the most advanced virtual reality systems are CAVE-type systems, in which the user is surrounded by projection screens. Knowledge of the screen material’s scattering properties, which depend on projection geometry and wavelength, is mandatory for the proper design of these systems. In this paper, this problem is addressed by introducing a scattering distribution function, creating a dedicated measurement setup, and investigating the properties of selected materials used for rear-projection screens. Based on the obtained results, it can be concluded that the choice of screen material has a substantial impact on the performance of the system.
This paper analyses the performance of the Differential Head-Related Transfer Function (DHRTF), an alternative transfer function for headphone-based virtual sound source positioning in the horizontal plane. This experimental one-channel function is used to reduce processing and avoid timbre coloration while preserving the signal features important for sound localisation. A positioning algorithm employing the DHRTF is compared with two other common positioning methods: amplitude panning and HRTF processing. Results of a theoretical comparison and of quality assessment of the methods by subjective listening tests are presented. The tests focus on distinctive aspects of the positioning methods: spatial impression, timbre coloration, and loudness fluctuations. The results show that the DHRTF positioning method is applicable and performs very promisingly; it avoids the perceptible channel coloration that occurs with the HRTF method, and it delivers spatial impression more successfully than simple amplitude panning.
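As context for the two baseline methods compared above: amplitude panning distributes a mono signal between channels using gains alone, while HRTF processing convolves the signal with a per-ear impulse response. A minimal sketch of both (the gain law and toy impulse responses are illustrative; this is not the paper's DHRTF):

```python
import numpy as np

def amplitude_pan(mono, azimuth_deg):
    """Constant-power stereo panning: per-channel gains derived from the
    panning angle alone, with no spectral shaping."""
    theta = np.deg2rad((azimuth_deg + 90.0) / 2.0)  # map [-90, 90] -> [0, 90] deg
    return np.stack([np.cos(theta) * mono, np.sin(theta) * mono])

def hrtf_position(mono, hrir_left, hrir_right):
    """Binaural positioning: convolve the signal with per-ear head-related
    impulse responses, which imposes direction-dependent coloration."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

signal = np.ones(4)
centred = amplitude_pan(signal, 0.0)  # centre: equal power in both channels
```

The contrast visible even in this sketch motivates the comparison: panning changes only level differences, whereas convolution with measured responses alters the timbre of each channel.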
In recent years, many scientific and industrial centres worldwide have developed virtual reality systems or laboratories. At present, among the most advanced virtual reality systems are CAVE-type (Cave Automatic Virtual Environment) installations. Such systems usually consist of four, five, or six projection screens arranged to form a closed or semi-closed space. The basic task of such systems is to ensure the effect of user “immersion” in the surrounding environment. This immersion effect largely depends on the optical properties of the system, especially on the quality of the projection of three-dimensional images. In this paper, techniques for the projection of three-dimensional (3D) images in CAVE-type virtual reality systems are analysed, and the requirements these techniques must meet in such systems are outlined. Based on measurements performed in a unique CAVE-type virtual reality laboratory equipped with two different 3D projection techniques, the Immersive 3D Visualization Lab (I3DVL) recently opened at the Gdańsk University of Technology, the stereoscopic parameters and colour gamut of the Infitec and Active Stereo stereoscopic projection techniques are examined and discussed. The obtained results make it possible to estimate projection system quality for application in CAVE-type virtual reality installations.
The use of virtual reality (VR) has been increasing exponentially, and as a result many researchers have started to develop new VR-based social media. For this purpose, it is important that an avatar resembling the user can be generated easily with widely accessible devices such as mobile phones. In this paper, we propose a novel method for recreating a 3D human face model from image or video data captured with a phone camera. The method focuses on model shape rather than texture in order to make the face recognizable. We detect 68 facial feature points and use them to separate the face into four regions. For each region, the best-fitting models are found; these are then combined and morphed to restore the original facial proportions. We also present a method of texturing the resulting model, in which the aforementioned feature points are used to generate a texture.
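The pipeline described in the last abstract hinges on partitioning a standard 68-point facial landmark set into regions before per-region model fitting. A minimal sketch of that partitioning step, assuming the widely used iBUG 68-point indexing (jaw 0-16, brows 17-26, nose 27-35, eyes 36-47, mouth 48-67); the paper does not specify its four regions, so the grouping below is illustrative:

```python
import numpy as np

# Illustrative four-region grouping over the iBUG 68-point landmark scheme.
LANDMARK_GROUPS = {
    "jaw": list(range(0, 17)),
    "eyes_brows": list(range(17, 27)) + list(range(36, 48)),
    "nose": list(range(27, 36)),
    "mouth": list(range(48, 68)),
}

def split_landmarks(points):
    """Partition an array of 68 (x, y) landmarks into four facial regions,
    returning a dict of region name -> landmark coordinates."""
    points = np.asarray(points)
    assert points.shape == (68, 2), "expected 68 two-dimensional landmarks"
    return {name: points[idx] for name, idx in LANDMARK_GROUPS.items()}
```

In practice the 68 points would come from an off-the-shelf landmark detector run on the phone image; each region's coordinates would then drive the search for the best-fitting 3D model parts.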