The aim of the study was to examine the relationship between tinnitus pitch and the maximum hearing loss, the frequency range of hearing loss, and the edge frequency of the audiogram, as well as to analyze tinnitus loudness matched at the tinnitus frequency and at a frequency with normal hearing.
The study included 212 patients aged 21 to 75 years (mean age 54.4 ± 13.5 years) with chronic subjective tinnitus and sensorineural hearing loss. For the statistical analysis we used the chi-square test and Fisher's exact test, with a significance level of p < 0.05.
Tinnitus pitch corresponded to the frequency range of hearing loss, the maximum hearing loss, and the edge frequency in 70.8%, 37.3%, and 16.5% of the patients, respectively. In the majority of patients the tinnitus pitch lay between 3000 and 8000 Hz, corresponding to the range of hearing loss (p < 0.001). The mean tinnitus pitch was 3545 ± 2482 Hz. The majority (66%) of patients had a tinnitus loudness of 4–7 dB SL. The mean sensation level was 4.9 ± 1.9 dB SL at the tinnitus frequency and 13 ± 2.9 dB SL at a normally hearing frequency.
Tinnitus pitch corresponded to the frequency range of hearing loss in the majority of patients. There was no relationship between tinnitus pitch and the edge frequency of the audiogram. Loudness matching outside the tinnitus frequency yielded a higher sensation level than loudness matching at the tinnitus frequency.
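For illustration only, the two tests named in the methods above can be sketched for a 2x2 contingency table; this is not the study's analysis code, and the counts and grouping below are invented.

```python
# Hypothetical sketch of a chi-square test and Fisher's exact test on a
# 2x2 table, stdlib-only; the counts are invented for illustration.
from math import comb

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value: sum the hypergeometric
    probabilities of every table with the same margins that is no more
    likely than the observed one."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    def prob(x):  # P(top-left cell == x) with all margins fixed
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    return sum(prob(x)
               for x in range(max(0, c1 - r2), min(r1, c1) + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# invented counts: pitch matches the hearing-loss range (rows) versus a
# hypothetical severity cutoff (columns)
table = (60, 30, 25, 40)
print("chi-square statistic:", round(chi2_2x2(*table), 3))
print("Fisher exact p:", round(fisher_exact_2x2(*table), 5))
```

In practice one would reach for a statistics package rather than hand-rolled tests; the point here is only what the two tests compute.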
The main goal of this study is to create a method of loudness scaling based on categorical perception. Its main features, including the testing procedure, a calibration procedure that secures reliable results, and the use of natural test stimuli, are described in the paper and assessed against a procedure that uses 1/2-octave bands of noise (LGOB) for loudness-growth estimation. The Mann-Whitney U-test is employed to check whether the proposed method is statistically equivalent to LGOB, and the loudness functions obtained with the two methods are shown to be statistically similar. Moreover, the band-filtered musical-instrument signals are experienced as more pleasant than the narrow-band noise stimuli, and the proposed test takes less time to perform. The proposed method may be incorporated into hearing-aid fitting strategies, or used to check individual loudness-growth functions and adapt them to comfort-level settings while listening to music.
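As a sketch of the statistical comparison described above (not the paper's implementation), the Mann-Whitney U statistic with a normal-approximation p-value; the sample data are hypothetical, and the variance term carries no tie correction, so this is illustrative only.

```python
# Illustrative Mann-Whitney U-test, as one might use to compare loudness
# functions from two procedures; no tie correction in the variance.
from math import erf, sqrt

def mann_whitney_u(x, y):
    """U statistic for sample x: number of pairs (x_i, y_j) with
    x_i > y_j, counting exact ties as one half."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

def u_test_p(x, y):
    """Two-sided p-value from the normal approximation to U
    (adequate for samples of roughly 20 or more)."""
    m, n = len(x), len(y)
    u = mann_whitney_u(x, y)
    mu = m * n / 2.0
    sigma = sqrt(m * n * (m + n + 1) / 12.0)
    z = abs(u - mu) / sigma
    return 1.0 - erf(z / sqrt(2.0))  # equals 2 * (1 - Phi(z))

# hypothetical loudness-category responses from the two procedures
print(u_test_p([3, 4, 4, 5, 3, 4], [3, 3, 4, 4, 3, 5]))
```

A large p-value here is what "statistically equivalent" informally amounts to: no detectable difference between the two response distributions.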
The present study was carried out to determine whether recorded musical tones played at various pitches on a clarinet, a flute, an oboe, and a trumpet are perceived as equally loud when presented to listeners at the same A-weighted level. This psychophysical investigation showed systematic effects of both instrument type and pitch that could be related to spectral properties of the sounds under consideration. The level adjustments needed to equalize loudness well exceeded typical just-noticeable differences (JNDs) in signal level, confirming that the A-weighted level is an insufficient loudness predictor for musical sounds. Consequently, more elaborate computational loudness prediction is called for, together with a thorough investigation of the factors affecting the perceived loudness of musical sounds.
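To make explicit what "the same A-weighted level" equalizes, here is a sketch of the standard A-weighting curve (the analytic magnitude response given in IEC 61672), which discounts low and very high frequencies relative to the 1 kHz reference:

```python
# Standard A-weighting magnitude response (analytic form from IEC 61672),
# offset so that the weight at 1000 Hz is approximately 0 dB.
from math import log10, sqrt

def a_weighting_db(f):
    """A-weighting in dB at frequency f in Hz."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * log10(ra) + 2.0  # +2.0 dB normalizes the 1 kHz point

for f in (100, 440, 1000, 4000):
    print(f, "Hz:", round(a_weighting_db(f), 1), "dB")
```

Because this single frequency weighting ignores spectral shape, bandwidth, and masking, two instrument tones at identical A-weighted levels can carry their energy in very differently weighted regions, which is consistent with the loudness differences the study reports.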
In Western music culture, instruments have been developed according to unique acoustical features based on their types of excitation, resonance, and radiation; these include the woodwind, brass, bowed and plucked string, and percussion families. Instrument performance, in turn, depends on musical training, and music listening depends on perception of the instrument's output. Since musical signals are easier to understand in the frequency domain than in the time domain, much effort has gone into spectral analysis and the extraction of salient parameters, such as spectral centroids, in order to create simplified models for musical instrument sound synthesis. Perceptual tests have also been carried out to determine the relative importance of various parameters, such as spectral centroid variation, spectral incoherence, and spectral irregularity. It turns out that the importance of a particular parameter depends both on its strength within musical sounds and on the robustness of its effect on perception. The methods that the author and his colleagues have used to explore timbre perception are: 1) discrimination of parameter reduction or elimination; 2) dissimilarity judgments combined with multidimensional scaling; and 3) informal listening to sound-morphing examples. This paper discusses the ramifications of this work for sound synthesis and timbre transposition.
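The spectral centroid mentioned above is simply the amplitude-weighted mean frequency of a sound's magnitude spectrum; a minimal sketch (not the author's analysis code):

```python
# Illustrative spectral-centroid computation: the amplitude-weighted mean
# frequency of the one-sided magnitude spectrum.
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Return the centroid frequency in Hz of a real-valued signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# a pure 440 Hz tone has its centroid at (essentially) 440 Hz
sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
print(spectral_centroid(tone, sr))  # close to 440
```

A bright tone with strong upper partials has a higher centroid than a dull one with the same fundamental, which is why centroid variation is a useful handle on timbre.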