ROOM ACCOMMODATIONS

As a general rule, conventions of the Audio Engineering Society inundate us with the newest in signal processing, digital workstations, loudspeakers, and multi-channel recording equipment. But it is not often that we hear something genuinely new. At the October 1991 convention at the New York Hilton, we were presented with two sonic treats that deserve special attention.

The first of these, auralization, is an outgrowth of several computer programs that allow a sound system designer to model loudspeaker performance in a given environment. These programs normally provide a set of architectural plan views showing loudspeaker coverage in the area occupied by the listeners. Many of these programs have, as part of their analysis, an image-modeling subprogram that allows the designer to examine the pattern of all early reflections arriving at a selected listening position for a given speaker location. The view of this on the computer monitor is simply a group of vertical lines, arrayed horizontally, representing the individual arrival times of the reflections received by the listener; the height of each line represents the strength of that reflection. However, much more data has been generated in the image-modeling process, and the program also stores the "3-D" direction from which each reflection arrives at the listener.

In signal-analysis terms, this information about the reflected acoustical energy corresponds to the impulse response of the room over the path between the sound source and the listener. Then, through a mathematical technique known as convolution, it is possible to combine an anechoic signal with the pattern of reflections and generate a new signal representing what the room may actually sound like with a listener and loudspeaker at the assumed positions. (A simple sketch of this step appears below.)

Imagine that a house of worship wants to purchase a new speech reinforcement system. The sound committee will go to an acoustical consultant, who will lay out a proposed system. Making use of auralization, the consultant will be able to "demonstrate" what the system will sound like at various assumed positions in the actual space. Several design options can be auditioned, and the differences between a central array and a distributed array of loudspeakers may easily be demonstrated. The auralization can take place over binaural headphones or, with more detailed convolution, over multiple loudspeaker channels in a dedicated auralization environment.

None of this is new in principle. But in the past, such demonstrations have required large mainframe computers and have been well outside the normal commercial scene. Soon, such capability will be more freely available, making use of personal computers outfitted with extended memory. At the convention, an entire technical session was devoted to auralization, and several demonstrations were held. More impressive demonstrations were to be heard in the Bose and Renkus-Heinz suites. All of these systems are still in the early stages of development, and it will probably be another 18 to 24 months before most of the bugs are worked out. Looking a bit further into the future, it is almost certain that concert halls, as yet unbuilt, will be able to be accurately auralized, given thoughtful and complete modeling.

In another area related to room acoustics, Lexicon was demonstrating a remarkable system for adding reverberation to acoustically dead spaces. Their process is called LARES, the Lexicon Acoustic Reverberance Enhancement System.
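Before turning to that demonstration, here is a minimal sketch, in Python, of the convolution step described above. The anechoic file name and the reflection times and gains are invented purely for illustration; a real auralization program would derive them, along with the 3-D arrival directions, from its image model.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical early-reflection pattern at one listening position:
# (arrival time in milliseconds, linear gain).  An image model would
# supply these values, plus the 3-D direction of each arrival.
reflections = [(0.0, 1.00),   # direct sound
               (11.3, 0.42),  # floor bounce
               (18.7, 0.35),  # near side wall
               (24.9, 0.28),  # ceiling
               (41.2, 0.18)]  # rear wall

# Hypothetical mono anechoic recording of the source material.
rate, dry = wavfile.read("anechoic_speech.wav")
dry = dry.astype(np.float64) / np.max(np.abs(dry))

# Build a sparse impulse response for the source-to-listener path.
impulse = np.zeros(int(rate * 0.060) + 1)   # 60 ms covers the reflections above
for ms, gain in reflections:
    impulse[int(rate * ms / 1000.0)] += gain

# Convolving the anechoic signal with the room's impulse response
# yields an approximation of what that seat may actually sound like.
wet = fftconvolve(dry, impulse)
wet /= np.max(np.abs(wet))                  # normalize to avoid clipping

wavfile.write("auralized.wav", rate, (wet * 32767.0).astype(np.int16))
```

A full auralization system would of course use a far denser, direction-aware impulse response and render the result binaurally or over multiple loudspeakers, but the underlying operation is this same convolution.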
A demonstration of LARES in a normal-size room made use of eight loudspeakers placed at the junctions of the ceiling and side walls. A stereo microphone was located on a stand in the middle of the room, and the microphone outputs were fed to eight reverberator/amplifier combinations, one for each loudspeaker. When the system was turned on, I immediately felt that I was in a large, naturally reverberant space. Even the slightest sounds (whispering, for example) reverberated naturally. Reverberation time and level could be varied, and the simulated space could easily be altered. But under none of these conditions did the system show any signs of instability, or even hint at going into feedback or howling!

How could such high system gain be achieved with absolute stability? Years ago, some speech reinforcement system designers made use of frequency shifters to increase gain before feedback. It takes a finite amount of time for a system to go into feedback, and the frequency shifter foiled the process by constantly changing phase relationships through the system. The trouble was that the frequency shift became audible when it was applied in an amount large enough to be an effective deterrent to feedback.

The LARES solution is twofold: First, eight uncorrelated reverberators are used; second, each reverberation channel includes a random, time-variant delay function. Both of these work to foil the feedback process. So effective is the feedback immunity of LARES that it was possible to position the microphone within about 20 inches of one of the loudspeakers with no degradation in system performance. Additionally, LARES can be set so that the reverberation time is appropriately long for music; when speech is detected, the reverberation parameters are adjusted for better speech intelligibility.

I know dozens of U.S. performance spaces that could benefit from using LARES to gain more acoustic liveness. Electronic architecture has been making steady strides for years. With LARES, it takes a giant leap!
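Lexicon has not published the details of its processing, but the second ingredient, a randomly time-varying delay, can be sketched in a few lines of Python. Everything below (the wander rate, the delay range, the interpolation scheme) is an assumption chosen only to illustrate the principle, not Lexicon's actual algorithm.

```python
import numpy as np

def random_time_variant_delay(x, rate, base_ms=20.0, wander_ms=2.0,
                              wander_hz=0.5, seed=0):
    """Delay x by roughly base_ms, with the delay length drifting slowly
    and randomly by up to +/- wander_ms.  Because the delay never sits
    still, the phase relationship around a microphone-loudspeaker loop
    keeps changing, denying feedback the time it needs to build up."""
    rng = np.random.default_rng(seed)
    n = len(x)

    # Slow random wander: a few random values per second, linearly
    # interpolated up to the audio rate (a crude low-pass of white noise).
    steps = max(int(np.ceil(n * wander_hz / rate)) + 2, 2)
    coarse = rng.uniform(-1.0, 1.0, steps)
    wander = np.interp(np.linspace(0.0, steps - 1, n), np.arange(steps), coarse)

    delay = (base_ms + wander_ms * wander) * rate / 1000.0   # delay in samples
    out = np.zeros(n)
    for i in range(n):
        r = i - delay[i]                 # fractional read position
        if r < 0:
            continue                     # still inside the initial delay
        j = int(r)
        frac = r - j
        # Linear interpolation between neighbouring input samples.
        out[i] = (1.0 - frac) * x[j] + frac * x[min(j + 1, n - 1)]
    return out
```

In a LARES-style installation, each of the eight loudspeaker channels would presumably run its own wandering delay (a different seed for each) feeding its own uncorrelated reverberator, so that no two microphone-to-loudspeaker paths hold a fixed phase relationship long enough for regeneration to take hold.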
(adapted from Audio magazine, Feb. 1992)