New Sound Technology for PCs

Challenges and the future

Concurrency counts

Each additional algorithm for audio effects or music synthesis creates exciting new opportunities for the content developer, but it creates new challenges for the hardware developer because the algorithm is usually required to run concurrently with many other algorithms. Each algorithm consumes computational resources, so higher concurrency requires faster hardware. The challenge lies not in devising an algorithm that provides the desired effect, but in finding a way to make it work concurrently with all the other algorithms required to provide the full range of desired effects.

For 1998, the concurrency requirement in PCs is largely defined by the APIs that we discussed previously:
DirectSound3D
8 streams (more than 8 localized streams tend to blur into a muddle; about 4 is the limit of what listeners can perceive distinctly, but market requirements will dictate the higher number)

DirectSound
8 streams (speech, telephony, system events, and unlocalized sounds)

DirectMusic
64 voices (downloadable sounds will be used in part for sound effects, so a 64-voice capability will preserve the ability of the wavetable engine to provide 32-voice musical accompaniment)

This list requires significant horsepower, so hardware developers face a serious challenge.
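To get a feel for why this list strains hardware, one can tally a worst-case processing budget. The per-voice MIPS figures below are purely illustrative assumptions, not measured costs, which in practice depend on the specific algorithms and hardware:

```python
# Hypothetical per-voice processing costs in MIPS (assumed for
# illustration only; real figures vary by algorithm and hardware).
COST_MIPS = {
    "DirectSound3D": 3.0,  # 3D localization per stream (assumed)
    "DirectSound":   0.5,  # mixing and format conversion per stream (assumed)
    "DirectMusic":   1.0,  # wavetable synthesis per voice (assumed)
}

# Concurrency requirements from the list above.
CONCURRENCY = {"DirectSound3D": 8, "DirectSound": 8, "DirectMusic": 64}

# Worst case: every stream and voice active at once.
total = sum(COST_MIPS[api] * CONCURRENCY[api] for api in CONCURRENCY)
print(f"Worst-case DSP load: {total:.0f} MIPS")  # prints: Worst-case DSP load: 92 MIPS
```

Even with these modest assumed per-voice costs, full concurrency adds up quickly, which is why the load is a serious design constraint rather than an afterthought.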

MIDI enhancements

An industry group is forming to take MIDI synthesis to the next level. One mandate for the group is to enhance the DLS specification. The current version of the standard was designed for compatibility with as broad a range of existing products as possible. The next version will call for refinements that give content developers more precise control over sounds. Another standardization effort that may play into this effort is MPEG-4, which will define not only a technique for compressing digital audio, but also a protocol for specifying music synthesis functionality.

Better audio quality through USB and 1394

Just as AC97 recognized how difficult it is to accurately convert a digital representation of a waveform to analog on a chip that combines analog and digital circuitry, many manufacturers now recognize that keeping the codec inside the PC limits audio performance for the same reason: digital signals leak into the analog signal. USB and 1394 make it possible to move the codec out of the PC altogether. The digital signal is shipped to a suitably equipped loudspeaker or amplifier, where it is converted to analog in a relatively controlled, electrically quiet environment. Such a system architecture will make it possible for computers to deliver audio performance comparable to what we expect from our hi-fi systems, at little additional cost.

Better music synthesis techniques

While all of these advances in music synthesis and audio effects technology will improve sound quality dramatically, new technology on the horizon will improve it even more. A new technique known as waveguide synthesis can produce musical sounds that are even more realistic and expressive than wavetable synthesis. Waveguide synthesis is based on physical models of instruments – equations that mathematically describe the way an instrument behaves. By programming these equations, it is possible to simulate the instrument electronically. Some parameters of the equations correspond to musically useful characteristics, such as the pressure of the bow against the string or the bite on a reed. Changing these parameters alters the sound in ways comparable to what happens in the real instrument. The use of such advanced synthesis techniques will result in sounds that are not only more realistic, but also vary in natural ways.

On the downside, waveguide synthesis is more compute-intensive than wavetable synthesis, though faster processors will overcome this limitation. It can also be exceedingly difficult to create the physical models. The physics of instruments is very complicated, which is why it remains largely a mystery what makes a Stradivarius violin sound so good. Furthermore, it is almost impossible to know where to start in creating a physical model for an instrument called “Bright,” which is one of the instruments required by General MIDI. Still, because people care about sound quality, expect to see synthesizers with greater realism, ease of use, and expressiveness in PCs.
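The simplest illustration of the waveguide idea is the classic Karplus-Strong plucked-string algorithm: a delay line (the "waveguide") is filled with noise to model the pluck, and a low-pass feedback loop models the string losing energy. This sketch is a minimal textbook example, not any particular product's implementation; the damping value is an assumption chosen for a plausible decay:

```python
import random

def pluck(frequency, sample_rate=44100, duration=0.5, damping=0.996):
    """Karplus-Strong plucked string: the delay-line length sets the
    pitch, and averaging adjacent samples in the feedback path rounds
    off high frequencies the way a real string's vibration decays."""
    random.seed(0)
    n = int(sample_rate / frequency)                   # delay-line length sets pitch
    line = [random.uniform(-1, 1) for _ in range(n)]   # noise burst models the pluck
    out = []
    for i in range(int(sample_rate * duration)):
        s = line[i % n]
        # Low-pass feedback: average with the next sample, scaled by a
        # damping factor, so the tone darkens and fades naturally.
        line[i % n] = damping * 0.5 * (s + line[(i + 1) % n])
        out.append(s)
    return out

samples = pluck(440.0)  # half a second of a plucked A4 string
```

Note how physically meaningful the parameters are: delay-line length is pitch, damping is how quickly the string's energy dissipates. Richer waveguide models extend the same principle with more elaborate filters and excitations, which is exactly where the modeling difficulty described above comes in.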
