We hear a lot about “high-res audio” these days. Sound digitized at 192,000 samples per second must be a lot better than the usual 44,100, right? Well, maybe not.
We can hear sounds only in a certain frequency range. The popular rule of thumb is 20 to 20,000 Hertz, though there’s considerable variation from person to person, and few people can hear anything above 20,000.
The sampling rate of a digital recording determines the highest audio frequency it can capture. To be exact, the sampling rate needs to be more than twice the highest audio frequency recorded. Some headroom beyond that helps, since it makes the anti-aliasing filter easier to design. But does quadrupling the sampling rate offer any benefit? I’m no audio engineer, but the consensus of expert views that I’ve seen says that while it may be valuable in the recording process, delivering playback audio at that rate offers no benefit and may introduce problems. An article by “Monty” on Xiph.org is often linked to and gives a detailed argument for this position. I haven’t found any articles convincingly arguing that the higher sample rate produces better audio, but I don’t claim any technical expertise in digital audio.
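The “more than twice” rule is worth seeing in action: a tone above half the sampling rate doesn’t just vanish, it folds back down as a spurious lower tone. Here’s a small sketch in plain Python (the `alias_frequency` helper is my own name for illustration, not part of any audio library):

```python
def alias_frequency(tone_hz: float, sample_rate_hz: float) -> float:
    """Return the frequency a sampled sine tone actually appears at."""
    # Sampling folds every input frequency into the range
    # [0, sample_rate/2], mirroring around multiples of the Nyquist limit.
    folded = tone_hz % sample_rate_hz
    if folded > sample_rate_hz / 2:
        folded = sample_rate_hz - folded
    return folded

# A 25 kHz tone is above the ~22.05 kHz Nyquist limit of 44.1 kHz audio,
# so it aliases down to an audible-band 19.1 kHz artifact:
print(alias_frequency(25_000, 44_100))   # 19100.0
# At 192,000 samples per second the same tone is captured faithfully:
print(alias_frequency(25_000, 192_000))  # 25000.0
```

This is why higher rates help during recording and processing, even if the final delivery format doesn’t need them: they keep ultrasonic content from folding into the audible band before it can be filtered out.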
The other aspect of “high-res” audio is the number of bits per sample. Most current digital audio is 16 bits, but “high-res” audio often offers 24 bits. This means more dynamic range, and it isn’t likely to cause any problems. Does it offer any benefit? Articles such as this suggest it doesn’t. 16-bit audio gives you a signal-to-quantization-noise ratio of about 96 decibels, which conceivably falls short only for people who really like to deafen themselves. It seems unlikely to benefit normal listeners.
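That 96-decibel figure follows directly from the bit depth: each bit doubles the number of amplitude levels, adding roughly 6 dB of dynamic range. A quick back-of-the-envelope check in Python (the function name is my own):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range of linear PCM audio.

    With 2**bits amplitude levels, the ratio of full scale to the
    smallest step is 2**bits, or 20*log10(2**bits) decibels.
    """
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3  -- the CD figure
print(round(dynamic_range_db(24), 1))  # 144.5 -- far beyond any playback chain
```

The 24-bit result of roughly 144 dB exceeds what any consumer equipment, or human hearing, can make use of on playback, which is the point the linked articles make.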
In practice, over-compressed and otherwise badly processed files look like a much more important issue than insufficient bits in the format. Many MP3 files are really bad that way, and all it takes to avoid the problem is a willingness not to skimp on encoding quality.
If any digital audio experts would like to chime in with clarifications and corrections, please do.