Digital recordings

monotonic

I found this article that sums up a widespread belief about digital vs. analog audio, i.e. CD vs. turntable. I have always wanted to know what the Cs would have to say about this.

http://www.drjohndiamond.com/digital/717-human-stress-provoked-by-digitized-recordings# said:
I have tested many thousands of phonograph recordings made over a period of more than eighty years, and have found that almost all examples have been therapeutic, often highly so.[3] In 1979 this changed. I suddenly found that I was not achieving the same therapeutic results as before, that playing records of the same compositions to the same patients was producing a completely contrary effect! Instead of their stress being reduced and their Life Energy being actuated, the opposite was occurring! For instance, music that I had long used to promote sleep now seemed to be actually aggravating the insomnia. I found in one case that instead of the music helping a patient withdraw from tranquilizers, it seemed to increase his need for them. Special tapes for businesspeople to use during their rest periods seemed suddenly to increase rather than reduce their stress. These findings were very alarming.

When I investigated these and many other paradoxical phenomena, I found that in all cases they were related to the use of digital recordings. These were vinyl records (and later CDs) made from digital masters.[4] When I substituted analog versions of the same work, sometimes even with the same performers, the positive therapeutic effects were again obtained. There seemed to me little doubt that something was “wrong” with the digital process. Apparently the digital recording technique not only did not enhance Life Energy and reduce stress, but it was actually untherapeutic; that is, it imposed a stress and reduced Life Energy. Through some mechanism, some severely detrimental effect on the Acupuncture Emotional System, the digital process was somehow reversing the therapeutic effects of the music!

My question is, if the effect is real, what makes the digital recordings bad? Is the problem the sound or is it something else that affects the listener having to do with the electronics? Is digital fundamentally flawed or can it be fixed?
 
Along that line I wondered if you could reduce or eliminate the brain-jellifying effects that are encoded in modern music. I wonder if it would be as simple as running it through some filter passes. I have an electronic / Drum and Bass album that gives me a headache, but the songs are so good! :P
 
I don't have any answer, but I too have wondered about this. Everything is being digitized nowadays, for easier access and use, but I wonder if this change into a completely digital world is "by design", and "inspired" by some 4D STS thought centers to manipulate us (digitization also makes it easier to manipulate the information).
 
This is a very good question. On many occasions I have asked myself the same question, for good reason: I use digital homeopathy, that is, EM recordings of homeopathic frequencies in distilled water, recorded in digital form. Many of these work positively for me, but it is true that if I resample the contents of these recordings (for example from 44.1 kHz to 48 kHz), they lose much of their effectiveness. So it is possible that they would work better as analogue recordings.
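For reference, here is a minimal sketch of what that kind of sample-rate conversion actually does to the data (assuming Python with SciPy is available; the 440 Hz tone is just a stand-in test signal). Every output sample is recomputed by interpolation, so none of the original sample values survive unchanged:

```python
# A sketch of 44.1 kHz -> 48 kHz resampling (assumes scipy is installed).
# 48000/44100 reduces to 160/147, so the resampler interpolates every
# output sample from a neighborhood of input samples.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44_100, 48_000
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 440 * t)           # one second of a 440 Hz test tone

y = resample_poly(x, up=160, down=147)    # 44.1 kHz -> 48 kHz
print(len(x), "->", len(y))               # 44100 -> 48000 samples per second
```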

On the other hand, I've always had the intuition that the most powerful medium for recording the 'original' energy of something is ferrite-based tape, with its characteristic background noise (a sea of white noise from which information fields are formed). You could record healthy intentions on tape (with your voice or thoughts) and then play them back to drinking water, so to speak...
 
Well, I can tell you this much:

Digital audio uses tricks to achieve smaller data sizes. MP3s, for example, use a model of human hearing so that when the music is compressed, not everything is preserved.

Basically, your music contains Frequency A and Frequency B. When you put those 2 frequencies together, you get the simple sum of the 2 signals (linear mixing) - or sometimes you also get new frequencies at fA + fB and fA - fB (nonlinear mixing).

Point is, you get something new. That something new may be outside the standard human hearing range. If it is, digital audio compression algorithms will simply throw it out.

In any case, when the digital music is played back, it must be converted back into analog form to be amplified and then drive a speaker coil so you can actually hear the music.

In the case of an analog recording, the sound can be recorded "more faithfully". The quality of the old analog recording may be lower (static, hiss, etc.), but you probably end up with a wider variety of frequencies that are either missing or "reconstructed" in the digital recording. And anyway, at best, converting a bunch of 1's and 0's back into an analog waveform is a facsimile of the original sound waves - a very good one, mind you, but there is still stuff missing.

That's a basic description, but you get the idea. Digitizing audio means you lose something, and compressing the data means you lose even more "something".
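To make that concrete, here is a toy numerical sketch of the point above (my own illustration, not anything from the original post): two tones passed through a nonlinearity produce sum and difference frequencies, and a component landing above ~20 kHz is exactly the kind of thing a psychoacoustic codec treats as inaudible and discards.

```python
# Toy illustration: nonlinear mixing of two tones creates new
# frequencies (fa+fb, fa-fb, 2fa, 2fb), some outside the hearing band.
import numpy as np

fs = 96_000                       # high sample rate so ultrasonics are visible
t = np.arange(fs) / fs            # one second
fa, fb = 14_000.0, 9_000.0        # two arbitrary test tones (Hz)

linear = np.sin(2*np.pi*fa*t) + np.sin(2*np.pi*fb*t)  # linear mix: only fa, fb
nonlinear = linear + 0.2 * linear**2                  # crude nonlinearity
nonlinear -= nonlinear.mean()                         # drop the DC offset

spectrum = np.abs(np.fft.rfft(nonlinear))
freqs = np.fft.rfftfreq(len(nonlinear), 1/fs)

peaks = freqs[spectrum > 0.05 * spectrum.max()]
print("strong components (Hz):", np.round(peaks))       # 5k, 9k, 14k, 18k, 23k, 28k
print("outside hearing range:", peaks[peaks > 20_000])  # 23 kHz and 28 kHz
```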

If you've ever stood in front of a big speaker that was very loud, you could feel the bass. Now, imagine if that bass is a very low frequency. You can't really hear it that well, but you can literally feel it at loud volumes. Well, what if it has some effect at low volumes, too? And what if that low frequency is one that is cut out and then reproduced in a digital recording?

So it seems to me that yeah, analog recordings can have all kinds of stuff that digital ones lack.
 
Scottie said:
[...]

So it seems to me that yeah, analog recordings can have all kinds of stuff that digital ones lack.

That's my understanding as well. As the quoted article says, the problem is that everything is digital on the production side of things, so it doesn't really matter whether you buy a vinyl record or a CD, since the music is converted to digital and back, sometimes multiple times!

One aspect that could also play a role is compression/limiting: nowadays, every piece of music is heavily compressed, meaning that the audio content that is high in volume/audio energy is "brought down" so that everything becomes more even. Then a mastering limiter is applied, which basically "clips" every peak of the signal. This is done to such an extreme extent that many songs look like a "brick" when you view the waveform. See the different releases of a Michael Jackson song over time from 1995-2007:

[Image: waveforms of Michael Jackson's "Black or White" as released from 1995 to 2007, each successively louder and more brickwalled]

(https://en.wikipedia.org/wiki/Loudness_war)


This is done so that your song is "the loudest" on the radio or iPhone and so on, because we normally perceive something louder as "better". Of course, you could just turn up the volume, but apparently that's too much for folks... The problem is that with a lot of compression, you are killing the dynamic range: there are no subtleties anymore, no difference between a light keystroke and a strong one, no subtleties in the voice, and so on. One could say that with compression and limiting, you kill the "soul" of a piece of music.
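To sketch what that does to a signal, here is a toy static compressor plus hard limiter (my own illustration, nothing like a real mastering chain): the quiet parts come up, the peaks get clipped, and the spread between the loudest and quietest passages collapses, which is exactly the "brick" look.

```python
# Toy dynamic range compression + brickwall limiting (illustration only).
import numpy as np

def compress(x, threshold=0.2, ratio=8.0):
    """Crude static compressor: reduce level above `threshold` by `ratio`."""
    y = x.copy()
    over = np.abs(x) > threshold
    y[over] = np.sign(x[over]) * (threshold + (np.abs(x[over]) - threshold) / ratio)
    return y

def limit(x, makeup=4.0, ceiling=0.3):
    """Apply makeup gain, then hard-clip every peak at `ceiling`."""
    return np.clip(x * makeup, -ceiling, ceiling)

def level_range_db(sig, fs, win=0.1):
    """Spread between the loudest and quietest 100 ms windows, in dB."""
    n = int(fs * win)
    frames = sig[: len(sig) // n * n].reshape(-1, n)
    rms = np.sqrt(np.mean(frames**2, axis=1))
    return 20 * np.log10(rms.max() / rms.min())

fs = 44_100
t = np.arange(fs) / fs
# Fake "music": a tone whose loudness swells and fades, i.e. dynamics.
x = np.sin(2*np.pi*440*t) * (0.05 + 0.95*np.sin(np.pi*t))
y = limit(compress(x))

print(f"dynamic range before: {level_range_db(x, fs):.1f} dB")
print(f"dynamic range after:  {level_range_db(y, fs):.1f} dB")  # squashed flat
```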

With digitally (re)mastered songs this is widely done, and it has gotten worse over time, so it may play a role in addition to the analog-to-digital conversion problems that Scottie mentioned.
 
luc said:
This is done so that your song is "the loudest" on the radio or iPhone and so on, because we normally perceive something louder as "better". Of course, you could just turn up the volume, but apparently that's too much for folks... The problem is that with a lot of compression, you are killing the dynamic range: there are no subtleties anymore, no difference between a light keystroke and a strong one, no subtleties in the voice, and so on. One could say that with compression and limiting, you kill the "soul" of a piece of music.

:scared:

That's evil!

Well, I guess the only solution is to go to a concert and listen to a live orchestra...
 
Scottie said:
:scared:

That's evil!

Well, I guess the only solution is to go to a concert and listen to a live orchestra...

Yeah, a good live orchestra in a good concert hall is just amazing, and the dynamic range is huge.

That being said, I still think a good song is a good song, and while recorded music may lose something (or even a lot) due to digital processing/compression and so on, good songs can still convey something positive, OSIT...
 
Regarding MP3s: the high frequencies and very low frequencies get cut out, and that's one of the reasons the files are so much smaller. Some people say they cannot listen to MP3 files because they sound horrible, but I don't know if most listeners can really hear the difference. In the end it also depends on the equipment you use to listen to the music. But you definitely have quality loss going from analog to MP3. Maybe over the years MP3s will get bigger and gain quality again, since storage keeps increasing.
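The frequency-cutting described above can be imitated crudely (this is not how an MP3 encoder actually works internally, but low-bitrate encoders really do roll off the top end around 15-16 kHz): zero out everything above a cutoff and watch the high tone disappear.

```python
# Crude band-limiting sketch: remove all content above a cutoff via the
# FFT. Real MP3 encoders are far more sophisticated (psychoacoustics,
# subbands), but the "high frequencies get cut" effect looks like this.
import numpy as np

def band_limit(x, fs, cutoff_hz):
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1/fs)
    spec[freqs > cutoff_hz] = 0.0          # discard everything above cutoff
    return np.fft.irfft(spec, n=len(x))

def tone_level(sig, fs, f):
    """Amplitude of the component at frequency f (whole-number Hz, 1 s signal)."""
    spec = np.abs(np.fft.rfft(sig))
    return 2 * spec[int(f * len(sig) / fs)] / len(sig)

fs = 44_100
t = np.arange(fs) / fs
x = np.sin(2*np.pi*1_000*t) + 0.5*np.sin(2*np.pi*18_000*t)  # "music" stand-in

y = band_limit(x, fs, cutoff_hz=16_000)
print("18 kHz level:", tone_level(x, fs, 18_000), "->", tone_level(y, fs, 18_000))
print(" 1 kHz level:", tone_level(x, fs, 1_000), "->", tone_level(y, fs, 1_000))
```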

Making more money could be part of the reason as well, considering film formats like DVD, Blu-ray, and in the future maybe Everspan with its 300 GB discs, all promising a sharper picture.

A similar development can be seen in digital photography imo, where you had grainy b&w pictures in the past, and nowadays the goal is to make pictures sharper and sharper without any grain. That's apart from the difference between JPEG and RAW (i.e. TIFF) files.
 
First, "digital recordings" is a bit unspecific. One also has to specify if the music is uncompressed, or compressed with one of the many available compression algorithms. As Scottie pointed out, MP3's for example are based upon psychoacoustic filtering; here, frequencies are intentionally dropped to get smaller file sizes. Of course an MP3 will have less fidelity than uncompressed (directly sampled) audio. Some people can hear this.

Second, there are also purely psychological effects based on belief which are not real. To give you one hilarious example: a family member told me that his friend believed that music CDs produced lower quality than vinyl records. For this reason, he coated his classical music CDs with transparent violin varnish! He was utterly convinced that these specially coated CDs would sound better when played back. But from a technical perspective, this is pure nonsense. Music on a CD is digital (and uncompressed at that). It is encoded as 0's and 1's, which are represented as differences in reflectivity in the silver layer. No type of varnish can make the laser read different 0's or 1's, so one ends up with exactly the same result.

However, the 'fidelity' of music has certainly changed over the years, not only because of changes in the practices of audio engineers, but also because of the evolution and improvement of electronics. In the 'early days', music amplifiers were still built with vacuum tubes. These did not have a perfectly linear correspondence between input and output; in short, they distorted the music slightly (they added frequencies due to their inherent nonlinearity). Today we have switched to highly optimized analog transistors, which are much more linear than vacuum tubes. This means that in theory we have higher fidelity today, because the characteristic distortion of the vacuum tubes is gone. Some people may hear this too, and miss it.
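To illustrate the "adding frequencies" part, here is a toy sketch (my illustration, not Data's; a tanh curve is only a crude stand-in for a nonlinear amplifier stage, and real tubes tend to favor even harmonics, which tanh does not produce): the slightly nonlinear "amp" puts energy at harmonics of the input tone, while the perfectly linear one does not.

```python
# Toy nonlinear-amplifier sketch: a soft-clipping transfer curve adds
# harmonics to a pure tone; an ideal linear amp adds nothing.
import numpy as np

fs = 44_100
t = np.arange(fs) / fs
tone = 0.8 * np.sin(2*np.pi*1_000*t)     # pure 1 kHz test tone

linear_amp = 1.5 * tone                  # ideal: output proportional to input
soft_clip  = np.tanh(1.5 * tone)         # gentle nonlinearity (stand-in only)

for name, sig in [("linear", linear_amp), ("nonlinear", soft_clip)]:
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1/fs)
    components = freqs[spec > 1e-3 * spec.max()]
    print(f"{name:9s} -> components (Hz): {np.round(components[:5])}")
# linear    -> [1000.]
# nonlinear -> [1000. 3000. 5000. 7000. ...]  (added odd harmonics)
```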

So, there are many aspects to look at here...
 
The audible difference lies in our ability to differentiate the strict numerical representation (digital) from the original sound at its source (analogue).

[Image: a sampled waveform drawn as a staircase of discrete steps approximating the smooth analog curve]


The representation gets more accurate depending on the sample rate and bit depth; a lower bit rate gives a less accurate representation of the wave.

Modern day digital formats are cleverer, in that they adapt to the variability of the wave and can use a variable bit rate within the same track, but I find the truest digital representation is either WAV or FLAC, though those take up far more disk space.

Edit: The method is called Pulse code modulation

re: https://en.wikipedia.org/wiki/Pulse-code_modulation
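A bare-bones sketch of what PCM (per the Wikipedia article above) amounts to: sample the waveform at a fixed rate, then round each sample to one of 2^bits levels. The fewer the bits, the coarser the staircase in the image above and the larger the quantization error. (This is my own minimal illustration, not code from the article.)

```python
# Minimal PCM quantization: round samples in [-1, 1] to 2**bits levels.
import numpy as np

def pcm_quantize(signal, bits):
    step = 2.0 / (2 ** bits)                # distance between adjacent levels
    return np.round(signal / step) * step

fs = 8_000                                  # sample rate (Hz)
t = np.arange(fs) / fs
analog = np.sin(2 * np.pi * 440 * t)        # stand-in for the analog source

for bits in (4, 8, 16):
    digital = pcm_quantize(analog, bits)
    rms_err = np.sqrt(np.mean((analog - digital) ** 2))
    print(f"{bits:2d}-bit PCM: RMS quantization error = {rms_err:.6f}")
# The 16-bit error is ~4096x smaller than the 4-bit error: the staircase
# becomes fine enough to vanish under other noise sources.
```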
 
Data said:
However, the 'fidelity' of music has certainly changed over the years, not only because of changes in the practices of audio engineers, but also because of the evolution and improvement of electronics. In the 'early days', music amplifiers were still built with vacuum tubes. These did not have a perfectly linear correspondence between input and output; in short, they distorted the music slightly (they added frequencies due to their inherent nonlinearity). Today we have switched to highly optimized analog transistors, which are much more linear than vacuum tubes. This means that in theory we have higher fidelity today, because the characteristic distortion of the vacuum tubes is gone. Some people may hear this too, and miss it.

It's not really accurate to blame vintage audio quality on tubes. Tubes can be used to make very low-distortion equipment; a good engineer can work with any part. The problem is not really the absolute distortion, but rather the distortion as perceived by the ear/brain system. Not all distortions are equal in this respect, and many would say the distortions of tubes are relatively benign compared to the distortions of transistors, which aren't necessarily more linear. Engineering practice has gotten more linear: it's how parts are used that has reduced distortion as technology developed, not so much the devices themselves improving. The same methods can be, and currently are being, applied to tubes.

If you look at audio technology, many things have improved since then. Speakers were not all that good for many reasons, microphones have changed too. But many people who listen to true analog with good modern equipment still think it sounds better.

But my personal interest isn't really limited to digital vs. analog. There is a lot of anecdotal information coming from experimenters suggesting that some aspects of how an audio system sounds might not have anything to do with how engineers think of a system. An engineer would hook up a signal analyzer and measure noise, distortion, etc. with a checklist in hand, and if all the boxes are ticked, he would say the distortion is so low that no one should be able to hear it. But sometimes you will make a change, and the system measures exactly the same, yet for some reason it becomes harsh and hard to listen to. Or there will just be a change in the level of clarity. But if you try to measure the difference, you won't find any. This is what bugs me.

There is plenty of scientific research to establish where the limits of our hearing are, and it is often used to dismiss the idea that there is anything wrong with "digital" sound, as well as many other Hi-fi concepts. But we know that science is often corrupted, and there are still a lot of angles not covered by the research.

At the same time, I have very little confidence in the explanations for this given by the hi-fi community. There is an enormous amount of incorrect explanation and badly misunderstood science in the hi-fi community, which really muddies the waters. I think they often try to shove a square peg into a round hole because they don't have any round pegs and don't know they exist.
 
The digitization itself has been shown not to be the issue.
A CD is 16 bits (2^16 = 65,536 quantization steps) sampled at 44.1 kHz (44,100 samples per second!). The bitrate works out to about 176 kB/s, or roughly 1,411 kbit/s (~1.4 Mbit/s).
The audible range is about 20 Hz - 20 kHz, though most systems can't reproduce bass below 100 Hz or so, and most adults can't hear treble above 16 kHz.
That resolution has been shown to be indistinguishable in blind testing using the same master recording.
(A lot of the time, original recordings get remastered when they are converted to digital formats, so in that case comparing the CD against the originally sold copies is not a proper comparison.)
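Working out the CD numbers quoted above (simple arithmetic, nothing more):

```python
# CD audio bitrate, from first principles.
sample_rate = 44_100    # samples per second, per channel
bit_depth   = 16        # bits per sample
channels    = 2         # stereo

bits_per_sec = sample_rate * bit_depth * channels
print(bits_per_sec)         # 1,411,200 bit/s  (~1.4 Mbit/s)
print(bits_per_sec // 8)    # 176,400 bytes/s  (~176 kB/s)
print(2 ** bit_depth)       # 65,536 quantization steps
```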

MP3s are also usually quite close to the original CD if they aren't filtered first and are encoded at 160 kbit/s or more.
AAC+ can be measurably transparent at 96 kbit/s for stereo. I found these stats when I was trying to find the best way to re-encode Blu-ray movies for my movie storage.
Movies use 5.1 audio (five channels plus one bass channel) at around 640 kbit/s, or about 1,500 kbit/s for a competing format.

Nowadays they are using higher bitrates for MP3s/AAC/movies, as storage has become cheap. But the quality is not better, due to the mixing, methinks.

So where did things go wrong? As was mentioned, a lot of dynamic compression has been used.

Being hypersensitive to sound, voices in particular, and to digitization issues (SOTT radio on BlogTalkRadio would drive me nuts!), I can notice a difference between digitized 70s music and modern music, even when the modern "retro indie rock" is played in the same fashion. Where I notice the issues is in the mixing: the compression clipping is too extreme, and I can "feel" it as pressure in the head, similar to the BlogTalk streams or mumbly voices like Noam Chomsky's (argh, even before I found out he was a gatekeeper).

There is also something I noticed the other day... Sometimes the balance is off, usually biased a bit to the left. I thought maybe it was my speakers, sound card, or ears, until I turned around and heard it louder on the right!?!! Mono test sounds came out perfectly balanced.

Then I fired up the Doors, the Who, King Crimson, and some classical music and compared while the thoughts were fresh.

Wow, it was night and day. The way the music used to be mixed was dynamic at the time: you felt like you were moving around the sound stage. From what I saw in some videos about bands, they would record in the studio with multiple mics in the room and use those to mix together the stereophonic effect.

Nowadays it feels static, like they pin one instrument to the left or right, and so on. That's because they use more tech to separate sounds, like a video game would do with its samples. It's like the music got broken down to its "molecules" and split apart to create the desired effect, instead of making the effect and recording that. A similar analogy is CGI vs. traditional special effects. CGI also bugs the crap out of me, sometimes looking more artificial than old-school effects!

Even being at a live concert is different, because some sounds come from different angles depending on tone and resonance. This is not happening in modern music. It feels flat despite being stereo or surround. It reminds me of the movies that try to recreate a first-person view... nowadays it's very much like having blinders on, looking straight ahead, while older movies gave you a wider, more human-like view through a character's eyes.

I kind of wonder if music, like movies, has been changing to become more flat due to the level of awareness of the artists and producers themselves. It's like the music itself takes on the characteristics of how they see (or don't see). Same for visual art; oh my god, the crap that is prevalent these days!
 
I believe that the problem is that "digital" is not the real music captured, but just an "interpretation" in zeros and ones. Maybe the real music energy information somehow travels with an analog recording in a way that can't be faked with digital music. Digital recording, in essence, is just faking.


The same goes for digital photography: it still can't beat real "capture the light" chemical photography.
 
Let's say for the sake of argument that digital is good enough that the sound pressure arriving at the ears is close enough to analog that they are essentially the same. Is that the only thing that matters?

The only special thing about a digital system is the ADC->software->DAC. All the rest is analog and is interchangeable with whatever you would have in a fully analog system. Digital converters all have their own special kinds of distortion. Many 16-bit DACs actually only have about 12 bits of useful output - the last 4 bits are scrambled and/or buried in noise.
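The "16 bits on paper, ~12 useful bits" idea is usually quantified as ENOB (effective number of bits), derived from a measured signal-to-noise-and-distortion ratio; the standard conversion is ENOB = (SINAD - 1.76) / 6.02. The SINAD figures below are illustrative, not measurements of any particular DAC:

```python
# Effective number of bits from a measured SINAD (dB).
def enob(sinad_db: float) -> float:
    return (sinad_db - 1.76) / 6.02

print(f"ideal 16-bit converter (SINAD ~98 dB): {enob(98.0):.1f} bits")
print(f"mediocre 16-bit DAC    (SINAD ~74 dB): {enob(74.0):.1f} bits")  # ~12 useful bits
```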

We know that EMF raises stress levels and causes strange things to happen in the body, and digital technology could certainly be an EMF source, so I see that as a potential explanation. It is easy to measure ultrasonic and RF noise on the output of almost all audio players. There are some things it doesn't explain, though, like I said in my last post.
 
