Session 24 January 2026

Wonderful session! And it sounds like we’ve got the all clear for the ol’ Mellotron* used by Floyd, Zeppelin, Moody Blues, etc!

*the Mellotron is an electric keyboard that plays loops of prerecorded strings, flutes, etc, (think In The Court Of The Crimson King).
 

I always liked "The Rain Song" by Zep, it's got wonderful mellotron string sounds by John Paul Jones, who was the quiet genius in the band. Funnily enough however, I always hated their 1979 album "In Through the Out Door", which is overloaded with absolutely ghastly, garish synth sounds. It totally ruined the sound of the band. Jones by that point had bought a polyphonic synth that had been popularised by Stevie Wonder, and he was determined to use it all over that terrible album. Apparently it was known as "the dream machine". Ugh, just thinking about that album makes me want to throw up. The stuff of nightmares more like!
 
Christopher Langan's story reminds me of William James Sidis, purportedly the most intelligent person to have ever lived (or at least the highest scorer on IQ tests).

They both developed highly original cosmological theories, but chose to live rather anonymously.
 
Thanks as always for the thought-provoking session.

The music question is imo indeed a can of worms. As a musician and recording/mixing engineer I have a few questions. Electronic music as defined by what, exactly? By electronic music, do they mean that all the parts of the song are made by synths?

Let's take Pink Floyd, for example: the majority of their songs are drums, bass, guitar and vocals (non-electronic) with some electronic elements as well. How does that balance out? Or does any electronic element make it "anti-human"?

The majority of the music I make is sampled (real) drums, guitar, my voice, bass, sampled orchestral strings, etc., with some fully electronic elements, so I am interested in how that works.

Also, when we record anything, we transform it from an analogue wave into digital; then we mix and master and export it to an MP3 or WAV file to be listened to on Spotify or CD or whatever. So does that process make it anti-human too? Digitising it? What about reverb and delay and all the other things we use to enhance the sound? EQ is in everything: it shapes the sound by boosting or cutting parts of an instrument's frequency range so it fits together with all the other instruments.
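To make the digitising question concrete, here is a minimal sketch of that chain (my own illustration, assuming Python with numpy, not anything from the session): sample a tone at 44.1 kHz, quantise it to the 16-bit integers used by CDs and WAV files, and apply a crude EQ by rescaling part of the spectrum. Real EQ plug-ins use recursive (biquad) filters rather than FFT bin scaling, but the principle, reshaping the balance of frequencies, is the same.

```python
import numpy as np

SR = 44_100  # CD-standard sample rate, samples per second

# The "analogue" source: a 440 Hz tone evaluated at discrete sample instants.
t = np.arange(SR) / SR                     # one second of sample times
wave = 0.5 * np.sin(2 * np.pi * 440 * t)

# A/D conversion: quantise the continuous amplitudes to 16-bit integers,
# the word length used on CD and in most WAV files.
pcm = np.round(wave * 32767).astype(np.int16)

# A crude "EQ": cut everything above 5 kHz by scaling those FFT bins down.
spectrum = np.fft.rfft(pcm.astype(float))
freqs = np.fft.rfftfreq(len(pcm), d=1 / SR)
spectrum[freqs > 5000] *= 0.25             # roughly a -12 dB high cut
filtered = np.fft.irfft(spectrum, n=len(pcm))
```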

If/when we're doing things correctly, the videos are helping to spur thought, reflection and questioning, rather than emotional engagement (though I can see how they skirt the informational/emotional border in a few instances). So, like any tool, what matters greatly, and counts for so much, is keeping firmly in mind the intent behind their (proper) use, and making the important distinction between what's informative and what isn't.

So we keep doing what we are doing?
 
“He proposes that reality is a self-simulating, self-configuring system where mind and universe are fundamentally identical and interconnected. Reality is like a self-processing language that designs and perceives itself. In short, the universe is cognitive at its foundation and self-contained: there's no external creator, no unexplained brute fact, and it evolves through goal-directed, that is, teleological processes that resolve paradoxes in physics, logic, and philosophy, while proving concepts like God as the global self-aware structure of reality. Now, the thing is, you know, I like Chris Langan, and I... you know, whatever. Is his model correct?”

The model is below. Consciousness created the material, physical reality the human body is in. We and everything in this reality are computed/data. Physical/material is not fundamental; consciousness is.

We are streams of data/consciousness "logged online": a unit of consciousness attached to this physical body in the physical reality.

If you read about the cosmic anthropic principle, it states that a few material conditions must exist to allow Earth, this physical, material planet, to exist; consciousness finely tuned this for existence, with the data carefully put in place to allow it. For what reason?

My guess: for the same reason a pilot would enter a simulator, to become better without risking anything. This consciousness creates the simulator, and we, the individual units of consciousness, transmit data through our free-will decisions back to the larger consciousness system so it can be a better creator.

A question for the Cs would be: what created consciousness, or what is beyond consciousness? I ask this question because I have never found anyone who has an answer, or even a close one. I do know there are other realities besides this Earth, but again, they are created by consciousness. What created consciousness?

 

Just thinking about it: when it comes to making music, the original source may be a factor. Samples are recordings of a human putting in the effort to make the sound being recorded: bowing the strings, hitting the drums, blowing the trumpet. There's energy there that's being captured in a sample.

A synthesizer creates a sound from electricity powering a generated waveform, such as a sine wave; there's no human element there. And if the question of analogue synth vs. digital synth comes up, there's still no human effort in generating the sound. There's no soul in a synth.

In my early opinion, a sample can be modified (without losing too much of its original energy?) because the original sound was still produced by a human. I also wonder if sounds generated by synths are anti-human because they create sounds that might not exist naturally in the universe, or because they generate a facsimile of a sound that does... or of what someone believes it should sound like.
 
So I asked Perplexity and got this response… so sine waves do exist "approximately" in nature; the others do not.

Natural waves are more complex and irregular, and they overlap, making less "clean" waves. So, does that mean digital waves are too "clean", too precise, and that's why they lack "soul"? Just a thought.

Computer‑generated music ultimately produces sound waves in the air, just like any other sound you hear. These are longitudinal pressure waves: regions of higher and lower air pressure moving outward from a speaker to your ears.

Inside the computer / synth

Before the sound reaches the speaker, it exists as electrical or digital waveforms:

  • Sine waves: Smooth, pure tones with only a fundamental frequency.
  • Square waves: Buzzy tones made of the fundamental plus odd harmonics.
  • Triangle waves: Softer than square, still using odd harmonics but weaker ones.
  • Sawtooth waves: Very bright and harsh, rich in many harmonics.
Digital instruments (synths, DAWs, plug‑ins) generate and combine these basic waveforms, then shape them with filters and effects to create different timbres.
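As an illustration of the above (my own sketch, not part of the Perplexity reply), those four basic waveforms are simple enough to generate in a few lines of Python with numpy; each line below is the standard textbook construction:

```python
import numpy as np

SR = 44_100                     # sample rate in Hz
t = np.arange(SR) / SR          # one second of sample times
f = 220.0                       # fundamental frequency (the A below A440)
phase = (f * t) % 1.0           # position within each cycle, from 0 to 1

sine = np.sin(2 * np.pi * f * t)         # smooth, one frequency only
square = np.sign(sine)                   # jumps between -1 and +1
sawtooth = 2.0 * phase - 1.0             # linear ramp with a sharp reset
triangle = 2.0 * np.abs(sawtooth) - 1.0  # linear up/down slopes
```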

From electrical to actual sound

  • The computer or synth outputs an electrical signal that has the same waveform shape as the sound you want.
  • The speaker converts that electrical waveform into vibrations of the speaker cone.
  • Those vibrations push and pull the air, creating longitudinal sound waves that your ears detect as pitch, loudness, and tone color.
So: inside the system you work with electronic/digital waveforms (sine, square, triangle, saw, etc.), and what you finally hear are sound pressure waves in air created from those waveforms.

Do the waves exist in the world naturally?
Pure waveforms like sine waves occur approximately in nature, such as in water ripples or simple string vibrations, but they're never perfectly ideal due to real-world imperfections.

Non-sinusoidal waveforms—square, triangle, and sawtooth—do not exist naturally in their ideal forms.

Why These Are Artificial

  • Square waves require instantaneous jumps, impossible without electronics; nature produces rounded transitions instead.
  • Triangle and sawtooth waves demand precise linear slopes and sharp resets, which don't happen organically due to inertia and friction.
  • They're mathematical constructs used in synthesis for their harmonic content (e.g., square has only odd harmonics; sawtooth has all harmonics); see the sketch after this list.
Natural sounds (voices, wind, instruments) are complex, irregular waveforms made of many overlapping partials, not these clean shapes.
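To see the "instantaneous jumps" point in practice, here is a small sketch (mine, not Perplexity's) that builds a square wave the way nature would have to: by adding up sine waves. With a few odd harmonics the corners come out rounded, exactly the kind of transition real physical systems produce; even with hundreds of harmonics the sum still overshoots and rings at every jump (the Gibbs phenomenon), so the ideal square wave is never reached.

```python
import numpy as np

SR = 44_100
t = np.arange(SR) / SR          # one second of sample times
f0 = 110.0                      # fundamental frequency

def square_partial(t, f0, n_harmonics):
    """Sum the first n odd harmonics of a square wave (Fourier series)."""
    wave = np.zeros_like(t)
    for k in range(n_harmonics):
        n = 2 * k + 1           # odd harmonics only: 1, 3, 5, ...
        wave += np.sin(2 * np.pi * n * f0 * t) / n
    return (4 / np.pi) * wave

rounded = square_partial(t, f0, 5)    # soft corners, mellow and "natural"
ringing = square_partial(t, f0, 200)  # near-square, but it still rings
                                      # (overshoots) at every transition
```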
 
Thanks for sharing the session.
The question of instrumental music versus electronic music is a can of worms. I like tales, and I immediately thought of the Pied Piper (instrumental), and of all the tales, even those of non-Western origin, that relate to the use of musical instruments such as zithers, flutes, ocarinas, and drums. They generally tend to focus not on the instrument but on the intent and the resonance with it: synchronizing resonance and determining, depending on the intent, whether to create reality and/or dreams as reality, to summon devils or saving entities, and/or to be used as weapons for good or evil. And it's not as simple as it seems, since it is the intent generating the resonance that determines the outcome.
 
It would have been interesting to get the C's take on the claim that music changed from a frequency of 432 Hz to 440 Hz around 1950, or on what the best frequency for humans is.
I knew that the frequency of 440 Hz was assigned to the "A" note on the piano (the first A above middle C). I also knew that all the instruments in an orchestra tune to this particular note, but I never understood why this was or how it came to be. So, out of curiosity, I asked Grok...

Prompt:
Tell me about 440hz sound frequency. Why was this assigned the natural sound of the A note, and why do orchestras tune to this note?

Grok:
The 440 Hz frequency refers to the standard pitch for the musical note A above middle C (often denoted as A4). This is known as concert pitch or A440, serving as a universal reference point for tuning instruments in Western music. It's the frequency at which the sound wave vibrates 440 times per second, producing that specific tone.

History and Assignment to the A Note

The assignment of 440 Hz to the A note isn't based on any inherent "natural" property of the universe but rather on historical conventions, practical compromises, and standardization efforts. Pitch standards varied widely for centuries, with different regions and eras using frequencies ranging from around 392 Hz (Baroque era) to over 450 Hz in some 19th-century settings. The push for uniformity arose from the needs of instrument manufacturing, international performances, and emerging technologies like broadcasting and recording.

The idea of tuning A to 440 Hz was first proposed in 1834 by German inventor Johann Heinrich Scheibler, who developed a precise pitch-measuring device called a tonometer. It was endorsed that same year at a conference of physicists in Stuttgart, Germany, as the "Scheibler pitch" or "Stuttgart pitch." However, adoption was slow. In 1859, France standardized to A=435 Hz by government decree to curb rising pitches that strained singers and instruments. Britain followed with A=439 Hz in 1896, partly because it was close to the French standard adjusted for temperature differences, but 439 (a prime number) proved tricky for electronic synthesis.

By the early 20th century, pitches were creeping higher for a "brighter" sound in larger venues, leading to inconsistencies. The American music industry informally adopted 440 Hz in 1926 for manufacturing. The American Standards Association (now ANSI) formalized it in 1936. An international conference in London in 1939, organized amid broadcasting demands for consistency, settled on 440 Hz as a compromise—close to existing traditions, factorable for electronic generation (unlike 439 Hz), and practical for global use. The International Organization for Standardization (ISO) reaffirmed it as ISO 16 in 1955 and 1975. The choice of the A note specifically as the reference stems from historical practice: tuning forks, a common tool for pitch reference since the 18th century, were typically made for A due to its central role in string instrument tuning and its position in the scale. It wasn't tied to any cosmic or "natural" resonance, despite modern myths (e.g., claims favoring 432 Hz as more harmonious with nature, which lack scientific backing and weren't factors in the standardization process).

Why Orchestras Tune to This Note

Orchestras tune to A440 because it's a convenient, shared reference that ensures all instruments align for harmonious play. Every string instrument (violins, violas, cellos, basses) has an open A string, making it easy to match without complex adjustments. The oboe typically provides the initial A note because its pitch is stable (due to its double-reed design and lack of easy adjustment) and its timbre cuts through the ensemble clearly. Winds and brass then tune to the strings, creating a unified sound.

While 440 Hz is the global standard, some orchestras (e.g., Berlin Philharmonic at 443 Hz or Vienna at 442 Hz) tune slightly higher for a brighter, more projecting tone in certain halls or repertoires, as higher tension on strings can enhance volume and brilliance. This isn't a rejection of 440 Hz but a slight variation for artistic reasons.
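A side note of my own (not part of Grok's reply): the practical content of "a universal reference point" is one small formula. In twelve-tone equal temperament every note is a fixed ratio, the twelfth root of two, away from its neighbour, all anchored to A4 = 440 Hz. A quick sketch in Python:

```python
# Equal temperament: each semitone is a factor of 2**(1/12),
# anchored at A4 = 440 Hz (MIDI note number 69).
def note_freq(midi_note: int, a4: float = 440.0) -> float:
    return a4 * 2 ** ((midi_note - 69) / 12)

print(note_freq(69))            # 440.0   -> A4, the tuning reference
print(round(note_freq(60), 2))  # 261.63  -> middle C
print(note_freq(57))            # 220.0   -> A3, exactly one octave down
```

Retuning an ensemble to A=432 (or Vienna's 442) amounts to changing that single anchor constant; every other note shifts with it.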

I bolded the parts that stood out for me - specifically the reference to 440 being better suited for electronic generation, the fact that different countries used to factor in temperature when adjusting the standard, and that the oboe has the most stable pitch and a timbre that clearly cuts through the ensemble.

The more you know...
 

Been thinking along similar lines about digital waves being too "clean" to have "soul".

Basically, a synth in most cases produces a sine wave, square wave, or saw wave as the basis (whether for electronic drum sounds, melodic sounds, etc.).

Now, perfect sine waves don't occur in nature, but all natural waveforms can theoretically be decomposed into sine waves (mathematically), i.e. these are sort of the "building blocks" of sound, interacting with each other in complex ways (like many different ripples on water producing patterns).

This means that using pure sine waves to create sound is sort of an artificial "deconstruction" that focuses acoustic power one-dimensionally.

Perhaps a good analogy is how a laser works: in nature, light is normally very complex, and it can be beautiful, like when you watch a sunrise. A laser takes a very small part of the spectrum and focuses it into a powerful beam, which can do great harm to the body. I mean, in laser surgery it is literally used to burn and cut away flesh, so there you go...

Similarly, a sine wave is an artificial powerful acoustic "beam" that doesn't really carry information, but just raw energy, which I suppose can bypass the body's natural state of being "tuned to the natural acoustic environment" and do harm.

Also note that if a pure sine wave as a building block is one end of the spectrum, noise is the other end (per Grok):

White noise (idealized) has a flat continuous spectrum → infinitely many sine waves with equal power density.

So the world of natural sound exists on a spectrum between a pure sine wave on one end and white noise at the other - both of which are very unpleasant to the ear: a pure sine wave is like a needle piercing the body, and white noise is like crashing into a wall. Beauty, creativity, and pleasant sound lie between those extremes in infinite perturbations.
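Both extremes are easy to check numerically. In this sketch (my own, assuming numpy), the spectrum of a pure sine is a single spike, with nearly all of its energy at one frequency, while white noise spreads its energy almost perfectly evenly; every musical sound sits somewhere between those two shapes.

```python
import numpy as np

SR = 44_100
t = np.arange(SR) / SR

sine = np.sin(2 * np.pi * 440 * t)                     # one extreme
noise = np.random.default_rng(0).standard_normal(SR)   # the other

for name, sig in (("sine", sine), ("white noise", noise)):
    mag = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(SR, d=1 / SR)
    peak = freqs[np.argmax(mag)]
    # What fraction of the spectrum sits within +/- 5 Hz of the peak?
    near = mag[np.abs(freqs - peak) < 5].sum() / mag.sum()
    print(f"{name}: peak at {peak:.0f} Hz, {near:.1%} of spectrum near it")
```

The sine prints close to 100%; the noise prints a fraction of a percent.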

Now, this could mean that if the synthesizer sound is heavily filtered, modulated etc., which it almost always is, this could make it less bad, since this introduces overtones, distortions etc., making the sound more "natural" (it's also more pleasing to the ear as a consequence). But the problem still remains that these sounds are built on an "unnatural" foundation, and parts of the negative consequences remain - you can still hear these basic "deconstructed" artificial waveforms in there.

"Anti-human" in this context means that here in 3D we live in a natural, "human" acoustic environment that carries information produced by processes that generate complex patterns to which we are attuned. Brute-forcing artificial sounds that use the "raw building blocks" into our environment introduces an "alien element" that "pierces" our natural human attunement, and is therefore "anti-human".

Some more on the math by Grok:

Prompt: Can all natural sounds theoretically be resolved into sine waveforms?

Yes, in theory, with very few practically relevant caveats, all natural sounds can be decomposed (resolved) into sine waves. This is one of the most powerful and useful results in mathematical physics and signal processing. Here's the reasoning broken down clearly:
  1. Fourier's theorem (periodic case)
    Any periodic waveform (repeating forever in both directions) with finite power can be exactly represented as an infinite sum of sine and cosine waves whose frequencies are integer multiples (harmonics) of the fundamental frequency.

  2. Fourier transform (non-periodic / finite-duration case)
    Virtually all real-world sounds are not perfectly periodic forever.
    The modern Fourier transform (or more precisely the continuous-time Fourier transform) generalizes the idea:
    Any "reasonable" sound pressure signal s(t) — meaning it has finite energy or at least finite power in any finite time interval — can be represented as a continuous superposition (integral) of sine and cosine waves (or complex exponentials) at every possible frequency, not just harmonics. Mathematically, under very mild conditions (s(t) ∈ L²(ℝ) or even just locally integrable with some decay), we have:s(t) = ∫_{-∞}^{∞} S(f) e^{i 2π f t} df
    (or equivalently using sines & cosines)where S(f) is the frequency content (spectrum).

  3. What "all natural sounds" satisfy
    Real physical sound signals are:
    • finite in duration (nothing vibrates from t = -∞ to +∞),
    • band-limited in practice (air molecules can't transmit arbitrarily high frequencies),
    • square-integrable over any finite time (∫ |s(t)|² dt < ∞ on any interval),
    → they fall comfortably into the class of functions to which the Fourier transform applies.
    → they can be represented, to any desired precision, as a sum/integral of sine waves of different frequencies, amplitudes, and phases.
Important practical & theoretical footnotes
  • Pure infinite-duration sine waves do not exist in nature — real sounds start and stop → their spectra have some spread (no Dirac delta at one exact frequency).
  • Infinite number of sine waves are usually needed for perfect representation (especially for transients, clicks, percussive sounds, or noise-like sounds like wind/water).
  • White noise (idealized) has a flat continuous spectrum → infinitely many sine waves with equal power density.
  • In digital audio we truncate both time and frequency range → we get a very good but never mathematically perfect reconstruction (Nyquist-Shannon applies here).
Bottom line (2026 perspective)
Yes: theoretically, every physically realizable natural sound (speech, bird song, thunder, ocean waves, a door slam, a violin note, etc.) can be exactly decomposed into (possibly continuously many) sine waves via the Fourier transform. This is not just a convenient approximation; it's a mathematically exact equivalence for all signals we encounter in the real world. The sine wave is special because:
  • it is the eigenfunction of linear time-invariant systems (resonators, filters, the ear, air propagation, etc.),
  • it is the only waveform that remains unchanged in shape when passed through a linear filter (only amplitude & phase change).
That's why nature "likes" to be analyzed in sine waves, even though pure sines almost never occur alone in the wild.
 
In terms of raising our own FRV, and maybe indirectly that of others just by being an inspiration or field of influence for them (without expectation, of course), and with regard to these voids like AI where emotions are sucked into nothingness, I have been wondering and pondering this: ...

This morning, I got up out of bed and felt gratitude. I noticed myself naturally giving thanks even to the empty toilet paper roll (you know, the cardboard cylinder) for having fulfilled the role of holding the toilet paper when it was still on there. I imagine the empty TP roll belongs to 1D but is not conscious, or cannot become conscious the way a house can become conscious.

Is feeling grateful for a toilet paper roll then actually draining positive emotions into a black hole, since those rolls cannot receive the thanks? Or is the gratefulness I feel filling my own heart, as I'm the one feeling it?

Honest question, by the way, no matter how funny it might seem.
 