Some comments on information theory

@ark I understand. I was referring to the question of whether the math, those equations we 'discover', existed before their discovery or not. And I found an example that proves it can. I know it's not a representation of our world.
It's impossible for us to write a computer program with an infinite set of rules, BUT... you mentioned prime numbers, and here is an interesting problem. Instead of an infinite set of prime numbers as the initial math of an example universe, we could write the initial universal math as the Sieve of Eratosthenes, which is a finite set of rules. It will lead to an infinite set of prime numbers as well.
 
@ark It will lead to an infinite set of prime numbers as well.
It will still be just a finite set of rules. This finite set of rules applied to a finite set of numbers creates a smaller set. But it will never create an infinite set from a finite set.
A simple rule that creates an infinite set is the rule n -> n+1. But it will not create an infinite set unless it is applied an infinite number of times.
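To make this concrete, here is a minimal Python sketch (my own illustration, not something from the posts above): a finite program that keeps applying the rule n -> n+1 together with a sieve-like divisibility check. At any given moment it has produced only a finite set of primes; the infinite set exists only as the limit of never stopping.

```python
# A finite set of rules (a short program) that enumerates primes one by one.
# It never holds the infinite set of primes; it only produces the next prime
# on demand, by applying the rule n -> n + 1 over and over.

def primes():
    """Unbounded, sieve-like prime generator built from finitely many rules."""
    found = []            # primes discovered so far - always a finite set
    n = 2
    while True:           # the rule n -> n + 1, applied again and again
        if all(n % p for p in found if p * p <= n):
            found.append(n)
            yield n
        n += 1

gen = primes()
print([next(gen) for _ in range(10)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```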
But perhaps this discussion takes me too far away from the main subject of this thread. I want to know what information is. I want to know what entropy is. And I hope to learn!
 
But perhaps this discussion takes me too far away from the main subject of this thread. I want to know what information is. I want to know what entropy is. And I hope to learn!
Then let us continue. Let us move towards entropy. I have waited a long time for this. However, we will begin atypically by discussing permutations.

Imagine a discrete Universe where space is divided into cells. There can be only one particle per cell. This assumption will apply to all of the examples in this post. To begin with, let's assume that we have an isolated system composed of two cells and two particles. Our particles and cells are distinguishable. Let’s number the particles as 1 and 2, and let the cells intuitively be numbered from left to right.

So we have the following situation:
1 2

In how many ways can we arrange the particles in the cells now? How many possible settings are there in this system? We see that there are only two:
1 2
and
2 1

So let us create a slightly more complicated system. Now we have three cells and three particles. How many possible settings are there? Let's list them:
1 2 3
1 3 2
2 1 3
2 3 1
3 1 2
3 2 1

So we have six possible settings. How many possible settings will be in a system that consists of four particles and four cells? And how many of them will be in a system with n particles and n cells?

I will answer this question. The general formula is n!. We need to know that

n!=n∙(n-1)∙(n-2)∙…∙1.

Hence we know that:
1!=1,
2!=2∙1=2,
3!=3∙2∙1=6,
4!=4∙3∙2∙1=24,
5!=5∙4∙3∙2∙1=120
and so on.
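For anyone who likes to check such things by machine, here is a small Python sketch (purely illustrative) that counts the arrangements by brute force and compares the count with n!:

```python
import math
from itertools import permutations

# Count the arrangements of n distinguishable particles in n cells directly,
# and compare with the factorial formula n! = n*(n-1)*...*1.
for n in range(1, 7):
    counted = sum(1 for _ in permutations(range(1, n + 1)))
    print(n, counted, math.factorial(n))   # the two numbers always agree
```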

This function is called factorial, and it is by no means a new invention. As a digression, I will quote an excerpt from Wikipedia:

The use of factorials is documented since the Talmudic period (200 to 500 CE), one of the earliest examples being the Hebrew Book of Creation Sefer Yetzirah which lists factorials (up to 7!) as a means of counting permutations. Indian scholars have been using factorial formulas since at least the 12th century. Siddhānta Shiromani by Bhāskara II (c. 1114–1185) mentioned factorials for permutations in Volume I, the Līlāvatī. Fabian Stedman later described factorials as applied to change ringing, a musical art involving the ringing of several tuned bells.

Why is that, and how does this apply to our settings? Well, when we have n cells and n particles, after assigning the first particle to a specific cell, the next particle has only n-1 possibilities. The one after that has just n-2, and so on. We can check this on any simple example, for instance n = 3.

Nevertheless, we must remember that 0! = 1. Why is it like that? If anyone is interested, I can write a separate comment on this. I will then write about the function as a mapping of sets. For now, however, we can ignore this and make this assumption. So 0! = 1. Let us remember this.

Now one of my favourite questions should be asked. When does matter begin (I mean classical matter, like solid or gas)? Is one atom already classical matter? Or maybe one molecule of water is classical matter? And 5 atoms? And 100 atoms/molecules? And 6.02214076∙10^23? But why 6.02214076∙10^23? Is that some kind of magic number? I will come back to this question in a moment.

First of all, what does it actually mean that we are dealing with matter? Well, I'm going to start with the statistical idea of entropy, which is applicable to thermodynamics and statistical physics, so I’m abandoning my beloved quantum world, because I want a macroscopic model, my world is to be Newtonian. Therefore, I will write my question more precisely: How many atoms or molecules do I need to be able to describe their behaviour in classical (Newtonian) terms?

Now let’s go back to our magic number: 6.02214076∙10^23. In physics and chemistry, this constant is called Avogadro’s constant and denoted by the symbol N_A. And more specifically:
N_A = 6.02214076∙10^23 mol^-1,
where mol^-1 is equal to 1/mol.

So it is nothing more than a school definition of a mole. One mole contains 6.02214076∙10^23 elementary objects. Elementary objects may be atoms, molecules, and even pieces of paper, unicorns or whatever we can think of. When we look at the periodic table of elements, we find, among others, information about their atomic mass. This atomic mass corresponds to the number of grams of the given substance in one mole. So we see that one mole of hydrogen weighs 1 gram, one mole of oxygen weighs 16 grams, hence one mole of H2O weighs 2 ∙ 1 g + 16 g = 18 g, and so on.
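As a toy check of this arithmetic (an illustration only, with the rounded school values of the molar masses):

```python
# Rounded school values, for illustration only.
N_A = 6.02214076e23        # Avogadro's constant, 1/mol
M_H, M_O = 1.0, 16.0       # molar masses of hydrogen and oxygen, g/mol

M_H2O = 2 * M_H + M_O      # 18 g per mole of water
molecules_per_gram = N_A / M_H2O
print(M_H2O, molecules_per_gram)   # 18.0, about 3.3e22 molecules in 1 g of water
```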

So when we talk about a mole of atoms or molecules, we have macroscopic quantities that obey the principles of classical mechanics. However, the question about the origin of matter still remains unanswered. Is half a mole already matter or not? And a quarter of a mole? Is this boundary sharp, or is it fluid? These are just more questions, but let’s make an assumption to make it easier to understand the basic concept of entropy.

Imagine a typical classical system. Suppose we have, for example, 10^25 atoms or molecules. Going back to our world that is made up of cells, we also have 10^25 cells.

How many possible settings do we have now? Let us look at the following table:
n        n!
0        1
1        1
2        2
3        6
4        24
5        120
6        720
7        5040
8        40320
9        362880
10       3628800
11       39916800
12       479001600
13       6227020800
14       87178291200
15       1307674368000
16       20922789888000
17       355687428096000
18       6402373705728000
19       121645100408832000
20       2432902008176640000
25       1.551121004×10^25
50       3.041409320×10^64
70       1.197857167×10^100
100      9.332621544×10^157
450      1.733368733×10^1000
1000     4.023872601×10^2567
3249     6.412337688×10^10000
10000    2.846259681×10^35659
25206    1.205703438×10^100000
100000   2.824229408×10^456573
205023   2.503898932×10^1000004
1000000  8.263931688×10^5565708
10^100   ≈ 10^10^101.9981097754820
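The larger entries in this table were of course not multiplied out by hand. A short Python sketch (my own, using math.lgamma) reproduces them up to rounding in the last digits, and also handles the 10^25 particles discussed below:

```python
import math

def log10_factorial(n):
    """log10(n!) computed from lgamma, so huge n poses no problem."""
    return math.lgamma(n + 1) / math.log(10)

for n in (25, 100, 1000, 1000000):
    lg = log10_factorial(n)
    exponent = math.floor(lg)
    mantissa = 10 ** (lg - exponent)
    print(f"{n}! ~ {mantissa:.9f} x 10^{exponent}")

# For n = 10**25 cells and particles (the system discussed below),
# log10(n!) is about 2.46e26, i.e. n! ~ 10^(10^26.39).
print(f"log10((10^25)!) ~ {log10_factorial(10**25):.3e}")
```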

Now let us move on to our everyday life. Let us imagine we have two paints - red and blue. Red paint consists of red particles, and blue paint of blue particles. We mix them together and get a violet colour. How long would we have to keep mixing for the paints to separate again, so that we once more have red paint and blue paint?

If we mix forever, we will surely succeed. However, let’s try to estimate the probability that we will be able to achieve the intended effect much faster. Let the number of all molecules equal the number of all cells and be equal to 10^25. We assume that molecules can move between cells in any way. We do not pay attention to whether the cells are close or far apart. Contrary to appearances, it is by no means so important when we consider such large numbers.

Our set of all possible settings now includes 10^25! elements. How many settings are there in which the colours are separated? Of course, more than one, maybe even a billion, but that is still nothing compared to the number 10^25!. Far more numerous are the settings in which the paints are not separated, and this is the fundamental idea of entropy. We say that the Universe tends towards disorder, but is it really so? Or does the Universe merely select states that do not seem ordered to us?
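Just how improbable "separated" is can be estimated on the back of an envelope. A Python sketch of the estimate (my own, and absurdly generous in assuming that as many as 10^100 of the settings count as separated):

```python
import math

N = 10**25                                          # particles = cells, as above
log10_total = math.lgamma(N + 1) / math.log(10)     # log10 of (10^25)!

# Be absurdly generous: suppose 10^100 of the settings look "separated".
log10_separated = 100.0

log10_probability = log10_separated - log10_total
print(f"log10(total settings) ~ {log10_total:.3e}")
print(f"log10(P(separated))   ~ {log10_probability:.3e}")
# P ~ 10^(-2.46e26): a decimal point followed by roughly 10^26 zeros
# before the first non-zero digit.
```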

As I mentioned in one of the previous posts, when we roll a die ten times and get the same number of dots ten times, we are surprised; this result seems different, interesting. However, it is just as likely as any other “uninteresting” result, which is no less unique.
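In numbers (an illustration): any particular sequence of ten rolls of a fair die, special-looking or not, has exactly the same probability, (1/6)^10:

```python
# Any specific sequence of ten rolls of a fair die has probability (1/6)**10,
# whether it looks special (6,6,6,6,6,6,6,6,6,6) or "random" (3,1,4,1,5,2,6,5,3,5).
p_specific = (1 / 6) ** 10
print(p_specific)   # ~1.65e-08, identical for both sequences
```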

This observation is the first secret of entropy. In the next post, I will mention Maxwell’s demon, who will want at all costs to find harmony within the chaos.
 
It will still be just a finite set of rules. This finite set of rules applied to a finite set of numbers creates a smaller set. But it will never create an infinite set from a finite set.
A simple rule that creates an infinite set is the rule n -> n+1. But it will not create an infinite set unless it is applied an infinite number of times.
But perhaps this discussion takes me too far away from the main subject of this thread. I want to know what information is. I want to know what entropy is. And I hope to learn!
Tony tried to describe the finite to infinity situation with Grothendieck universes like this:
Realistic Physics/Math can be described using Three Grothendieck universes:
1 - Empty Set - the seed from which everything grows.
2 - Hereditarily Finite Sets - computer programs, discrete lattices, discrete Clifford algebras, cellular automata, Feynman Checkerboards.
3 - Completion of Union of all tensor products of Cl(16) real Clifford algebra - a generalized hyperfinite II1 von Neumann factor algebra that, through its Cl(16) structure, contains such useful Physics/Math objects as:
    Spinor Spaces
    Vector Spaces
    BiVector Lie Algebras and Lie Groups
    Symmetric Spaces
    Complex Domains, their Shilov boundaries, and Harmonic Analysis
    E8 Lie Algebra
    Sl(8)xH92 Algebra (Contraction of E8)
    Classical Physics Lagrangian structures:
        Base Manifold
        Spinor Fermion term
        Standard Model Gauge Boson term
        MacDowell-Mansouri Gravity term
    Quantum Physics Hamiltonian/Heisenberg Algebra:
        Position/Momentum Spaces
        Gravity + SM boson Creation/Annihilation Operators
        Fermion Creation/Annihilation Operators

Basically, finite cellular automata (the Game of Life, elementary cellular automata) and the Feynman checkerboard (itself a type of cellular automaton) do have a Clifford algebra structure, and things like your complex domain work can show up here, but I have no idea what Tony is doing with something as large as Cl(16), though the Game of Life is an even larger thing. Something cellular-automata-related is, though, more likely fundamental than, like you say, Shannon entropy, which is kind of too statistical; it could be good for predicting how many black holes there likely are, but it is always talking in "likely" terms rather than something more exact, like a Clifford algebra derived from cellular automata.
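For readers who have never played with these objects: an elementary cellular automaton really is just a finite rule table applied over and over. A minimal Python sketch of rule 110 (illustration only; it has nothing to do with Tony's Cl(16) construction):

```python
# Elementary cellular automaton: a finite rule (here rule 110) applied to a row
# of cells, each new cell determined only by a cell and its two neighbours.
RULE = 110
rule_bits = [(RULE >> i) & 1 for i in range(8)]   # lookup table for the 8 neighbourhoods

def step(cells):
    n = len(cells)
    return [rule_bits[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

row = [0] * 31 + [1] + [0] * 31        # a single live cell in the middle
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)
```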
 
Tony tried to describe the finite to infinity situation with Grothendieck universes like this:
Tony Smith was a very bright person. I am rather slow. I do like Clifford algebras, for sure. My last paper on Clifford algebras was finally accepted for publication, but only after the referee wrote a report almost as long as my paper, checking everything in the corrected version. In the original version I had made several completely dumb errors. Yet I do not think that Clifford algebras and exotic mathematical structures will give us the final answer. Quantum theory is fundamental, and yet quantum theory is in big trouble. Consciousness and time must be explained, and neither Clifford algebras nor some E(8) or conformal groups will give us the answer either. The world is primarily made of information. The world is becoming, from potential to actual. But what is this "information"? And what is "becoming"? Even when I use mathematics, I never forget the need to answer fundamental questions. Where do the souls reside? I do not think they reside inside Clifford algebras or in conformal domains. Something entirely different is needed. Topology is mostly lost in quantum theory. But it is important. Nonlinearity. Unstable gravity waves. Who needs elementary particle masses in order to understand the paranormal? Souls communicate - how? Electromagnetic fields are not enough. Tables of quantum numbers of elementary particles are too much. The truth is somewhere in the middle.
 
Nevertheless, hitting the chosen point is by no means impossible. Why is it like that? As mentioned earlier, in the case of infinite sets, the probability of a given event is not an inherent feature of the set and the event, but depends on how the event is selected. By choosing fancifully, the probability can be manipulated. In particular, it makes no sense to talk about probability if we choose from an infinite set and do not specify how we choose an element.

I don’t understand the above - for me this is a contradiction: If the probability of an event (hitting the point with zero extension) is zero, how would it NOT be impossible to hit the point? Can you please clarify?


I am inclined to the hypothesis that the concept of time is true only for receivers with ability to perceive information.

Would that mean that radioactive decay only happens if there is an observer? But then, how ‘close’ does the observer need to be? And how clever? Is an animal enough? Because as far as we know, radioactive decay has existed since the creation of the Universe, or am I just assuming something here?

I am not very mathematically trained (high-school maths), so I don’t have the basis needed to understand all that is discussed. And I also lack SOA’s philosophically perspicacious mind.

But even so, I find this discussion fascinating! Thanks for starting this thread.
 
Some random thoughts that I hope are not too far out of the field. We all have an intuition of what information could be, but there are no formal definitions, the same way "energy" is used left and right without specifying what energy one is talking about. To me, information, in contrast to other physical quantities that have an ontological flavour to their definitions, could be approached by merging ontology and epistemology, and also the microscopic and the macroscopic.
On the one hand, we have quantum phenomena, from which we have the uncertainty principle. A particle (or a thing) doesn't disclose its whole story. For instance, it gives you either its position or its momentum, or a fuzzy mix of the two, but never both at the same time. Even time is information that it gives to the detriment of energy. Strange.
From the classical point of view, we have the Mach conjecture or paradox. I tried to refresh my memory about this one, but I couldn't find anything satisfactory on the internet. Basically, there is a frame of reference for the rotation of a body, which is the rest of the universe. If one takes away the rest of the universe, will the centrifugal forces on the body continue to apply, or will the body behave as if it were not rotating?

That's for rotation, but if we consider what an interaction between particles is, it's an exchange of momentum or energy. In a universe with one infinitesimally small particle, nothing happens. In a universe with two infinitesimally small particles, what happens? Do they interact? How would each particle sense the distance to the other in order to interact with it? Is there any sense to distance (and consequently time) in such a universe?
Now, with a universe of three particles, things start to become interesting. One particle may be more interacting than another (or "closer"), and the three particles can assume different relative configurations. Each particle can "know" where it stands in comparison to the two others. Is that the beginning of information? Is information at this level the awareness of each particle, or the self-awareness of the whole universe of its constituents? In this case one can maybe define interaction, space, and maybe time as the level of awareness these particles have of each other (I interact more with the particle I know more, therefore it's "closest" to me and we exchange stuff (like relative space), but since I'm also aware of the other particle, I interact with it too, and the different configurations are time). Or maybe it's the whole thing that is aware of the awareness of its constituents. If the particles have other attributes, the interactions are more complex and the information too. As information is clustered in small and large structures (in a universe that has more than three constituents), things become more complex, but I think one could maybe start thinking of information in these terms, through toy closed universes, then extend to complex open universes, or something like that. OSIT
 
My daily routine of reading news-aggregation websites 'accidentally' brought me to this today: https://aip.scitation.org/doi/full/10.1063/5.0064475

Later, Landauer speculated that the universe could be a giant computer simulating itself [4], an idea further developed in more recent studies by Lloyd [5–7]. Wheeler considered the universe made up of particles, fields, and information, proposing that everything emanates from the information inherent within it, i.e., “It from Bit” [8].

Vopson postulated that “information” is the fifth form of dominant matter in the universe along solid, liquid, gas, and plasma [23,24].

Due to the mass-energy-information equivalence principle [22], we postulate that information can only be stored in particles that are stable and have a non-zero rest mass, while interaction/force carrier bosons can only transfer information via waveform.

but it is important to mention that information could also be stored in other forms, including on the surface of the space–time fabric itself, according to the holographic principle [13].
 
When does matter begin (I mean classical matter, like solid or gas)? Is one atom already classical matter?
According to wiki:
"matter is any substance that has mass and takes up space by having volume. However it does not include massless particles such as photons, or other energy phenomena or waves such as light."

But later we read about the quantum interpretation, which is as follows:
"However this is only somewhat correct, because subatomic particles and their properties are governed by their quantum nature, which means they do not act as everyday objects appear to act – they can act like waves as well as particles and they do not have well-defined sizes or positions."

This seems analogous to my previous example with a world of 6-sided dice, where a single die behaved completely randomly when 'rolled', while an object made of 36 identical dice behaved in a more 'predictable' (we could call it 'solid') way.
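A quick simulation of that analogy (my own sketch, with 36 dice standing in for a 'solid' object): the average of 36 dice fluctuates about six times less than a single die.

```python
import random

random.seed(0)
TRIALS = 10_000

# One die: the outcome jumps all over 1..6.
single = [random.randint(1, 6) for _ in range(TRIALS)]

# "Object" made of 36 dice: its average stays close to 3.5 - more "solid".
block = [sum(random.randint(1, 6) for _ in range(36)) / 36 for _ in range(TRIALS)]

def spread(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"single die : mean {sum(single)/TRIALS:.2f}, spread {spread(single):.2f}")
print(f"36-dice avg: mean {sum(block)/TRIALS:.2f}, spread {spread(block):.2f}")  # ~6x smaller
```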
 
I have been thinking about something a bit outside the mathematical subject.

You postulated that it is mathematically just as likely to see the number 1111 as 1342 or 3245, etc...

On this I want to comment on something that happens to many members of this forum.

Many of us in a state of total anticipation (this is important), when we look at the clock, see the time 11:11, or 23:23, or 12:21, or 13:13, etc...

A minute later it would be 11:12, 23:24, etc., and it would no longer catch our attention.

What causes this?

The essential factor, I believe, is "consciousness."

There is only one minute in the day that marks 11:11 and just at that minute we are looking.

And ... when you look for it, the number is not there ...

Just a few thoughts.
 
@Wandering Star, @Cleopatre VII's example was for an even (uniform) distribution of probability over each event in a certain set.
What time you see on the clock has no such even distribution. For example, you sleep between 24:00 and 7:00 am, so the probability of you seeing a time in that range is very low, while you are much more likely to see a time between 7 am and midnight. But even that range does not have an even distribution of probability for seeing a particular time, because of the many other factors in your daily routine. Humans also have peripheral vision. And we more or less know what the time currently 'should be' from the position of the sun in the sky and other things.
 
Many of us in a state of total anticipation (this is important)
Here I wanted to say "a state of total absence of anticipation."

I believe that there is a part of our "consciousness" that has access to the "information field", and the absence of anticipation means that there is no "constriction". Information "flows."
 
@Wandering Star, @Cleopatre VII's example was for an even (uniform) distribution of probability over each event in a certain set.
What time you see on the clock has no such even distribution. For example, you sleep between 24:00 and 7:00 am, so the probability of you seeing a time in that range is very low, while you are much more likely to see a time between 7 am and midnight. But even that range does not have an even distribution of probability for seeing a particular time, because of the many other factors in your daily routine. Humans also have peripheral vision. And we more or less know what the time currently 'should be' from the position of the sun in the sky and other things.
However, it also happens with car license plates, which is perhaps a closer example.

In my country, car license plates have four digits and three letters: 3425-FGH, for example.

Well, 9999 license plate numbers would not catch my attention, but if I see a license plate with the number 0000 plus three letters, I assure you that I am left with my mouth open in surprise.

And if I see it twice in the course of an hour, even more so.

And I'm not looking to see this ... It just happens.

Well, like I said, just one thought.
 