Psalehesost
Cassiopaean Session 1997-12-31 mentions truth percentages; the following numbers were provided for the (older) Cassiopaean material and for the (old published version of) the Ra material:
Ra: ~63%
C's: 71.7%
The algorithm is: True word count divided by total word count. (Stated the other way around, the result would exceed 100%.)
Counts of true, false, and neutral words are possible (by non-mechanical means), according to the information in the session. (Since words can be individually classified, it follows that a count of neutral words is also possible.)
The neutral word count is unused in calculating the percentages. The C's said that neutral "belong to the 37% as they cannot be counted subjectively as accurate".
However, there is one more alternative. Instead of counting neutral words as true (the rejected option) or as false (the option the C's used), they can be subtracted from the total word count for a non-neutral word count.
Alternative algorithm: True word count divided by non-neutral word count.
This would increase both percentages. Given the difference in style, it would almost certainly increase the Ra percentage more than the Cassiopaean percentage.
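The two algorithms can be sketched as follows. The word counts here are hypothetical, chosen only to illustrate the arithmetic; the sessions give the final percentages, not the underlying counts.

```python
# Hypothetical word counts for illustration only -- the sessions provide
# the final percentages, not the underlying counts.
true_words = 717
false_words = 133
neutral_words = 150
total_words = true_words + false_words + neutral_words  # 1000

# Session algorithm: neutral words stay in the denominator,
# so they weigh against the result as if they were false.
session_pct = true_words / total_words * 100  # 71.7

# Alternative algorithm: neutral words are subtracted from the
# denominator, leaving only the non-neutral words.
alternative_pct = true_words / (true_words + false_words) * 100  # ~84.4

print(f"session algorithm:     {session_pct:.1f}%")
print(f"alternative algorithm: {alternative_pct:.1f}%")
```

Whatever the real counts are, the alternative can only raise a percentage (or leave it unchanged if there are no neutral words), and the more neutral words a source produces, the larger the increase.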
The Cassiopaean algorithm is bad because it lets noise skew the measurement instead of filtering it out. The means for filtering are already at hand if counts exist for all three word categories, as they apparently do.
This leads to the question: Why did the C's provide bad statistics, compared to what they could have provided with a trivial improvement of their algorithm? (Only the purely mechanical part of the process needed a small change.)
They also made a false claim in presenting the false dichotomy motivating their choice of algorithm (their choice of how to treat neutral words).
This is a riddle. This is exactly the kind of thing that someone can discern and point out on purely intellectual grounds. If the C's had wanted to lie, they could have presented a good algorithm and provided bad numbers, and there would have been no rational, clear-cut way to find and point out the flaw.
Instead, the C's did provide a bad algorithm, and this is the only thing I criticize regarding the percentages. The percentages are bad because the algorithm is flawed. Beyond that, other questions regarding the numbers remain as before.
Perhaps there is some kind of symbolic message in the choice of a bad algorithm. The theme is this: Counting the neutral as negative instead of counting according to (or focusing on) what matters. ("Counting" may symbolically be mapped to thinking, and/or perceiving, more generally. That's the track I'm exploring, anyway.)
I think it certainly wasn't an accident, whatever the specifics turn out to be. The C's know too much for it to be an accident. (Were it accidental, it would then be possible to immediately rule out the option that they are what they have claimed to be.) It may also be the case that they wanted the error to be found; otherwise, quite trivially, they could have presented something different.