Music, sound and resonance

Too many threads can have the opposite of the expected effect, so it's time to find a greatest hit!

Quote from: Cs on 23 Sept 2000:


Q: Now, let me get to MY questions! You once said that the core of DNA is an as yet
undiscovered enzyme related to carbon. Is that correct?
A: Yes.
Q: Here in this book it says: "Evidence is accumulating that only a relatively small portion of the
DNA sequence is for so-called structural genes. Structural genes lead to the production of
protein. There are an estimated 50,000 structural genes with an average size of approximately
5,000 base pairs, which then accounts for only 250 million of the estimated 3 billion base pairs.
What is the rest of the DNA for? Some of the DNA is so-called repetitive sequences, repeated
thousands of times. The function is unknown. The ALU repeat, for instance, contains over
300,000 copies of the same 300 base pair sequence. Certainly this DNA is not junk and plays
some important role in gene regulation, chromosomal architecture, or chromosomal replication.
Until 1977, it was thought that genes were single sequences of DNA that are coded into RNA and
then into protein. However, further study has shown greater complexity. It is now known that
there are pieces of DNA within a gene that are not translated into protein. These intervening
sequences, or INTRONS, are somewhat of a mystery, but appear to be a very common
phenomenon." Now, is this thing they are talking about, these INTRONS, are these the core that
you were talking about?
A: In part.
Q: What about this ALU repeat with over 300,000 copies of the same base pair sequence. What
is it?
A: Tribal unit.
Q: What is a tribal unit?
A: Sectionalized zone of significant marker compounds.
Q: What does this code for?
A: Physiological/spiritual union profile.
Q: Could you define "tribal" for me?
A: You define.
Q: What does the rest of the DNA code for that is not coding for structural genes? What else can
it be doing?
A: Truncated flow.
Q: Truncated flow of what?
A: Liquids.
Q: Liquids from where to where?
A: What is your sense?
Q: Well, what liquids?
A: Time for your input.
Q: Do some of these...
A: No. Not alright: we asked you a question!
Q: Okay. Truncated flow of liquids. I'm not even sure what that means. (A) Maybe something
was flowing and something cut it off and stopped it and it cannot be developed. It means that
something was cut. (L) Does truncated flow mean a flow of liquid that has been stopped?
A: Yes. Because of design alteration!
Q: Is this liquid that has been truncated a chemical transmitter?
A: Yes.
Q: And would this chemical transmitter, if it were allowed to flow, cause significant alterations
in other segments of the DNA?
A: Yes.
Q: So, there is a segment of code that is in there, that is deliberately inserted, to truncate this flow
of liquid, which is a chemical transmitter, or neuropeptide, which would unlock significant
portions of our DNA?
A: Close. Biogenetic engineering.
Q: I assume that this was truncated by the Lizzies and cohorts?
A: Close, but more likely Orion STS designers.
Q: Okay, can you tell us what this specific liquid or transmitter was that was truncated?
A: Think of the most efficient conductor of chemical compounds for low wave frequency charge.
Q: (A) Well, gold is one... (L) Acetylcholine?
A: No.
Q: (L) Water?
A: No.
Q: Saline?
A: Closer. It is a naturally bonding combination.
Q: (L) Well, I'll have to research it. The fact is, we've got 3 billion base pairs... do some of these
so-called segments of "junk DNA," if they were activated, would they instruct chromosomal
replication to take place with more than 23 pairs as a result?
A: In part.
Q: Is there anything we can do in terms of activities or...
A: No. Biogenetic engineering.
Q: Was my insight that I had one night that, at some point in time something may happen that
will turn genes on in our bodies that will cause us to physically transform, an accurate perception
of what could happen at the time of transition to 4th density?
A: For the most part, yes.
Q: Are there any limitations to what our physical bodies can transform to if instructed by the
DNA? Could we literally grow taller, rejuvenate, change our physical appearance, capabilities, or
whatever, if instructed by the DNA?
A: Receivership capability.
Q: What is receivership capability?
A: Change to broader receivership capability.
Q: (A) That means that you can receive more of something.
A: Close.
Q: (A) It means how good is your receiver.
A: Yes.
Q: (L) What is your receiver? The physical body?
A: Mind through central nervous system connection to higher levels.
Q: So, that is the whole issue of gaining knowledge and developing control over your
body. If your mind and CNS are tuned to higher levels of consciousness, that has
significance in terms of your receivership capability?
A: Close.

Something to have in mind and in bed:
Xman:
Think of the most efficient conductor of chemical compounds for low wave frequency charge.

At one point I thought perhaps that 'low wave frequency charge' might be sound. I am not sure, but with all that has arisen over the last year or so regarding FAR infrared, I am now thinking it may be FAR infrared. FAR infrared is indeed 'low wave frequency' as opposed to high-frequency electromagnetic waves (ultraviolet, x-ray, gamma ray). Not only is FAR infrared used in chlorophyll/photosynthesis, but we as warm-blooded animals apparently use FAR infrared within the little power centers of each and every cell (the mitochondria). So maybe this naturally bonding combination (compound) is a very efficient thermal conductor. And probably not just a thermal (infrared) conductor, but a good conductor of FAR infrared specifically, which would facilitate mitochondrial work of all kinds: cellular growth and repair, transcription of proteins, antibodies, removal of toxins.

Anyway, I am not sure, but the best elemental thermal conductors seem to be carbon (diamond), silver, gold, copper, aluminum, and magnesium. As far as FAR infrared conductors go, I am not sure, and since it is a compound it may be an oxide or, more likely, a salt of one of these. I know silver chloride is used as an infrared conductor (as a lens) in IR detection devices, and silver chloride does indeed get blocked in us. It might also be magnesium chloride, as I think it too has bands in the infrared and FAR infrared at which it is basically transparent (passes the wavelengths through, i.e. conducts).
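The ordering claimed above can be checked against rough textbook thermal conductivities. A quick sketch (the values are approximate room-temperature figures from memory, in W/(m·K), good enough for ordering only):

```python
# Approximate room-temperature thermal conductivities (W/(m·K)) for the
# elements listed above. Figures are rough textbook values, for ordering only.
conductivity = {
    "diamond (carbon)": 2200,
    "silver": 429,
    "copper": 401,
    "gold": 318,
    "aluminum": 237,
    "magnesium": 156,
}

# Print them best-conductor-first.
for name, k in sorted(conductivity.items(), key=lambda kv: -kv[1]):
    print(f"{name:>18}: ~{k} W/(m·K)")
```

Diamond tops the list by a wide margin, which is why carbon leads the ordering in the post above.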

So I am not sure, but the session itself seems to be talking about an actual chemical compound, and to say that we are bio-genetically altered so that this compound no longer flows freely. And if FAR infrared is indeed the light of life within us, cellularly and mitochondrially, then some compound that conducts FAR infrared would possibly make sense.

Well, keep on jumping.
Marcus Aurelius:
When I first read the session excerpt, the first thing I thought about was the Cs saying that suffering changes DNA. Since this liquid has the same ability, I was wondering whether there might be a relation.
My second thought came after reading Xman's quote of the session with the most important terms highlighted: DESIGN alteration. By design alteration, I understood a change in the form, the physical appearance of something. So I thought of G's kundabuffer too and the way it appeared. The angel Looisos helped to make something grow at the base of our spinal column, at the root of our former tail. And I came to the sacrum (5 fused vertebrae). Then I came to the spinal cord becoming just the filum terminale when approaching the sacrum. And I came to the recent realisation by scientists that the spinal cord can do more than what they previously thought: they are now thinking it is a complete extension of the central nervous system.
Then I thought about kundalini, related to the sacrum, coming up along the spinal column to connect all the chakras, and I thought again of the spinal cord. Perhaps I have been diverted there? Learning is fun.
P.S.: The sacral vertebrae and coccyx are not fused in babies; they progressively fuse during growth.
I have also heard that malevolent sorcerers in Africa use a type of energy stored in their coccyx to fly at night, but I need to confirm this information.
Gurdjieff's 'organ kundabuffer' approaches the theme from another angle. There, the organ is forcibly installed into man in order to generally anesthetize him against reality, so that he sees the insignificant as great and the great as insignificant. This event too is depicted as taking place at the very beginning of man's existence on Earth, in response to a cataclysmic situation. Again, we have radical shift of perception and cataclysm together.

In Beelzebub's Tales we have the constant theme of vibrations being required of the Earth. When man would not produce the right quality consciously, nature shifted the circumstance to cater for accidental shocks which would provide the required amount of flashes of awareness, or 'higher hydrogens.' The predator feeds man problems and crisis upon crisis. [Castaneda] 'Planetary influences arrange for wars and catastrophes simply to obtain required vibrations.' [Gurdjieff] Man may also play his role consciously, at least in theory, and thus be free from these arbitrary influences and serve the universe in another manner, suggests Gurdjieff.

Man is born sane and spoiled by contemporary education, inculcated with the 'values' of ego, hypocrisy, self-calming, empty wiseacrings, vanity and self-love. [Gurdjieff] When man reaches adult age, only the fringe of the glowing coat of awareness is left, barely covering the toes. This fringe is the center of self-reflection, the only awareness left to man. [Castaneda]
Quote from: Cs session

11-26-94

Q: (L) What was the true event behind the story of the "Mark of Cain?"

A: Advent of jealousy.

Q: (L) What occurred to allow jealousy to enter into human interaction?

A: Lizard takeover.

Q: (L) Wasn't the Lizard takeover an event that occurred at the time of the fall of Eden?

A: Yes.

Q: (L) Was this story of Cain and Abel part of that takeover?

A: Symbolism of story.

Q: (L) This was symbolic of the Lizzie takeover, the advent of jealousy, and the attitude of brother against brother, is that correct?

A: Partly. The mark of Cain means the "jealousy factor" of change facilitated by Lizard takeover of earth's vibrational frequency. Knot on spine is physical residue of DNA restriction deliberately added by Lizards. See?

Q: (L) You mean the area around the occipital ridge? The structures underneath?

A: Yes.

Q: (L) What was the configuration of the spine and skull prior to this addition?

A: Spine had no ridge there. Jealousy emanates from there, you can even feel it.

Q: (L) Do any of these emotions that we have talked about that were generated by DNA breakdown, were any of these related to what Carl Sagan discusses when he talks about the "Reptilian Brain"?

A: In a roundabout way.

Q: (L) Okay, at the time this "Mark of Cain" came about, were there other humans on the planet that did not have this configuration?

A: It was added to all simultaneously.

Q: (L) How did they physically go about performing this act? What was the mechanism of this event, the nuts and bolts of it?

A: DNA core is as yet undiscovered enzyme relating to carbon. Light waves were used to cancel the first ten factors of DNA by burning them off. At that point, a number of physical changes took place including knot at top of spine. Each of these is equally reflected in the ethereal.

Q: (L) Well, the question I do have is, how many people were there on the planet and did they have to take each one and do this individually? How did they effect this change on all of them?

A: Light wave alteration.

Q: (L) And light waves, actual light waves, affect DNA?

A: Yes.

Q: (T) What was the origin of the light waves?

A: Our center. Our realm. STO. The Reptilian beings used sophisticated technology to interrupt light frequency waves.

With:

Scientists take a step towards uncovering the histone code

http://www.rdmag.com/News/Feeds/2009/12/life-sciences-scientists-take-a-step-towards-uncovering-the-hist/?wnnvz=1737,01270977585


Researchers at Emory Univ. School of Medicine have determined the structures of two enzymes that customize histones, the spool-like proteins around which DNA coils inside the cell.

The structures provide insight into how DNA's packaging is just as important and intricate as the information in the DNA itself, and how these enzymes are part of a system of inspectors making sure the packaging is in order.

The results are published online this week in the journal Nature Structural and Molecular Biology.

A team of scientists led by Xiaodong Cheng, PhD, professor of biochemistry at Emory and a Georgia Research Alliance eminent scholar, used X-rays to probe the architecture of two enzymes, PHF8 and KIAA1718. The enzymes are known as histone demethylases because they remove methyl groups (chemical modifications of a protein) from histones.

Mutations in the gene encoding one of the enzymes, PHF8, cause a type of inherited mental retardation. Understanding how PHF8 works may help doctors better understand or even prevent mental retardation.

Many biologists believe the modifications on histones are a code, analogous to the genetic code. Depending on the histones' structure, access to DNA in the nucleus can be restricted or relatively free. The idea is: the modifications tell enzymes that act on DNA valuable information about getting to the DNA itself.

"This work represents a step toward uncovering the molecular basis for how demethylases handle multiple signals on histones," says Paula Flicker, PhD, who oversees cell signaling grants at the National Institutes of Health's National Institute of General Medical Sciences. "Knowledge of how these complex signals help govern patterns of gene activity will bring us closer to understanding how cells determine their identity during development."

To understand histone demethylases' role in the cell, Cheng says, think of the cell as a library with thousands of books in it.

"To find a particular book in a library, you need some signs telling you how the stacks are organized," he says. "Similarly, the machinery that reads DNA needs some guidance to get to the right place."

Histones have a core that the DNA wraps around and flexible tails extending beyond the core. The cells' enzymes attach a variety of bells and whistles--methyl groups are just one--to the histone tails to remind the cell how to handle the associated DNA.

Methyl groups mean different things depending on where they are on the histone. In addition, the modifications vary from cell to cell. In the brain, for example, the modifications on a particular gene might signal "this gene should be read frequently," and in muscle, a different set of modifications will say "keep quiet."

"What these enzymes do is make sure all the signs are consistent with each other," Cheng says. "If a sign is out of place, they remove it."

PHF8 and KIAA1718 are each made up of two attached modules. One module (called PHD) grabs a histone tail with a methyl group on it, while the other module (Jumonji) removes a methyl group from somewhere else on the tail.

Scientists previously knew the structures of the methyl-binding and methyl-removing modules in isolation. What is new is seeing how the modules are connected and how one part regulates the other, Cheng says.

Biomiast:
MicroRNAs are small RNA molecules, 18-25 nucleotides long. They are used for RNA interference, which means that when an mRNA is produced, a microRNA can bind to that mRNA (if their sequences are complementary) and degrade it with the help of the RISC complex. Consider it like sending a messenger somewhere, but the messenger is killed along the way, so the message is never received. This implies that even if you activate your DNA, there are mechanisms in the cellular structure that prevent that DNA from becoming protein by interrupting the message. Therefore it cannot be functional and you cannot use it.

After reading the article above and what the C's said about Alu elements, I think the Alu elements and microRNAs work together to prevent the activated DNA's functionality. This is of course just one of my ideas along the way, so not necessarily true, only a possibility, but here it goes: when DNA is activated it can produce functional proteins without RNA interference, because it is not complementary to the microRNAs. But when this DNA gives signs of activation, Alu elements jump from their location and insert themselves into that DNA. When that DNA becomes mRNA, it now carries a sequence complementary to the microRNAs, as stated in the abstract of the above paper: "Base-pair complementarity could be demonstrated between the seed sequence of a subset of human microRNAs and Alu repeats that are integrated parallel (sense) in mRNAs. The most common target site coincides with the evolutionary most conserved part of Alu."

Because of this complementary sequence, the microRNA can now destroy our beneficial mRNA and hinder our progress. I think when the C's said "truncated flow of liquids" it is not any one specific thing like a neurotransmitter, but all kinds of proteins that could end our slavery. When Laura said saline, they said "closer" because it is basically saline with additional proteins in it. What are these proteins? I suspect there are a lot of them, and since they are not synthesized in us, I do not think they are known to us. Maybe one day we can learn everything about them...
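The seed-complementarity mechanism described above can be sketched in a few lines. A minimal illustration (the sequences are invented toys, not real miRNA or Alu data, and real target prediction also weighs wobble pairing, site context, and binding energy):

```python
# Toy sketch of microRNA seed matching: a miRNA "seed" (nucleotides 2-8)
# binds an mRNA site that is reverse-complementary to it; a match flags
# the mRNA for RISC-mediated silencing, as described in the post above.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA sequence (Watson-Crick pairing)."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def seed_matches(mirna: str, mrna: str) -> bool:
    """True if the miRNA seed (positions 2-8, 5'->3') has a perfect
    complementary site anywhere in the mRNA (both given 5'->3')."""
    seed = mirna[1:8]                       # positions 2-8 as a 0-indexed slice
    target_site = reverse_complement(seed)  # what the mRNA must contain
    return target_site in mrna

# Invented example sequences, for illustration only:
mirna = "UAGCUUAUCAGACUGAUGUUGA"            # 22 nt, a typical miRNA length
mrna_with_site = "AAGG" + reverse_complement(mirna[1:8]) + "CCUU"

print(seed_matches(mirna, mrna_with_site))      # this mRNA would be silenced
print(seed_matches(mirna, "AAAAUUUUGGGGCCCC"))  # no seed site, left alone
```

In Biomiast's scenario, an Alu insertion is what smuggles such a complementary site into an otherwise miRNA-proof transcript.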
Another Hit for the Cassiopaeans - DNA
 
:cuckoo: As said before, let's go back and take a turn in the Wiki cave.

Auditory cortex

The primary auditory cortex is the part of the temporal lobe that processes auditory information in humans and other vertebrates. It is a part of the auditory system, performing basic and higher functions in hearing.

Like many areas in the neocortex, the functional properties of the adult primary auditory cortex (A1) are highly dependent on the sounds encountered early in life. This has been best studied using animal models, especially cats and rats. In the rat, exposure to a single frequency during postnatal days (P) 11 to 13 can cause a 2-fold expansion in the representation of that frequency in A1.[6] Importantly, the change is persistent, in that it lasts throughout the animal's life, and specific, in that the same exposure outside of that period causes no lasting change in the tonotopy of A1.
More on A1 here: http://www.jneurosci.org/content/27/1/180 and in work on tonotopic reorganisation of developing auditory brainstem circuits.

Function
As with other primary sensory cortical areas, auditory sensations reach perception only if received and processed by a cortical area. Evidence for this comes from lesion studies in human patients who have sustained damage to cortical areas through tumors or strokes,[7] or from animal experiments in which cortical areas were deactivated by surgical lesions or other methods.[8] Damage to the primary auditory cortex in humans leads to a loss of any awareness of sound, but an ability to react reflexively to sounds remains, as there is a great deal of subcortical processing in the auditory brainstem and midbrain.[9][10][11]

Neurons in the auditory cortex are organized according to the frequency of sound to which they respond best. Neurons at one end of the auditory cortex respond best to low frequencies; neurons at the other respond best to high frequencies. There are multiple auditory areas (much like the multiple areas in the visual cortex), which can be distinguished anatomically and on the basis that they contain a complete "frequency map." The purpose of this frequency map (known as a tonotopic map) is unknown, and is likely to reflect the fact that the cochlea is arranged according to sound frequency. The auditory cortex is involved in tasks such as identifying and segregating auditory "objects" and identifying the location of a sound in space.
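The tonotopic map described above is roughly logarithmic in frequency: equal frequency *ratios*, not equal differences, map to roughly equal distances along the cortex. A minimal sketch (the 20 Hz to 20 kHz range and the normalized 0-1 axis are illustrative conventions, not measured cortical coordinates):

```python
import math

def tonotopic_position(freq_hz: float, f_low: float = 20.0,
                       f_high: float = 20000.0) -> float:
    """Map a frequency to a normalized position (0.0-1.0) along a
    logarithmic tonotopic axis: 0.0 = low-frequency end, 1.0 = high-
    frequency end. Each decade of frequency spans the same distance."""
    return math.log(freq_hz / f_low) / math.log(f_high / f_low)

for f in (20, 200, 2000, 20000):
    print(f"{f:>5} Hz -> position {tonotopic_position(f):.2f}")
```

With these bounds, each tenfold jump in frequency (20, 200, 2000, 20000 Hz) lands at evenly spaced positions (0.00, 0.33, 0.67, 1.00), which is the sense in which neighboring cells respond to neighboring frequencies.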

Human brain scans have indicated that a peripheral bit of this brain region is active when trying to identify musical pitch. Individual cells consistently get excited by sounds at specific frequencies, or multiples of that frequency.

The auditory cortex plays an important yet ambiguous role in hearing. When the auditory information passes into the cortex, the specifics of what exactly takes place are unclear. There is a large degree of individual variation in the auditory cortex, as noted by biologist James Beament, who wrote, “The cortex is so complex that the most we may ever hope for is to understand it in principle, since the evidence we already have suggests that no two cortices work in precisely the same way."[12]

In the hearing process, multiple sounds are transduced simultaneously. The role of the auditory system is to decide which components form the sound link. Many have surmised that this linkage is based on the location of sounds. However, there are numerous distortions of sound when reflected off different media, which makes this thinking unlikely. The auditory cortex forms groupings based on fundamentals; in music, for example, this would include harmony, timing, and pitch.[13]

The primary auditory cortex lies in the superior temporal gyrus of the temporal lobe and extends into the lateral sulcus and the transverse temporal gyri (also called Heschl's gyri). Final sound processing is then performed by the parietal and frontal lobes of the human cerebral cortex. Animal studies indicate that auditory fields of the cerebral cortex receive ascending input from the auditory thalamus, and that they are interconnected on the same and on the opposite cerebral hemispheres.

The auditory cortex is composed of fields which differ from each other in both structure and function.[14] The number of fields varies in different species, from as few as 2 in rodents to as many as 15 in the rhesus monkey. The number, location, and organization of fields in the human auditory cortex are not known at this time. What is known about the human auditory cortex comes from a base of knowledge gained from studies in mammals, including primates, used to interpret electrophysiological tests and functional imaging studies of the brain in humans.

When each instrument of a symphony orchestra or the jazz band plays the same note, the quality of each sound is different — but the musician perceives each note as having the same pitch. The neurons of the auditory cortex of the brain are able to respond to pitch. Studies in the marmoset monkey have shown that pitch-selective neurons are located in a cortical region near the anterolateral border of the primary auditory cortex. This location of a pitch-selective area has also been identified in recent functional imaging studies in humans.[15][16]

The primary auditory cortex is subject to modulation by numerous neurotransmitters, including norepinephrine, which has been shown to decrease cellular excitability in all layers of the temporal cortex. Alpha-1 adrenergic receptor activation by norepinephrine decreases glutamatergic excitatory postsynaptic potentials at AMPA receptors.

Relationship to the auditory system
[Figure: areas of localization on the lateral surface of the hemisphere; motor area in red, area of general sensations in blue, auditory area in green, visual area in yellow.]

The auditory cortex is the most highly organized processing unit of sound in the brain. This cortex area is the neural crux of hearing, and—in humans—language and music. The auditory cortex is divided into three separate parts: the primary, secondary, and tertiary auditory cortex. These structures are formed concentrically around one another, with the primary cortex in the middle and the tertiary cortex on the outside.

The primary auditory cortex is tonotopically organized, which means that neighboring cells in the cortex respond to neighboring frequencies.[18] Tonotopic mapping is preserved throughout most of the audition circuit. The primary auditory cortex receives direct input from the medial geniculate nucleus of the thalamus and thus is thought to identify the fundamental elements of music, such as pitch and loudness.

An evoked response study of congenitally deaf kittens by Klinke et al. utilized local field potentials to measure cortical plasticity in the auditory cortex. These kittens were stimulated and measured against a control (an un-stimulated congenitally deaf cat (CDC)) and normal hearing cats. The field potentials measured for artificially stimulated CDCs were eventually much stronger than those of a normal hearing cat.[19] This finding accords with a study by Eckart Altenmüller, in which it was observed that students who received musical instruction had greater cortical activation than those who did not.[20]

The auditory cortex has distinct responses to sounds in the gamma band. When subjects are exposed to three or four cycles of a 40 hertz click, an abnormal spike appears in the EEG data, which is not present for other stimuli. The spike in neuronal activity correlating to this frequency is not restrained to the tonotopic organization of the auditory cortex. It has been theorized that gamma frequencies are resonant frequencies of certain areas of the brain, and appear to affect the visual cortex as well.[21] Gamma band activation (25 to 100 Hz) has been shown to be present during the perception of sensory events and the process of recognition. In a 2000 study by Kneif and colleagues, subjects were presented with eight musical notes from well-known tunes, such as Yankee Doodle and Frère Jacques. Randomly, the sixth and seventh notes were omitted, and an electroencephalogram as well as a magnetoencephalogram were each employed to measure the neural results. Specifically, the presence of gamma waves, induced by the auditory task at hand, was measured from the temples of the subjects. The OSP response, or omitted stimulus response, was located in a slightly different position: 7 mm more anterior, 13 mm more medial, and 13 mm more superior with respect to the complete sets. The OSP recordings were also characteristically lower in gamma waves, as compared to the complete musical set. The evoked responses during the sixth and seventh omitted notes are assumed to be imagined, and were characteristically different, especially in the right hemisphere.[22] The right auditory cortex has long been shown to be more sensitive to tonality, while the left auditory cortex has been shown to be more sensitive to minute sequential differences in sound, such as in speech.

Tonality is represented in more places than just the auditory cortex; one other specific area is the rostromedial prefrontal cortex (RMPFC).[23] Janata et al., in their 2002 study, explored the areas of the brain which were active during tonality processing by means of the fMRI technique. The results of this experiment showed preferential blood-oxygen-level dependent activation of specific voxels in RMPFC for specific tonal arrangements. Though these collections of voxels do not represent the same tonal arrangements between subjects, or within subjects over multiple trials, it is interesting and informative that RMPFC, an area not usually associated with audition, seems to code for immediate tonal arrangements in this respect. RMPFC is a subsection of the medial prefrontal cortex, which projects to many diverse areas including the amygdala, and is thought to aid in the inhibition of negative emotion.

Neuronal encoding of sound

The neuronal encoding of sound is the representation of auditory sensation and perception in the nervous system.
Nerve fibers from the cochlea
There are two types of afferent neurons found in the cochlear nerve: Type I and Type II. Each type of neuron has specific cell selectivity within the cochlea.[8] The mechanism that determines the selectivity of each type of neuron for a specific hair cell has been proposed by two diametrically opposed theories in neuroscience, known as the peripheral instruction hypothesis and the cell autonomous instruction hypothesis. The peripheral instruction hypothesis states that phenotypic differentiation between the two neurons is not made until after these undifferentiated neurons attach to hair cells, which in turn dictate the differentiation pathway. The cell autonomous instruction hypothesis states that differentiation into Type I and Type II neurons occurs following the last phase of mitotic division but preceding innervation.[8] Both types of neuron participate in the encoding of sound for transmission to the brain.
Type I neurons
Type I neurons innervate inner hair cells. There is significantly greater convergence of this type of neuron towards the basal end in comparison with the apical end.[8] A radial fiber bundle acts as an intermediary between Type I neurons and inner hair cells. The ratio of innervation that is seen between Type I neurons and inner hair cells is 1:1 which results in high signal transmission fidelity and resolution.[8]

Type II neurons

Type II neurons, on the other hand, innervate outer hair cells. However, there is significantly greater convergence of this type of neuron towards the apical end in comparison with the basal end. A 1:30-60 ratio of innervation is seen between Type II neurons and outer hair cells, which in turn makes these neurons ideal for electromechanical feedback.[8] Type II neurons can be physiologically manipulated to innervate inner hair cells, provided outer hair cells have been destroyed, either through mechanical damage or through chemical damage induced by drugs such as gentamicin.

Auditory system
The auditory system is the sensory system for the sense of hearing. It includes both the sensory organs (the ears) and the auditory parts of the sensory system.
Simplified, nerve fibers' signals are transported by bushy cells to the binaural areas in the olivary complex, while signal peaks and valleys are noted by stellate cells, and signal timing is extracted by octopus cells.

The lateral lemniscus has three nuclei: dorsal nuclei respond best to bilateral input and have complexity-tuned responses; intermediate nuclei have broad tuning responses; and ventral nuclei have broad and moderately complex tuning curves. Ventral nuclei of the lateral lemniscus help the inferior colliculus (IC) decode amplitude-modulated sounds by giving both phasic and tonic responses (short and long notes, respectively).

The IC receives inputs not shown, including visual areas (pretectal area: moves eyes to sound; superior colliculus: orientation and behavior toward objects, as well as eye movements (saccades)), the pons (superior cerebellar peduncle: thalamus-to-cerebellum connection, to hear a sound and learn a behavioral response), the spinal cord (periaqueductal grey: hear a sound and instinctually move), and the thalamus. These are what implicate the IC in the 'startle response' and ocular reflexes. Beyond multi-sensory integration, the IC responds to specific amplitude modulation frequencies, allowing for the detection of pitch. The IC also determines time differences in binaural hearing.

The medial geniculate nucleus divides into ventral (relay and relay-inhibitory cells: frequency, intensity, and binaural info topographically relayed), dorsal (broad and complex tuned nuclei: connection to somatosensory info), and medial (broad, complex, and narrow tuned nuclei: relay intensity and sound duration) divisions. The auditory cortex (AC) brings sound into awareness/perception. The AC identifies sounds (sound-name recognition) and also identifies the sound's origin location. The AC is a topographical frequency map, with bundles reacting to different harmonies, timing, and pitch.
The right-hand-side AC is more sensitive to tonality; the left-hand-side AC is more sensitive to minute sequential differences in sound. The rostromedial and ventrolateral prefrontal cortices are involved in activation during tonal space processing and in storing short-term memories, respectively. Heschl's gyrus (the transverse temporal gyrus) includes Wernicke's area and its functionality; it is heavily involved in emotion-sound, emotion-facial-expression, and sound-memory processes. The entorhinal cortex is the part of the 'hippocampus system' that aids and stores visual and auditory memories. The supramarginal gyrus (SMG) aids in language comprehension and is responsible for compassionate responses. The SMG links sounds to words with the angular gyrus and aids in word choice; it integrates tactile, visual, and auditory info.

Olivary body

Basal view of a human brain
https://en.wikipedia.org/wiki/Olivary_body#/media/File:Gehirn,_basal_-_beschriftet_lat.svg

Perception of sound is associated with the left posterior superior temporal gyrus (STG). The superior temporal gyrus contains several important structures of the brain, including Brodmann areas 41 and 42, marking the location of the primary auditory cortex, the cortical region responsible for the sensation of basic characteristics of sound such as pitch and rhythm. We know from work in nonhuman primates that primary auditory cortex can probably itself be divided further into functionally differentiable subregions.[38][39][40][41][42][43][44] The neurons of the primary auditory cortex can be considered to have receptive fields covering a range of auditory frequencies and have selective responses to harmonic pitches.[45] Neurons integrating information from the two ears have receptive fields covering a particular region of auditory space.

The primary auditory cortex is surrounded by secondary auditory cortex, and interconnects with it. These secondary areas interconnect with further processing areas in the superior temporal gyrus, in the dorsal bank of the superior temporal sulcus, and in the frontal lobe. In humans, connections of these regions with the middle temporal gyrus are probably important for speech perception. The frontotemporal system underlying auditory perception allows us to distinguish sounds as speech, music, or noise.


Precedence effect

The precedence effect or law of the first wavefront is a binaural psychoacoustic effect. When a sound is followed by another sound separated by a sufficiently short time delay (below the listener's echo threshold), listeners perceive a single fused auditory image; its perceived spatial location is dominated by the location of the first-arriving sound (the first wave front). The lagging sound also affects the perceived location. However, its effect is suppressed by the first-arriving sound.
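The gating described above can be sketched as a toy classifier. The thresholds below are illustrative assumptions only; real echo thresholds depend on the signal, ranging from a few milliseconds for clicks to tens of milliseconds for speech and music.

```python
# Toy sketch of precedence-effect perception (NOT a perceptual model).
# ECHO_THRESHOLD_MS is an assumed value for this illustration.
ECHO_THRESHOLD_MS = 30.0

def perceived(lead_ms, lag_ms):
    """Classify how a lead/lag sound pair would be heard, per the effect."""
    delay = lag_ms - lead_ms
    if delay < 1.0:
        return "summing localization"     # below ~1 ms the two sources blend
    if delay < ECHO_THRESHOLD_MS:
        return "fused at leading source"  # precedence: first wavefront wins
    return "two separate events"          # the lag is heard as a discrete echo

print(perceived(0.0, 10.0))   # fused at leading source
print(perceived(0.0, 50.0))   # two separate events
```

The point of the sketch is only the ordering of regimes: summing localization, then fusion dominated by the first wavefront, then a separately perceived echo.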


Franssen effect
Franssen effect F1

Setup
There are two speakers to the left and right of the listener, each about 1 meter from the listener at approximately 45° angles.
Producing the illusion

The left speaker suddenly begins to produce a sharp pure tone. The two speakers are then driven complementarily: as one's level decreases, the other's increases. The left speaker's level decays exponentially, and the right speaker becomes the main source of the sound. The interesting illusion is that the listener perceives the sound as coming only from the left speaker, even though the right speaker has been on most of the time.

Franssen effect F2


Experiment
Inside a room (auditorium) there are two loudspeakers at different positions. At the beginning of the presentation, loudspeaker 1 emits a sinusoidal signal with a steep attack; subsequently its power remains constant. The listeners can localize this loudspeaker easily. During the stationary part of the envelope, the signal is very smoothly faded over from loudspeaker 1 to loudspeaker 2. Although loudspeaker 2 emits all the sound at the end, the listeners' auditory events remain at the position of loudspeaker 1. This mislocalization persists even if the test supervisor conspicuously unplugs the cable of loudspeaker 1.

Conclusions

This effect gives some information about the capabilities of the human auditory system to localize sound sources in enclosed rooms:

- The human auditory system is able to localize a sound source in reverberant sound fields, if there are fast signal changes or signal onsets. (Loudspeaker 1 was correctly localized at the beginning of the experiment.)
- The human auditory system is not able to localize signals with a constant amplitude and spectrum in reverberant sound fields. (The fade over to loudspeaker 2 was not recognized by the listeners.)
- As long as no sound source can be localized, the direction of the last localized sound source remains as the perceived direction. (The auditory event remained at loudspeaker 1, although loudspeaker 2 emitted all the sound at the end of the experiment.)

Looking at the sound that actually arrives at the listener's ears, the situation is as follows:

- At the beginning of the experiment, when loudspeaker 1 started to emit sound, there was a short period during which only the direct sound of loudspeaker 1 arrived at the listener's ears. In this period, localization of loudspeaker 1 was reliably possible, because it was not yet disturbed by wall reflections.
- Some milliseconds later the sound of the wall reflections arrived and disturbed the localization of sound sources.
- During the fade over the level and the spectrum of the emitted sound remained constant. This fade over was overlaid by many wall reflections from the sound situation before. Obviously no sound source localization was possible during this phase.
- At the end, when only loudspeaker 2 emitted sound, the situation was quite similar: the sound of the wall reflections, arriving simultaneously, prevented localization of this sound source.

As a consequence, the auditory system seems able to localize sound sources in reverberant environments only at sound onsets or at large spectral changes. At those moments the direct sound of the source prevails, at least in some frequency ranges, and its direction can be determined. Some milliseconds later, when the sound of the wall reflections arrives, sound-source localization no longer seems possible. As long as no new localization is possible, the auditory system seems to keep the last localized direction as the perceived sound-source direction.


Psychoacoustics
Psychoacoustics is the scientific study of sound perception. More specifically, it is the branch of science studying the psychological and physiological responses associated with sound (including noise, speech and music). It can be further categorized as a branch of psychophysics. Psychoacoustics received its name from a field within psychology—i.e., recognition science—which deals with all kinds of human perceptions. It is an interdisciplinary field of many areas, including psychology, acoustics, electronic engineering, physics, biology, physiology, and computer science.


Background
Hearing is not a purely mechanical phenomenon of wave propagation, but is also a sensory and perceptual event; in other words, when a person hears something, that something arrives at the ear as a mechanical sound wave traveling through the air, but within the ear it is transformed into neural action potentials. These nerve pulses then travel to the brain where they are perceived. Hence, in many problems in acoustics, such as for audio processing, it is advantageous to take into account not just the mechanics of the environment, but also the fact that both the ear and the brain are involved in a person’s listening experience.

The inner ear, for example, does significant signal processing in converting sound waveforms into neural stimuli, so certain differences between waveforms may be imperceptible.[2] Data compression techniques, such as MP3, make use of this fact.[3] In addition, the ear has a nonlinear response to sounds of different intensity levels; this nonlinear response is called loudness. Telephone networks and audio noise reduction systems make use of this fact by nonlinearly compressing data samples before transmission, and then expanding them for playback.[4] Another effect of the ear's nonlinear response is that sounds that are close in frequency produce phantom beat notes, or intermodulation distortion products.[5]

The term "psychoacoustics" also arises in discussions about cognitive psychology and the effects that personal expectations, prejudices, and predispositions may have on listeners' relative evaluations and comparisons of sonic aesthetics and acuity, and on listeners' varying determinations about the relative qualities of various musical instruments and performers. The expression that one "hears what one wants (or expects) to hear" may pertain in such discussions.

Limits of perception
The human ear can nominally hear sounds in the range 20 Hz (0.02 kHz) to 20,000 Hz (20 kHz). The upper limit tends to decrease with age; most adults are unable to hear above 16 kHz. The lowest frequency that has been identified as a musical tone is 12 Hz under ideal laboratory conditions.[6] Tones between 4 and 16 Hz can be perceived via the body's sense of touch.

Frequency resolution of the ear is 3.6 Hz within the octave of 1000 – 2000 Hz. That is, changes in pitch larger than 3.6 Hz can be perceived in a clinical setting.[6] However, even smaller pitch differences can be perceived through other means. For example, the interference of two pitches can often be heard as a repetitive variation in volume of the tone. This amplitude modulation occurs with a frequency equal to the difference in frequencies of the two tones and is known as beating.
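The beating described above follows directly from a trigonometric identity: the sum of two close tones equals a carrier at their mean frequency multiplied by an envelope that repeats at their difference frequency. A minimal sketch, with illustrative frequencies not taken from the text:

```python
import math

# Two pure tones close in frequency: their sum is heard as one tone whose
# loudness pulses at the difference frequency (the "beat").
# sin(2*pi*f1*t) + sin(2*pi*f2*t) = 2*cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t)
f1, f2 = 440.0, 444.0  # Hz (illustrative values)

beat_frequency = abs(f1 - f2)        # envelope pulses 4 times per second
carrier_frequency = (f1 + f2) / 2.0  # the pitch actually heard

def mixed(t):
    """Instantaneous amplitude of the two summed tones at time t."""
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

def envelope(t):
    """Slowly varying envelope predicted by the identity."""
    return 2 * math.cos(math.pi * (f1 - f2) * t)

# The summed signal never exceeds the envelope's magnitude:
t = 0.1
assert abs(mixed(t)) <= abs(envelope(t)) + 1e-9
print(beat_frequency)  # 4.0
```

This is why a 440 Hz and a 444 Hz tone are heard as a 442 Hz tone pulsing four times per second, even though 4 Hz is far below the ear's 3.6 Hz clinical frequency-resolution limit measured on single tones.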

The semitone scale used in Western musical notation is not a linear frequency scale but logarithmic. Other scales have been derived directly from experiments on human hearing perception, such as the mel scale and Bark scale (these are used in studying perception, but not usually in musical composition), and these are approximately logarithmic in frequency at the high-frequency end, but nearly linear at the low-frequency end.
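The mel scale's mixed linear/logarithmic character can be seen in the standard O'Shaughnessy formula, widely used in speech processing (the constants 2595 and 700 are part of that formula, not of this text):

```python
import math

def hz_to_mel(f_hz):
    """Standard mel-scale formula: mel(f) = 2595 * log10(1 + f/700).
    Near-linear below ~1 kHz, roughly logarithmic above."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# The constants are chosen so that 1000 Hz maps to ~1000 mel.
print(round(hz_to_mel(1000)))  # 1000
# At the low end doubling frequency roughly doubles the mel value,
# while at the high end it only adds a roughly constant increment:
print(hz_to_mel(200) / hz_to_mel(100))
print(hz_to_mel(8000) - hz_to_mel(4000))
```

The Bark scale behaves similarly in spirit, though its defining formula differs.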

The intensity range of audible sounds is enormous. Human eardrums are sensitive to variations in sound pressure and can detect pressure changes from as small as a few micropascals (µPa) to greater than 100 kPa. For this reason, sound pressure level is also measured logarithmically, with all pressures referenced to 20 µPa (or 1.97385×10−10 atm). The lower limit of audibility is therefore defined as 0 dB, but the upper limit is not as clearly defined; it is more a question of the level at which the ear will be physically harmed, with the potential for noise-induced hearing loss.
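The logarithmic measure mentioned above is dB SPL: twenty times the base-10 logarithm of the pressure ratio against the 20 µPa reference. A minimal sketch:

```python
import math

P_REF = 20e-6  # reference sound pressure, 20 micropascals (0 dB SPL)

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # 0.0  -> the nominal threshold of hearing
print(spl_db(1.0))    # ~94  -> 1 Pa, a common microphone calibration level
print(spl_db(100e3))  # ~194 -> the 100 kPa upper extreme mentioned above
```

The log scale compresses the ten-orders-of-magnitude pressure range into a convenient 0-194 dB span.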

A more rigorous exploration of the lower limits of audibility determines that the minimum threshold at which a sound can be heard is frequency dependent. By measuring this minimum intensity for testing tones of various frequencies, a frequency dependent absolute threshold of hearing (ATH) curve may be derived. Typically, the ear shows a peak of sensitivity (i.e., its lowest ATH) between 1–5 kHz, though the threshold changes with age, with older ears showing decreased sensitivity above 2 kHz.

The ATH is the lowest of the equal-loudness contours. Equal-loudness contours indicate the sound pressure level (dB SPL), over the range of audible frequencies, that are perceived as being of equal loudness. Equal-loudness contours were first measured by Fletcher and Munson at Bell Labs in 1933 using pure tones reproduced via headphones, and the data they collected are called Fletcher–Munson curves. Because subjective loudness was difficult to measure, the Fletcher–Munson curves were averaged over many subjects.

Robinson and Dadson refined the process in 1956 to obtain a new set of equal-loudness curves for a frontal sound source measured in an anechoic chamber. The Robinson–Dadson curves were standardized as ISO 226 in 1986. In 2003, ISO 226 was revised using data collected from 12 international studies.

Psychoacoustics is based heavily on human anatomy, especially the ear's limitations in perceiving sound as outlined previously. To summarize, these limitations are:

- High frequency limit
- Absolute threshold of hearing
- Temporal masking
- Simultaneous masking

Given that the ear will not be at peak perceptive capacity when dealing with these limitations, a compression algorithm can assign a lower priority to sounds outside the range of human hearing. By carefully shifting bits away from the unimportant components and toward the important ones, the algorithm ensures that the sounds a listener is most likely to perceive are of the highest quality.
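The bit-shifting idea can be sketched as follows. The threshold curve and bit counts below are invented stand-ins for illustration; real codecs use measured masking models, not this toy logic.

```python
# Toy sketch of perceptual bit allocation (NOT a real codec).
# toy_threshold_db is a made-up stand-in for the absolute threshold
# of hearing (ATH): most sensitive around 1-5 kHz.
def toy_threshold_db(freq_hz):
    """Hypothetical ATH-like curve in dB."""
    if 1000 <= freq_hz <= 5000:
        return 0.0   # most sensitive region
    if freq_hz < 1000:
        return 25.0  # less sensitive at low frequencies
    return 40.0      # less sensitive at high frequencies

def allocate_bits(components, budget):
    """components: list of (freq_hz, level_db) pairs. Spend the bit budget
    only on components above threshold, largest audible margin first."""
    audible = [(f, db) for f, db in components if db > toy_threshold_db(f)]
    audible.sort(key=lambda fd: fd[1] - toy_threshold_db(fd[0]), reverse=True)
    alloc = {}
    for f, db in audible:
        if budget <= 0:
            break
        bits = min(8, budget)  # assumed cap of 8 bits per component
        alloc[f] = bits
        budget -= bits
    return alloc

spectrum = [(100, 20), (2000, 30), (12000, 35), (3000, 10)]
print(allocate_bits(spectrum, 16))  # {2000: 8, 3000: 8}
```

The 100 Hz and 12 kHz components sit below the (toy) threshold and get zero bits, so the whole budget goes to the components the listener is most likely to perceive.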

Research areas

Perception and cognition
Much work within music psychology seeks to understand the cognitive processes that support musical behaviors, including perception, comprehension, memory, attention, and performance. Originally arising in the fields of psychoacoustics and sensation, cognitive theories of how people understand music have more recently come to encompass neuroscience, cognitive science, music theory, music therapy, computer science, psychology, philosophy, and linguistics.

Affective response
Music has been shown to consistently elicit emotional responses in its listeners, and this relationship between human affect and music has been studied in depth.[13] This includes isolating which specific features of a musical work or performance convey or elicit certain reactions, the nature of the reactions themselves, and how characteristics of the listener may determine which emotions are felt. The field draws upon, and has significant implications for, such areas as philosophy, musicology, and aesthetics, as well as the acts of musical composition and performance. The implications for casual listeners are also great; research has shown that the pleasurable feelings associated with emotional music are the result of dopamine release in the striatum—the same anatomical areas that underpin the anticipatory and rewarding aspects of drug addiction.

Neuropsychology
A significant amount of research concerns brain-based mechanisms involved in the cognitive processes underlying music perception and performance. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It also is increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other allied fields, and use such techniques as functional magnetic resonance imaging (fMRI), transcranial magnetic stimulation (TMS), magnetoencephalography (MEG), electroencephalography (EEG), and positron emission tomography (PET).

The cognitive process of performing music requires the interaction of neural mechanisms in both motor and auditory systems. Since every action expressed in a performance produces a sound that influences subsequent expression, this leads to impressive sensorimotor interplay.

Processing pitch
The primary auditory cortex is one of the main areas associated with superior pitch resolution.

Perceived pitch typically depends on the fundamental frequency, though the dependence could be mediated solely by the presence of harmonics corresponding to that fundamental frequency. The perception of a pitch without the corresponding fundamental frequency in the physical stimulus is called the pitch of the missing fundamental.[16] Neurons lateral to A1 in marmoset monkeys were found to be sensitive specifically to the fundamental frequency of a complex tone,[17] suggesting that pitch constancy may be enabled by such a neural mechanism.

Pitch constancy refers to the ability to perceive pitch identity across changes in acoustical properties, such as loudness, temporal envelope, or timbre. The importance of cortical regions lateral to A1 for pitch coding is also supported by studies of human cortical lesions and functional magnetic resonance imaging (fMRI) of the brain. These data suggest a hierarchical system for pitch processing, with more abstract properties of the sound stimulus processed further along the processing pathways.

Absolute pitch
Absolute pitch (AP) is defined as the ability to identify the pitch of a musical tone, or to produce a musical tone at a given pitch, without the use of an external reference pitch. Researchers estimate the occurrence of AP to be 1 in 10,000 people. The extent to which this ability is innate or learned is debated, with evidence both for a genetic basis and for a "critical period" in which the ability can be learned, especially in conjunction with early musical training.

Processing rhythm

Behavioural studies demonstrate that rhythm and pitch can be perceived separately, but that they also interact in creating a musical perception. Studies of auditory rhythm discrimination and reproduction in patients with brain injury have linked these functions to the auditory regions of the temporal lobe, but have shown no consistent localization or lateralization. Neuropsychological and neuroimaging studies have shown that the motor regions of the brain contribute to both perception and production of rhythms.

Even in studies where subjects only listen to rhythms, the basal ganglia, cerebellum, dPMC and SMA are often implicated.[31][32][33] The analysis of rhythm may depend on interactions between the auditory and motor systems.
Neural correlates of musical training

Although auditory–motor interactions can be observed in people without formal musical training, musicians are an excellent population to study because of their long-established and rich associations between auditory and motor systems. Musicians have been shown to have anatomical adaptations that correlate with their training.[16] Some neuroimaging studies have observed that musicians show lower levels of activity in motor regions than non-musicians during the performance of simple motor tasks, which may suggest a more efficient pattern of neural recruitment.
Motor imagery
Previous neuroimaging studies have consistently reported activity in the SMA and premotor areas, as well as in auditory cortices, when non-musicians imagine hearing musical excerpts.[16] Recruitment of the SMA and premotor areas is also reported when musicians are asked to imagine performing.
https://en.wikipedia.org/wiki/Music_psychology

Mental concerts: musical imagery and auditory cortex.
Most people intuitively understand what it means to "hear a tune in your head." Converging evidence now indicates that auditory cortical areas can be recruited even in the absence of sound and that this corresponds to the phenomenological experience of imagining music. We discuss these findings as well as some methodological challenges. We also consider the role of core versus belt areas in musical imagery, the relation between auditory and motor systems during imagery of music performance, and practical implications of this research.

More about this matter here: Mental Concerts: Musical Imagery and Auditory Cortex, http://www.cell.com/neuron/fulltext/S0896-6273(05)00518-0


Binaural fusion

Binaural fusion or binaural integration is a cognitive process that involves the "fusion" of different auditory information presented binaurally, or to each ear. In humans, this process is essential in understanding speech as one ear may pick up more information about the speech stimuli than the other.

The process of binaural fusion is important for computing the location of sound sources in the horizontal plane (sound localization), and it is important for sound segregation.[1] Sound segregation refers to the ability to identify acoustic components from one or more sound sources.[2] The binaural auditory system is highly dynamic and capable of rapidly adjusting tuning properties depending on the context in which sounds are heard. Each eardrum moves one-dimensionally; the auditory brain analyzes and compares movements of both eardrums to extract physical cues and synthesize auditory objects.

When stimulation from a sound reaches the ear, the eardrum deflects in a mechanical fashion, and the three middle ear bones (ossicles) transmit the mechanical signal to the cochlea, where hair cells transform the mechanical signal into an electrical signal. The auditory nerve, also called the cochlear nerve, then transmits action potentials to the central auditory nervous system.[3]

In binaural fusion, inputs from both ears integrate and fuse to create a complete auditory picture at the brainstem. Therefore, the signals sent to the central auditory nervous system are representative of this complete picture, integrated information from both ears instead of a single ear.

The binaural squelch effect is a result of nuclei of the brainstem processing timing, amplitude, and spectral differences between the two ears. Sounds are integrated and then separated into auditory objects. For this effect to take place, neural integration from both sides is required.
Binaural fusion

Transmissions from the SOC, in the pons of the brainstem, travel along the lateral lemniscus to the IC, located in the midbrain. Signals are then relayed to the thalamus and further ascending auditory pathway.
https://en.wikipedia.org/wiki/Binaural_fusion#/media/File:Lateral_lemniscus.PNG

Function

The ear functions to analyze and encode a sound’s dimensions.[10] Binaural fusion is responsible for avoiding the creation of multiple sound images from a sound source and its reflections. The advantages of this phenomenon are more noticeable in small rooms, decreasing as the reflective surfaces are placed farther from the listener.

Central auditory system
The central auditory system converges inputs from both ears (inputs contain no explicit spatial information) onto single neurons within the brainstem. This system contains many subcortical sites with integrative functions. The auditory nuclei collect, integrate, and analyze afferent supply;[10] the outcome is a representation of auditory space.[3] The subcortical auditory nuclei are responsible for the extraction and analysis of the dimensions of sounds.

The integration of a sound stimulus is a result of analyzing frequency (pitch), intensity, and spatial localization of the sound source. Once a sound source has been identified, the cells of lower auditory pathways are specialized to analyze physical sound parameters. Summation is observed when the loudness of a sound from one stimulus is perceived as having been doubled when heard by both ears instead of only one. This process of summation is called binaural summation and is the result of different acoustics at each ear, depending on where sound is coming from.

The cochlear nerve spans from the cochlea of the inner ear to the ventral cochlear nuclei located in the pons of the brainstem, relaying auditory signals to the superior olivary complex where it is to be binaurally integrated.
Medial superior olive and lateral superior olive

The MSO contains cells that function in comparing inputs from the left and right cochlear nuclei. The tuning of neurons in the MSO favors low frequencies, whereas those in the LSO favor high frequencies.

GABAB receptors in the LSO and MSO are involved in the balance of excitatory and inhibitory inputs. The GABAB receptors are coupled to G proteins and provide a way of regulating synaptic efficacy. Specifically, GABAB receptors modulate excitatory and inhibitory inputs to the LSO.[3] Whether the GABAB receptor functions as excitatory or inhibitory for the postsynaptic neuron depends on the exact location and action of the receptor.[1]
Sound localization

Sound localization is the ability to correctly identify the directional location of sounds. The direction of a sound stimulus in the horizontal plane is called its azimuth; in the vertical plane it is referred to as elevation. The time, intensity, and spectral differences in the sound arriving at the two ears are used in localization. Localization of low-frequency sounds is accomplished by analyzing interaural time difference (ITD); localization of high-frequency sounds is accomplished by analyzing interaural level difference (ILD).
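The size of the ITD cue can be estimated with Woodworth's spherical-head approximation, a common textbook model (not stated in the text itself): ITD ≈ (r/c)(θ + sin θ) for azimuth θ, head radius r, and speed of sound c. A sketch, with an assumed head radius:

```python
import math

# Woodworth spherical-head model of interaural time difference (ITD).
HEAD_RADIUS = 0.0875    # meters, a typical adult value (assumed here)
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def itd_seconds(azimuth_deg):
    """Approximate ITD for a distant source at the given azimuth
    (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

print(itd_seconds(0))   # 0.0 -> no delay for a source straight ahead
print(itd_seconds(90))  # ~6.6e-4 s -> roughly 0.66 ms for a lateral source
```

The maximum ITD of well under a millisecond shows why the brainstem's timing comparison must be so precise, and why ITD is only usable at low frequencies, where the waveform period is longer than this delay.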

Mechanism
Binaural hearing
Action potentials originate in the hair cells of the cochlea and propagate to the brainstem; both the timing of these action potentials and the signal they transmit provide information to the SOC about the orientation of sound in space. The processing and propagation of action potentials is rapid, and therefore, information about the timing of the sounds that were heard, which is crucial to binaural processing, is conserved.[15] Each eardrum moves in one dimension, and the auditory brain analyzes and compares the movements of both eardrums in order to synthesize auditory objects. This integration of information from both ears is the essence of binaural fusion. The binaural system of hearing involves sound localization in the horizontal plane, contrasting with the monaural system of hearing, which involves sound localization in the vertical plane.

Superior olivary complex
The primary stage of binaural fusion, the processing of binaural signals, occurs at the SOC, where afferent fibers of the left and right auditory pathways first converge. This processing occurs because of the interaction of excitatory and inhibitory inputs in the LSO and MSO.[1][3][13] The SOC processes and integrates binaural information, in the form of ITD and ILD, entering the brainstem from the cochleae. This initial processing of ILD and ITD is regulated by GABAB receptors.

ITD and ILD
The auditory space of binaural hearing is constructed based on the analysis of differences in two binaural cues in the horizontal plane: sound level, or ILD, and arrival time at the two ears, or ITD, which allow for the comparison of the sound heard at each eardrum. ITD is processed in the MSO and results from sounds arriving earlier at one ear than at the other; this occurs when the sound does not arise from directly in front of or directly behind the hearer. ILD is processed in the LSO and results from the shadowing effect produced at the ear that is farther from the sound source. Outputs from the SOC are targeted to the dorsal nucleus of the lateral lemniscus as well as the IC.

Lateral superior olive
LSO neurons are excited by inputs from one ear and inhibited by inputs from the other, and are therefore referred to as IE neurons. Excitatory inputs are received at the LSO from spherical bushy cells of the ipsilateral cochlear nucleus, which combine inputs coming from several auditory nerve fibers. Inhibitory inputs are received at the LSO from globular bushy cells of the contralateral cochlear nucleus.

Medial superior olive
MSO neurons are excited bilaterally, meaning that they are excited by inputs from both ears, and they are therefore referred to as EE neurons.[3] Fibers from the left cochlear nucleus terminate on the left of MSO neurons, and fibers from the right cochlear nucleus terminate on the right of MSO neurons. Excitatory inputs to the MSO from spherical bushy cells are mediated by glutamate, and inhibitory inputs to the MSO from globular bushy cells are mediated by glycine. MSO neurons extract ITD information from binaural inputs and resolve small differences in the time of arrival of sounds at each ear.[3] Outputs from the MSO and LSO are sent via the lateral lemniscus to the IC, which integrates the spatial localization of sound. In the IC, acoustic cues have been processed and filtered into separate streams, forming the basis of auditory object recognition.

Binaural fusion abnormalities in autism
Current research is being performed on the dysfunction of binaural fusion in individuals with autism. The neurological disorder autism is associated with many symptoms of impaired brain function, including the degradation of hearing, both unilateral and bilateral.[16] Individuals with autism who experience hearing loss maintain symptoms such as difficulty listening against background noise and impairments in sound localization. Both the ability to distinguish particular speakers from background noise and the process of sound localization are key products of binaural fusion. They are particularly related to the proper function of the SOC, and there is increasing evidence that morphological abnormalities within the brainstem, namely in the SOC, of autistic individuals are a cause of the hearing difficulties. The neurons of the MSO of individuals with autism display atypical anatomical features, including atypical cell shape and orientation of the cell body, as well as stellate and fusiform formations. Data also suggest that neurons of the LSO and MNTB show distinct dysmorphology in autistic individuals, such as irregular stellate and fusiform shapes and a smaller than normal size. Moreover, a significant depletion of SOC neurons is seen in the brainstem of autistic individuals. All of these structures play a crucial role in the proper functioning of binaural fusion, so their dysmorphology may be at least partially responsible for the incidence of these auditory symptoms in autistic patients.


Cognitive neuroscience of music

Differences
Brain structure differs distinctly between musicians and non-musicians. Gaser and Schlaug (2003) compared brain structures of professional musicians with non-musicians and discovered gray matter volume differences in motor, auditory, and visual-spatial brain regions. Specifically, positive correlations were discovered between musician status (professional, amateur, and non-musician) and gray matter volume in the primary motor and somatosensory areas, premotor areas, anterior superior parietal areas, and the inferior temporal gyrus bilaterally. This strong association between musician status and gray matter differences supports the notion that musicians' brains show use-dependent structural changes. Given the distinct differences in several brain regions, it is unlikely that these differences are innate; rather, they are due to the long-term acquisition and repetitive rehearsal of musical skills.

Brains of musicians also show functional differences from those of non-musicians. Krings, Topper, Foltys, Erberich, Sparing, Willmes and Thron (2000) utilized fMRI to study brain area involvement of professional pianists and a control group while performing complex finger movements. Krings et al. found that the professional piano players showed lower levels of cortical activation in motor areas of the brain. It was concluded that fewer neurons needed to be activated for the piano players, due to long-term motor practice, which results in different cortical activation patterns. Koeneke, Lutz, Wustenberg and Jancke (2004) reported similar findings in keyboard players. Skilled keyboard players and a control group performed complex tasks involving unimanual and bimanual finger movements. During task conditions, both non-musicians and keyboard players showed strong hemodynamic responses in the cerebellum, but non-musicians showed the stronger response. This finding indicates that different cortical activation patterns emerge from long-term motor practice. This evidence supports previous data showing that musicians require fewer neurons to perform the same movements.

Musicians have been shown to have a significantly more developed left planum temporale, and have also been shown to have greater word memory. Chan's study controlled for age, grade point average, and years of education, and found that when given a 16-word memory test, the musicians averaged one to two words more than their non-musical counterparts.

Similarities
Studies have shown that the human brain has an implicit musical ability. Koelsch, Gunter, Friederici and Schoger (2000) investigated the influence of preceding musical context, the task relevance of unexpected chords, and the degree of probability of violation on music processing in both musicians and non-musicians. Findings showed that the human brain unintentionally extrapolates expectations about impending auditory input. Even in non-musicians, the extrapolated expectations are consistent with music theory. The ability to process information musically supports the idea of an implicit musical ability in the human brain. In a follow-up study, Koelsch, Schroger, and Gunter (2002) investigated whether ERAN and N5 could be evoked preattentively in non-musicians. Findings showed that both ERAN and N5 can be elicited even when the musical stimulus is ignored by the listener, indicating that there is a highly differentiated preattentive musicality in the human brain.
Emotions induced by music activate similar frontal brain regions compared to emotions elicited by other stimuli. Schmidt and Trainor (2001) discovered that valence (i.e. positive vs. negative) of musical segments was distinguished by patterns of frontal EEG activity. Joyful and happy musical segments were associated with increases in left frontal EEG activity whereas fearful and sad musical segments were associated with increases in right frontal EEG activity. Additionally, the intensity of emotions was differentiated by the pattern of overall frontal EEG activity. Overall frontal region activity increased as affective musical stimuli became more intense.
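Findings like Schmidt and Trainor's are commonly quantified with a frontal alpha-asymmetry score: the log of right-frontal alpha power minus the log of left-frontal alpha power. Because alpha power is inversely related to cortical activity, a positive score indicates relatively greater left-frontal activity, the pattern associated above with positive valence. The sketch below is a generic illustration of that metric, not Schmidt and Trainor's actual analysis, and the power values are invented:

```python
import math

def frontal_asymmetry(left_alpha_power: float, right_alpha_power: float) -> float:
    """ln(right) - ln(left) frontal alpha power.

    Positive => left frontal cortex relatively more active (alpha is
    inversely related to activity), the pattern linked to positive valence.
    """
    return math.log(right_alpha_power) - math.log(left_alpha_power)

# Hypothetical subject hearing a happy excerpt: left-frontal alpha is
# suppressed (left cortex more active), so the score comes out positive.
score = frontal_asymmetry(left_alpha_power=2.0, right_alpha_power=3.0)
print(score > 0)  # True: pattern consistent with positive valence
```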

Music is able to create an incredibly pleasurable experience that can be described as "chills". Blood and Zatorre (2001) used PET to measure changes in cerebral blood flow while participants listened to music that they knew gave them the "chills", or some other intensely pleasant emotional response. They found that as these chills increase, many changes in cerebral blood flow are seen in brain regions such as the amygdala, orbitofrontal cortex, ventral striatum, midbrain, and ventral medial prefrontal cortex. Many of these areas appear to be linked to reward, motivation, emotion, and arousal, and are also activated in other pleasurable situations. The nucleus accumbens (a part of the striatum) is involved both in music-related emotions and in rhythmic timing.

When unpleasant melodies are played, the posterior cingulate cortex activates, which indicates a sense of conflict or emotional pain. The right hemisphere has also been found to be correlated with emotion and can likewise activate areas in the cingulate in times of emotional pain, specifically social rejection (Eisenberger). This evidence, along with observations, has led many music theorists, philosophers and neuroscientists to link emotion with tonality. This seems almost obvious because the tones in music seem like a characterization of the tones in human speech, which carry emotional content. The vowels in the phonemes of a song are elongated for dramatic effect, and it seems as though musical tones are simply exaggerations of normal verbal tonality.

:cuckoo:
 
:cuckoo:
After the Wiki's Cave, let's put some grey matter ("matière grise") into the Dana Foundation's Cave, and keep on going with NO COMMENT links :ninja: :

Mastering Our Brain’s Electrical Rhythms

THE BRAIN AS GENERATOR


The human brain is a wondrous marriage of electricity and chemistry. Neurons are microscopic power sources that build up an electrical charge by chemical means (like a battery), then briefly reverse the voltage over and over again. In this way, electrical potentials can shoot along the neuron’s major extension—the axon—and be translated into chemicals that cross the synapse, the tiny gap between neurons, to produce electrical potentials in the receiving dendrites of the next neuron. In that neuron, the process can begin again, so that the electrical potential can keep moving. Neurons fire these action potentials in unison to accomplish whatever task the brain is doing. The number of times a cell builds up a charge and reverses it determines the frequency of the cortical rhythms in the brain. The cortical rhythm is the sum of the brain’s action and dendrite potentials.
Neal Miller at Yale University trained a mouse to raise or lower its heart rate by 20 percent by rewarding the animal with a jolt to its brain’s pleasure center every time it changed its heart rate in the desired direction. Later he taught humans with tachycardia, or abnormally fast heartbeats, to slow the beat—they were rewarded not with a shock but a pleasant musical tone.

THE POWER OF ALPHA WAVES

Research has not established conclusively whether learning to control the frequency, amplitude, and synchrony of brain waves can permanently alter the brain. I believe that it can. One of the critical discoveries of the past decade is that our brains are far more plastic than we ever imagined. The handful of controlled studies on this subject, and years of clinical observation, support the idea that brain activity—and the brain’s structure—may be changed by operant conditioning, with a resulting increase in the flexibility of attention. Just as learning a new task permanently alters neural circuits, so too does conditioning via EEG biofeedback, another kind of learning.

The human brain operates along a spectrum from 1 Hertz (a frequency of one cycle per second, abbreviated Hz) up to as much as 100 Hz, although most commonly we record up to about 40 Hz. Brain-wave frequencies are clustered into four basic categories associated with different mental functions and named with letters of the Greek alphabet:

- Delta, from 1 Hz to 4 Hz, is associated with sleep.
- Theta, from 4 Hz to 8 Hz, is associated with hypnagogic and hypnopompic states (states between sleep and wakefulness).
- Alpha, from 8 Hz to 13 Hz, is associated with a deeply relaxed, yet waking state.
- Beta, from 13 Hz to 40 Hz, is the frequency range in which we operate in our day-to-day waking state.
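The band boundaries above can be expressed as a small lookup function. This is my own illustrative sketch, not anything from the article; the convention of assigning boundary frequencies (e.g. 8 Hz) to the higher band is an arbitrary choice for the example:

```python
def eeg_band(freq_hz: float) -> str:
    """Return the classical EEG band name for a frequency in Hz.

    Uses the band edges listed in the text; boundary values are assigned
    to the higher band (e.g. 8 Hz -> alpha) by convention.
    """
    if not 1 <= freq_hz <= 40:
        raise ValueError("outside the commonly recorded 1-40 Hz range")
    if freq_hz < 4:
        return "delta"   # sleep
    if freq_hz < 8:
        return "theta"   # hypnagogic/hypnopompic states
    if freq_hz < 13:
        return "alpha"   # deeply relaxed yet waking state
    return "beta"        # day-to-day waking activity

print(eeg_band(10))  # alpha
```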

Neurofeedback, which gives a person information about the amplitude and frequency of his brain waves at a particular moment, can be used in many ways. Let me explain.

My journey began in the 1960s at UCLA’s Brain Research Institute, where I was a doctoral student working with Donald B. Lindsley, a pioneer of physiological psychology. Studying visual processing in macaque monkeys, we realized that the nerve and brain activity being triggered in them must be coded in their brains in the form of synchronous cell firing. The speed of the observed information processing was possible only if the cell firings were traveling in parallel from region to region of the central nervous system, instead of in a series from point to point. In other words, I realized that the brain was most efficiently processing and conducting information when the neurons worked simultaneously—in synchrony. Recent research has shed light on the idea that alpha frequencies are generally where the brain is most synchronous—that is, where the most neurons are being simultaneously excited. I hypothesized, therefore, that conscious control of alpha might be a key to enhancing brain function.

In the early 1960s, Kamiya’s successful conditioning of college students to recognize and produce alpha frequencies further piqued my curiosity. Using biofeedback instrumentation I designed, I became my own first subject. One saline sensor was placed over the midline between my occipital lobes, while reference and ground sensors were clipped to my ears. In the course of a dozen sessions, I sought to produce alpha waves. I tried everything imaginable: meditation, visual imagery, music, colored lights, incense, negative ions, and muscle relaxation—all to little avail.

Exasperated and disappointed, I gave up. Fortunately, I was still attached to the instrument. At the moment of my mental surrender, the pen and ink EEG scratched high-amplitude alpha waves across the strip of paper. I had stumbled into the state of aware effortlessness, without daydreaming or sleep, which is associated with alpha.

After practicing in this alpha-abundant state for over a week, I observed surprising changes in myself. My whole body, and some particularly tense muscles in my face and neck, relaxed, yet I felt alert, centered, and poised. My sleep improved. I experienced an unusual mental sharpness and clarity. Colors seemed more vibrant and rich. My rather compulsive personality style softened. Most unusual of all, it seemed that the scope of my vision opened; when I looked around, I took in more with less effort. This was important to what would come later.

Synchrony, heightened in an alpha state, relates directly to amplitude and power. In the February 23, 2001, issue of the journal Science, two researchers at the National Institute of Mental Health, Robert Desimone and Pascal Fries, reported that synchronous neuronal firing may be a fundamental mechanism for boosting the volume of brain signals that represent important stimuli. In effect, they said, synchrony is the brain’s way of encouraging more voices in the choir, which helps important signals to stand out from the “noise.”
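The "more voices in the choir" effect can be sketched numerically: summing N unit-amplitude oscillators yields a peak amplitude near N when they fire in phase, but a far smaller peak (on the order of the square root of N) when their phases are random. The toy model below is my own illustration, not from the Science report:

```python
import math
import random

def summed_peak(n_cells: int, in_phase: bool, seed: int = 0) -> float:
    """Peak amplitude of the summed activity of n_cells unit 10 Hz oscillators."""
    rng = random.Random(seed)
    if in_phase:
        phases = [0.0] * n_cells                       # perfect synchrony
    else:
        phases = [rng.uniform(0, 2 * math.pi) for _ in range(n_cells)]
    peak = 0.0
    for k in range(1000):                              # sample one 0.1 s cycle
        t = k / 1000 * 0.1
        s = sum(math.sin(2 * math.pi * 10 * t + p) for p in phases)
        peak = max(peak, abs(s))
    return peak

print(summed_peak(100, in_phase=True))   # close to 100: in-phase cells add up
print(summed_peak(100, in_phase=False))  # much smaller: random phases cancel
```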

Synchrony, in other words, is necessary for optimal information processing in the brain, something animals understand instinctively. While grooming and at rest, monkeys and other animals produce more synchrony. When neurofeedback conditions the whole brain into synchrony, that synchrony moves the brain cells and the physiological systems they govern toward homeostasis, or better, more flexible physiological regulation.

SYNCHRONY MULTIPLIED BY FIVE

If training one part of the brain could produce these effects, what would happen if the lesson of synchrony were applied to the whole brain—for example, if five locations over the cortex were trained to produce alpha simultaneously, and in phase? That would mean that the brain’s electrical waves were reaching their peaks and then their troughs at the same time, a uniform hum across the whole brain. This pattern is called rhythmic entrainment, and it can be very powerful. Soldiers marching in step over a bridge must break their cadence or risk bringing the bridge down. I sometimes refer to the goal of neurofeedback, as I practice it, as entraining cortical rhythms.

Working with research volunteers, I sampled eight lobes with five saline sensors secured to the scalp: above the midline prefrontal cortex, the midline occipital lobe, the midline motor areas, and over both temporal lobes. Placement of sensors on the midline between the lobes helped monitor bilateral synchrony, and feedback was provided to condition in-phase synchrony. The effects were far more powerful than training one site alone. People who underwent the five-channel training had experiences similar to mine—enhanced sensory experiences, a feeling of well-being, and a profound kind of relaxation—but often could achieve some of these effects after a single training session. Sometimes these effects lasted for days. Usually the effects of a series of sessions lasted months, years, or indefinitely.

How the cortical electrical information is sampled (for example, where the sensors are placed on the scalp) is critical to brain-wave training. In the early days, many failures by researchers to replicate biofeedback studies were caused by poor research design.

One common error, bi-polar placement of sensors, actually cancels out the observable in-phase synchronous activity for which the subject is supposed to be trained. Unfortunately, this distinction was lost on researchers. Because of these and other problems, it became virtually impossible to get funding for neurofeedback research and difficult to get research results published.
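The montage problem described above can be shown numerically: a bipolar derivation records the difference between two sensors, so activity that is perfectly in phase at both sites subtracts to zero, while a referential montage (each site measured against a quiet reference) preserves it. A minimal sketch of that cancellation, with idealized signals of my own invention:

```python
import math

def in_phase_alpha(t: float) -> float:
    """A 10 Hz alpha wave, identical at both recording sites."""
    return math.sin(2 * math.pi * 10 * t)

times = [k / 500 for k in range(500)]               # 1 s at 500 samples/s
site_a = [in_phase_alpha(t) for t in times]
site_b = [in_phase_alpha(t) for t in times]         # same phase at second site

bipolar = [a - b for a, b in zip(site_a, site_b)]   # difference channel
referential = list(site_a)                          # reference assumed silent

print(max(abs(v) for v in bipolar))       # 0.0: in-phase synchrony invisible
print(max(abs(v) for v in referential))   # ~1.0: synchrony fully visible
```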

HOW TO IMAGINE NOTHING

Searching for ways to help people produce phase-synchronous alpha, I monitored their EEGs while I varied the imagery and relaxation exercises. I asked them to imagine a series of sensory images—a waterfall or a sunset—designed to induce relaxation. Here another clue revealed itself. Out of 20 images, only 2 produced immediate alpha amplitude increases: “Can you imagine the space between your eyes?” and “Can you imagine the space between your ears?”

Imagining space became a simple, effective tool for helping people get into alpha more quickly. With light and sound feedback to reinforce them when they achieved alpha synchrony, they listened to an audiotape of questions that required them to imagine the space around them, inside their heads, or occupied by their bodies. As they imagined space, their EEG moved into phase-synchronous alpha. We set for them the goal of listening to the taped questions about space in a way that produced maximum light and sound feedback. We then asked them to listen the same way during home practice. This basic protocol (much-simplified in this description) has worked in my clinic for three decades.

Imagining an object can create desynchronized brain waves, but when space is imagined there is nothing for the brain to grip, nothing to struggle to make sense of. Profound relaxation results because tension is released. I later learned that this use of space is similar to the techniques that some Eastern religions use in meditation. The EEGs of veteran meditators display a phase-synchronous alpha pattern across the whole head during meditation; and some longtime meditators describe an opening of their focus as the result of this practice. But while I believe that phase-synchronous training has effects similar to some types of meditation, it is not the same. Imagery protocols with light and sound neurofeedback are a faster, Western scientific way to achieve some of the same benefits. Moreover, it does not require adherence to a specific path or belief; it is a physiological process that everyone can access.

ATTENTION THAT IS STUCK IN “FIGHT OR FLIGHT”

Remember when I explained that after many hours of alpha training, my vision seemed to change and open? If I looked at a landscape, I took in more of what I was looking at, with less effort. Many of my students and clinic participants have noticed the same phenomena. What I came to understand is that we modern humans pay attention very narrowly. This is an emergency mode of paying attention. We can speculate that, in our evolutionary past, if we were walking in the forest and heard a twig snap, we would instantly narrow our focus. Our heart rate, adrenaline production, and other physiological indicators would shoot up, preparing us to fight or flee. After we discovered the source of the sound, and if it was not threatening, we would gradually return to a more open kind of attention.

Modern society and its almost continuous concerns demand narrow, objective focus. We teach our children to watch out for cars, to focus on their schoolwork, to pay close attention to what they are doing. We do not teach them, for the most part, that they should relax and open their focus, at least some of the time, while they are learning or performing. With the constant exhortation to focus a narrow beam of attention on the world, it becomes habitual early in life. So we spend our whole lives stuck in this mode of attention, without realizing it until our focus opens.

What is more, we pay attention with our whole body. Living in narrow focus, our heart rate, respiration, blood pressure, and other systems stay in overdrive. Eventually we either get so revved up that we become chronically anxious and irritable, perhaps unable to sleep, or our bodies start to burn out. To keep pace, we drink too much coffee to maintain narrow focus, then use alcohol or prescription drugs to relax. Chronic narrow focus is akin to keeping a hand constantly clenched; after a while the muscles stiffen and we lose control of them.

An extreme form of this can be seen in some children who grow up in an abusive environment. They are chronically hypervigilant and so narrowly focused on the possibility of harm that they have trouble with tasks such as reading. The scope of their visual focus is so small that they can see only one word at a time, and they must be taught to relax and broaden their focus so that they can learn to read.

Phase-synchronous alpha activity is an antidote to narrow focus, and thus a way to break its grip and reduce stress. Imagining space, while receiving feedback for brain wave synchrony, teaches people to access a form of attention in which the brain and body diffuse stress. The brain and central nervous system are the master control system for mind and body. Unresolved activity from chronic fight or flight responses—hyperactive nervous and glandular systems and tense muscles, for example—are dissolved.

Animals assume this state. Watch your dog or cat lying on the rug, eyes half open, seeming near sleep. The minute a morsel of food hits the bowl or there is a knock on the door, however, your pet springs up to investigate. This as yet uncommitted readiness to perform is what I call zero bias. It is an optimal attention in which people can throttle back and rejuvenate, instead of being in a chronic narrow state of hypervigilance. In this state, synchrony takes over and normalizes the nervous system. When something demands a narrow, object-oriented attention, we can flexibly zoom in with less effort. Ideally, we can stay in that mode of attention as long as it is needed, then return quickly to an open attention.

Although paying attention is essential to who we are, we seldom give it much thought, assuming that we are either paying attention or not. The truth is that we have an eclectic range of attentional styles within four basic types:
- objective: maintaining distance from experience
- objective: maintaining distance from experience
- objective: maintaining distance from experience
- objective: maintaining distance from experience.
{ :referee: I AM JOKING, the right types are here:}
- narrow: focus on something and exclude everything around it;
- diffuse: encompassing awareness of all present sensory experience;
- immersed: absorbed in experience;
- objective: maintaining distance from experience.
One type is not superior to the others; each is appropriate to a particular situation. The goal should be flexibility, being ready to emphasize any type of attention that a situation demands. With advanced training, all styles of attention may be accessed simultaneously. To get a feel for the changes created almost immediately by a shift in attentional style, look at this printed page and continue reading, but also be aware of the space between your eyes and the page. Focus on the space behind and to the sides of the page. After a few minutes, your face and eye muscles may start to relax. Or sit still for 10 to 15 minutes with your eyes closed, and imagine space in, and between, various parts of your body, and space extending limitlessly in every direction.

Our style of attention both reflects and influences our brains’ cortical rhythms. Phase-synchronous alpha waves, for example, appear to be inherently stable. Much of the anxiety, fear, and depression that we experience—and repress—was never meant to remain in our bodies for extended periods. The feelings were meant to be experienced as needed, and then to dissipate. But in a heightened state of arousal, brought on by narrow and exclusive focus, these feelings are either tormenting us (because they are spotlighted) or chronically blocked from our awareness—avoided or repressed.

Freud appears to have understood the role of attention in therapy. He asked patients to lie down in a darkened room facing away from him, which cultivated a style of attention that promoted free association. When someone’s focus is opened, the unconscious also gradually opens and the contents become conscious. Freud wrote that he listened to his patients in a state of “evenly hovering attention.”

How we pay attention—and how our attention has been conditioned to react to situations and emotional stress—is at the root of more problems than we realize. Taking medication to mask emotions does not necessarily solve the problem, any more than disconnecting a warning light in your car gets at the cause of the mechanical problems the light is warning you about.

MANAGING OUR ATTENTION: CAN IT HEAL?


Fortunately, stress is beginning to get more attention from scientists. Bruce Perry, M.D., chief of psychiatry at Texas Children’s Hospital and a research professor at Baylor College of Medicine, has studied abused and neglected children and believes that humans have an exquisitely sensitive stress response as a matter of survival. “The prime directive of the brain,” he writes, “is to promote survival and procreation. The brain is ‘overdetermined’ to sense, process, store, perceive and mobilize in response to threatening information from the external and internal environments.”

There are, however, natural antidotes to this hyperactive stress response. Certain types of meditation tend to rhythmically entrain the brain and heal both body and mind. Music, chanting, and drumming are entrainment rituals that have served cultures for centuries. All stress antidotes, I believe, are effective because they help our attention become more diffuse and absorbed. The difference is that neurofeedback training tries to give people more direct control over that attentional shift. When people become aware of a more diffuse style of attention, they can locate where in their body their pain or other unpleasant experience is most intense: anxiety in their stomach or chest, for example. Resisting anxiety and pain takes energy. Diverting energy for resistance, energy a person needs to operate efficiently, can cause depression. But as anxiety is re-experienced in the diffuse style of attention, it can diffuse or disappear. Depression lifts as the need for repression wanes and the system normalizes.

Migraine headaches, irritable bowel syndrome, anxiety, and insomnia appear to be disparate symptoms, but often they are manifestations of one problem, a stressed nervous system, and all of these—and many more conditions—have been helped through attention training. We may ask how a single approach can be effective for disorders as varied as these, but often one medication is used for a similarly wide range of problems. For example, only a small fraction of prescriptions for the anticonvulsant Neurontin are written for seizures; it is also prescribed for chronic rage, migraines, restless leg syndrome, bipolar affective disorder, chronic fatigue syndrome, chronic pain, ALS (Lou Gehrig’s disease), and tremors. If indeed it has such broad efficacy, it is probably because it stabilizes the brain. Certain forms of attention do exactly the same thing, while other forms destabilize the central nervous system.

Our ability to operantly condition brain activity says something fundamental about us. Perhaps the brain is not so inadequately designed or so frequently deficient as the array of chemicals now being prescribed by psychiatrists would suggest. Could the problems that we face largely be operator error—functional rather than biochemical or structural? If we know how to create the appropriate attentional environment, the brain can self-regulate to take care of many of its own problems without outside interference. Ongoing return to homeostasis in the central nervous system is designed to be our normal state, but as a rule we do not know how to permit and maintain it. Changing the way we pay attention is one fundamental way to do that.

THE SPARK AND THE SOUP

For decades, research institutions and funding agencies have favored research on the brain’s biochemistry, what scientists once dubbed the “soup.” But brain chemistry is just part of the picture. The brain is not static, and the flow of neurotransmitters is not fixed. When we change the way we pay attention, we alter cortical rhythms—the spark—and in turn alter the chemical milieu—the soup— as well as the brain’s structure.

Spark and soup are inseparable. We should start funding investigations into fundamental alternatives to the reigning orthodoxy of only brain chemistry. Medication has a role to play, but when we are dosing ourselves and our children with a growing number of antidepressants, stimulants, and anticonvulsants, many without long-term safety or efficacy data to support them, it is time to review our scientific premises. Meanwhile, before resorting to sending trial-and-error doses of medications into the unimaginably delicate neurocircuitry, I think the gentler and less risky approach of brainwave training is worth a try. It has few contraindications as a therapy, and the only requirement is motivated involvement.

Neurofeedback is a work in progress. There is much room for improvement. Brain-wave training has the potential to be a first line of treatment for many physical, neurological, and psychological problems, but research is needed to determine for whom it will work best and why, and to improve its efficacy. The approach to operant conditioning of the brain described here is one of many. Hundreds of professionals are treating various conditions with neurofeedback and getting excellent results—all with scant research funding.

There has been some renewal of research in the field of brainwave training. For example, John Gruzelier, Ph.D., a professor of psychology and head of the department of cognitive neuroscience and behavior at the Imperial College School of Medicine in London, is using low frequency neurofeedback training to teach musicians at the Royal College of Music enhanced control over mental and emotional processes as an aid to performance. Research on using brainwave training in connection with Tourette’s syndrome, mild traumatic brain injuries, depression, and autism is taking place here in the United States.

Neurofeedback techniques are certainly not a panacea, but they are powerful and proven tools. Much clinical experience suggests that they can help patients become healthier by teaching them skills that effect long-term changes in the brain. It is to patients’ detriment that formal research aimed at understanding the operant conditioning of the brain’s electrical frequencies was abandoned. The time has come for scientists and clinicians to revisit the brain’s spark.


How Music Can Reach the Silenced Brain

Music is a complex stimulus, involving everything from pitch to rhythm, melody to volume. Consequently, it is not processed in a single area of the brain. We can see this in what is called “amusia,” in which a single musical skill is lost when a specific area of the brain is damaged—for example, loss of pitch perception resulting from lesions to the right temporal lobe. But while a component of music, such as pitch, may be processed in a specific region of the brain, the overall experience of music is a gestalt of perceptual and psychological processes occurring in synchrony and involving a spectrum of neurologic activity and brain regions.

We now know from clinical case studies that music can affect—in very specific ways—human neurological, psychological, and physical functioning in areas such as learning, processing language, expressing emotion, memory, and physiological and motor responses. How your brain perceives and processes music also differs depending on whether or not you are a musician. The effects of music raise intriguing questions about both early brain development and brain plasticity later in life.
Rhythm is, in fact, the primary property of music and is critical to human life in other ways. Plato defined rhythm as “the order in movement,” and the temporal structure of music (its movement) has suggestive parallels in human motor development. At five months of gestation, when a fetus’s neural circuits and auditory memory are forming, it experiences rhythm through the mother’s heartbeat and respiration. Immediately after birth, basic motor patterns begin to develop. While eating, crawling, and walking, each child finds a cadence, particular motor rhythms that will remain fairly consistent throughout life. Our natural and spontaneous body movements may be outward representations of inner timing mechanisms. Leon Glass, Ph.D., at McGill University, and other scientists are investigating the complex mathematics of physiological rhythms and how they interact to maintain our health. We know that an alteration in internal rhythm—cardiac arrhythmia, for example—can be the harbinger of ill health or death.

Some internal rhythms can come to match external rhythms. In effect, a rhythm in the external world is heard and internalized, evoking an answering rhythm within us. When we understand how and when external auditory rhythms, or cues, influence various internal timing mechanisms, rhythm can become a powerful therapeutic tool.

The effect of external rhythmic cues on motor function, as we saw with Sam, is a prime example of how this influence occurs. Brain-imaging studies show that an area in the prefrontal motor cortex will start to become active at precise intervals in anticipation of a sequence of motor activity, such as finger tapping at one-second intervals. The resiliency of this motor-timing mechanism is strikingly apparent in people whose motor control, or motor initiation, has been lost as a result of a stroke or Parkinson’s disease, but whose brains still respond to a rhythmic stimulus.
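In tapping studies of this kind, anticipation is typically measured as the tap-minus-cue asynchrony: a negative mean means the taps lead the metronome, i.e. the motor system predicts the beat rather than reacting to it. The sketch below is a generic illustration with invented timestamps, not data from the studies cited:

```python
def mean_asynchrony(tap_times, cue_times):
    """Mean tap-minus-cue difference in seconds (negative = anticipatory)."""
    diffs = [tap - cue for tap, cue in zip(tap_times, cue_times)]
    return sum(diffs) / len(diffs)

cues = [1.0, 2.0, 3.0, 4.0]        # metronome clicks at one-second intervals
taps = [0.97, 1.96, 2.98, 3.95]    # hypothetical taps slightly ahead of each cue

print(mean_asynchrony(taps, cues))  # negative: taps anticipate the beat
```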

In neuromuscular diseases affecting the ability to initiate and control movement, external rhythm seems to supply the timing information that makes movement possible. For Sam, even singing the song to himself provided the required neurologic benefit, the external cue. Writer and neurologist Oliver Sacks, M.D., author of The Man Who Mistook His Wife for a Hat, eloquently describes a similar response to music in one of his post-encephalitic patients, who had great difficulty walking alone but walked perfectly if someone walked with her—or could time her steps to music. She said: “Whether it is others, in their own natural movement, or the movement of music itself, the feeling of movement, of living movement, is communicated to me. And not just movement, but existence itself.” Sacks studied this phenomenon in the EEGs of some of these patients when they merely imagined a specific piece of music. Although their regular EEGs were very abnormal—the brain was slow on one side while convulsive on the other, for example— when they played the piano or simply imagined a piece of music, their EEGs became more normal.

WHY MOVEMENT RESPONDS TO RHYTHM
Michael Thaut, Ph.D., and his colleagues at Colorado State University suggest that the sensitivity of our motor systems to influences from sounds may have developed during human evolution so we could use the way we process what we hear to enhance our ability to organize and control our movements. Our basic auditory-arousal mechanisms (for example, our movements in reaction to a sudden loud noise) operate primarily through the amygdala in the brain’s limbic system and may have originated in adaptive evolutionary processes, namely, the fight-or-flight response. In any case, the auditory system has connections to the brain stem, midbrain, and higher cortical structures, and normal motor function requires that these subcortical and cortical regions work in concert with each other.

The basal ganglia, a brain region affected in Parkinson’s disease, provides a link to still other areas of the brain that connect mental processes and the initiation of movement. While the thought or wish to move depends on higher cortical processing, the actual ability to move depends on lower brain regions. If the higher cognitive processes that can initiate movement are damaged in traumatic brain injury or stroke, the requisite will to move may nevertheless get a “jump-start” by stimulating motor nerves that are still functional. Does the patterned auditory cue supplied by musical rhythms excite the more primitive motor areas first, and only then recruit or drive higher cortical circuits into action?

New evidence from studies by Wen Jun Gao, Ph.D., and Sarah L. Pallas, Ph.D., at Georgia State University suggests that learning, or at least the organization and development of cortical circuits in the brain, is influenced by patterned sensory activity, such as listening to sound clicks presented at specific time intervals. If such sensory signals turn out to enhance neural development, what role does rhythm—patterned auditory stimulation—play in the restimulation of these networks once they have been laid down? In patients like Sam, regaining physical function began on a spontaneous, unconscious level, indicating that the subcortical areas of his brain were being activated before the restoring of the higher cortical areas involved with the thought and the intent to initiate movement.

WORDS SPOKEN AND SUNG
Because music has parallels to spoken language, much research on music and the brain has zeroed in on the similarities and differences between them. The similarities could be clues to more successful methods of using musical cueing to stimulate similar language responses in people with brain injuries. One remarkable example of the functional difference between music and language, however, occurs in people who have suffered a left-side stroke, resulting in a type of aphasia where verbal comprehension still exists but the ability to speak or find the right words is lost. In these cases, the brain lesion is often located in what is called Broca’s area; speech is slow, not fluent, and hesitant, with great difficulties in articulation. Yet, despite the loss of speech, many people with this type of aphasia can sing complete lyrics to familiar songs. This has usually been attributed to the separation of function of the left and right hemispheres of the brain, speech being dominant on the left and singing on the right.

Because many clinicians assume a complete separation of function between singing and speaking, they give little attention to the potential for using music to aid speech. But there are several cases in which a patient has recovered speech through the systematic use of rhythmic patterning, leading first to recovery of familiar lyrics and words embedded in songs, then to self-initiation of normal, fluent speech. In each case, however, this remarkable change had been attributed not to the music but to spontaneous recovery during the early months after the stroke.

A similarity shared by music and speech is what we call “prosody,” which includes the elements of stress, pitch direction, pitch height, and intonation contour, or inflection. People with nonfluent aphasia can perform a type of prosodic speech that includes the inflection and contour of previously known phrases. This speech differs, however, from propositional speech (which includes verbal expression of new thoughts and ideas) in its rate, discrete pitch, and increased predictability. Aniruddh D. Patel, Ph.D., a scientist at the Neurosciences Institute in California, theorizes that rhythm and song, which are inherently predictable, may create a “supra-linguistic” structure that helps cue what is coming next in an utterance.

Brain-imaging studies by Dr. Pascal Belin, of the Service Hospitalier Frédéric Joliot in France, and more recently by Dr. Burkhard Maess at the Max Planck Institute of Cognitive Neuroscience, used PET and MEG scans to determine that areas peripheral to the left language regions of the brain are involved in processing the singing of single words. Additional imaging studies suggest that some aspects of music and language are processed in both the right and left sides of the brain. In many patients who are able to carry over speech techniques from music, success seems to come from their increased ability to attend to sounds and to initiate them, perhaps because parallel mechanisms for these functions have been called into play by music and singing.

Just as rhythm can affect motor function and the initiation of movement, a familiar tune or melody can reawaken in persons with dementia, or with traumatic brain injury, seemingly lost memories and feelings. We are so much the sum of our experiences and memories that we cannot help associating each new experience with something that came before it. Imagine how the world must seem to someone with no memory link from past to present. But sometimes music can provide a bridge.

We do not know specifically how music affects memory, but most of us experience that effect every time we hear a favorite song. Indeed, music is capable of arousing in us deep and significant emotions. Memories of music can be so well preserved that the merest fragment of a melody stimulates recall of the song’s title or lyrics. Emotionally charged responses to familiar music are probably the result of connections from the auditory nerve to key limbic structures in the brain. The limbic area, which is associated with emotion, includes the olfactory cortex, amygdala, and hippocampus. The amygdala gets its input from our senses and directly affects our autonomic responses; it is also involved with our moods through interconnections with the frontal cortex and thalamus. The hippocampus plays a significant role in storage of factual information, including conscious (declarative) memory.

Because memories persist when they have personal significance, the emotional content of music seems to be processed immediately, even by people with severe dementia. Is this a possible pathway we can use to reach their sense of self? Ernest G. Schachtel said in 1947 that memory, as a function of the living personality, can be understood only as the capacity to organize and reconstruct past experiences and impressions in the service of present needs, fears, and interests. Just as there is no such thing as impersonal perception and impersonal experience, there is no impersonal memory. Thus, familiar songs may serve as cues to recall memories. People with dementia, who may have lost the capacity to process many types of information, including the ability to identify a song, may still respond to that song spontaneously and emotionally. In “Music and the Brain,” Oliver Sacks writes that “it is the inner life of music which can still make contact with their inner lives which can awaken the hidden, seemingly extinguished soul; and evoke a wholly personal response of memory, associations, feelings, images, a return of thought and sensibility, an answering identity.”

Observing how people with dementia respond to music gives us an inkling of how remarkable and instantaneous some of these subcortical processes are. But if, as pointed out earlier, the brain’s processing of music is complex, involving many areas, what specific component of music does a person perceive and process to allow for these immediate responses?

In some instances, factual memories return. New research is shedding light on how this may happen. Anne Blood, Ph.D., Robert Zatorre, Ph.D., and their colleagues at the Montreal Neurological Institute investigated the brain mechanisms involved in emotional responses to music. They found that regions previously identified with pleasant or unpleasant emotional states (with the exception of fear) were activated in the paralimbic brain regions, rather than areas normally associated with music perception. Studies like this reinforce the concept of musical processing as a “whole brain” phenomenon. With the proper musical cue, we may gain access to another system, with enough overlap to jump-start similar areas that are now dysfunctional. That is, when higher cortical processing is compromised, there may be another way into the brain.

HARNESSING MUSIC’S POWER
Perhaps if we understood more about the relationship between the auditory system and other aspects of human cognitive function, we could reach more people like Sam, Mary, and Sally. For those with neurologic impairments and diseases like Parkinson’s or multiple sclerosis, music therapy is only beginning to be recognized as a promising treatment. In its “Primer on Reimbursement,” the American Music Therapy Association notes that music therapy is recognized as a viable treatment option, including in federal law and by accrediting agencies. It is included in the Older Americans Act Amendments of 1992 and the Individuals with Disabilities Education Act, and recognized by the Rehabilitation Accreditation Commission and the Joint Commission on the Accreditation of Health Care Organizations. Even so, the availability of music therapy for the whole range of situations where it could help is gravely limited.

Although much is being discovered about music’s effects on the brain’s functioning, we have no cohesive, detailed theory of how this takes place. For example, what specific element of music aids in the recovery of language in a person with aphasia? Is it the articulation and rhythmic cueing of familiar speech patterns? Or does singing the lyrics stimulate and improve word retrieval for normal speech? How, specifically, does music affect retrieval of memories? When stimulated by music, what role do lower brain areas (the cerebellum, reticular formation, and others) have in the upward activation of higher cortical mechanisms?

The great Russian neuropsychologist Alexander Luria observed that what we know of brain function is based on what has been lost and what remains following a traumatic brain injury. Music therapists who do neurologic rehabilitation know that it is almost impossible to lose all aspects of music perception. Knowing how the brain processes the elements of music—rhythm, pitch, harmony, timbre, tempo, contour, loudness, spatial location, and melody— as well as associations and memories, and where overlapping or parallel regions share this processing, could support increased use of these components of music early in treatment, the better to take advantage of brain functions that have been preserved.

With the advent of new imaging techniques, we know that the brain is a dynamic, ever-changing system of interconnecting neurons that work in concert to produce our complex, dynamic responses to the world around us. The discovery that new networks and connections may be formed in the brain every time we learn a new skill has implications not only for early childhood development, but also for potential recovery of function after injury.

I will never forget one patient, admitted for short-term rehabilitation when he was in the early stages of dementia. He no longer could dress himself. He seemed not to have the fine motor skills to button his shirt, yet he could play the opening of the “Hungarian Rhapsody” on the violin. Both skills obviously had been used almost every day throughout this man’s life, yet he had lost one and not the other. How can rehabilitation take advantage of such similar but subtly different functions?

It is highly unlikely, for example, that a symphony conductor and a tennis player would have the same motor skills and memories for movement in the left and right hands and arms, yet standard physical and occupational rehabilitation practices would treat them as identical. Conductors, at least the good ones, must be able to give two simultaneous signals that may convey completely different messages—for example, cueing the violins while setting the timing patterns for the percussion section. They will tell you that they can separate the functioning of their left and right sides. In musicians with these overlearned motor skills, certain motor neural networks and overlaying motor areas in the brain may remain intact even after a stroke, and could aid in earlier recovery of function or even development of compensatory mechanisms. But to help, we simply have to know more.

Both basic research and clinical investigations on the underlying brain mechanisms stimulated by different elements of music will continue. It is fairly safe to predict that we will discover that certain elements of music are processed in “primitive” brain regions, including some that are highly resistant to the ravages of traumatic injury and disease. Then we must ask: How do these deeper regions of the silenced brain, reached by rhythms or melodies of music, in turn stimulate the brain’s higher regions (or bypass them) so as to switch on motor, cognitive, or emotion-related functions that had appeared lost forever? The answers will come, though no one can predict how rapidly, and then we may see more often— even routinely—what now seems (and is) a miracle: the man struggling to walk will dance; the haunted, weeping woman who walks the halls will rejoin us, singing; and the mind drained of its memories will know the comfort of a familiar old tune.
 
Keep on going...


The Dancing Brain

How can watching one dance performance, whether classical ballet or the newest modern choreography, be so engaging—even thrilling— and watching another leave us indifferent? Dutch choreographer and researcher Ivar Hagendoorn argues that contemporary neuroscience points to the answer. The limbs move, but it is the brain that dances.

HOW SLOW BRAINS COPE WITH FAST MOVEMENTS
An article in Nature by vision researcher Romi Nijhawan of the University of Sussex shows the consequences of neural processing delays for the perception of moving objects. Once a light particle hits the retina, it takes 50 to 100 milliseconds to be transformed into an electrical impulse and reach the visual cortex. This interval may seem negligible, but in this short time a car traveling at 65 mph will cover 7 to 10 feet. Several scientists have proposed that to make up for this delay the brain somehow extrapolates the moving object’s trajectory. By shifting the percept—that is, the object’s neural representation—forward, its perceived position would coincide with its actual position. Nijhawan suggests that this extrapolation takes place at the starting line of the process of perception—in the retina. Although this is still debated, many scientists hold that some kind of prediction takes place at various stages of visual processing, including the earliest.
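The arithmetic behind this delay is easy to check. A minimal back-of-the-envelope sketch, using the 65 mph speed and 50-to-100-millisecond delay quoted in the text:

```python
# Distance a car covers during the retina-to-cortex processing delay.
# Figures from the text: 65 mph, with a 50-100 ms neural delay.
MPH_TO_FT_PER_S = 5280 / 3600  # 1 mile = 5280 ft, 1 hour = 3600 s

speed_ft_s = 65 * MPH_TO_FT_PER_S  # roughly 95.3 ft/s

for delay_ms in (50, 100):
    dist_ft = speed_ft_s * delay_ms / 1000
    print(f"{delay_ms} ms delay -> {dist_ft:.1f} ft traveled")
```

At the upper end of the delay range the car covers about nine and a half feet, which is why an uncorrected percept would consistently lag behind the world.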

In the French neuroscientist Alain Berthoz’s book The Brain’s Sense of Movement,1 the theme is that both perception and action are essentially predictive. In his view, the brain acts as a simulator, creating mental models of the body and the world, models that it constantly updates with newly arrived information from the senses.

Some of the best examples of the predictive nature of motion perception come from sports. Sports psychologists showed in the early 1980s that a tennis serve is too fast for an opponent to wait until the ball has been hit to decide which direction it is going, to his forehand or backhand (and this was before the “power serves” of today). Players nevertheless frequently manage to return the ball. They do so by reading from the serving player’s movements where he is going to hit the ball, so they move in that direction even before the racquet touches the ball.

In a recent elegant study, the psychologists Michael Land of the University of Sussex and Peter McLeod of Oxford University showed how cricket players solve the same problem. The researchers measured the batsmen’s eye movements as they prepared to hit an approaching ball. They found that the eyes monitored the ball shortly after its release by the bowler, then made a predictive movement to where they expected it to hit the ground, waited for it to bounce, and then tracked its trajectory for 100 to 200 milliseconds afterward. Using a combination of statistical measurements and physical calculations, Land and McLeod showed that “information provided by these fixations may allow precise prediction of the ball’s timing and placement.”

From my brain’s viewpoint, watching a dancer on stage is virtually the same as a tennis player’s watching his opponent’s serve. But how did this help me begin to understand my fascination with dance?

THE BEAUTIFUL AND THE SUBLIME
In his analysis of aesthetic judgment, 18th-century German philosopher Immanuel Kant made a distinction between the beautiful and the sublime. Beauty, according to Kant, is the feeling we experience when we discover a harmonious order in art or in nature that appeals to our mind’s own drive towards creating order. This feeling may be instantaneous, as when we look at a perfectly symmetrical sculpture, or the result of careful analysis, the way a Bach fugue becomes more beautiful the more we listen to it.

Sublime also refers to a feeling—or, better, a state of mind—but one that is characterized by initial discord rather than harmony. Faced with an immense object, a skyscraper or the Grand Canyon, or a powerful phenomenon like a raging storm, our senses are overwhelmed, unable to comprehend the object or phenomenon in its totality. According to Kant, this causes an initial feeling of displeasure or even fear. Shortly after this moment of disorientation, the self regains its composure as it succeeds in framing the information that overflowed the senses. It is at this moment that a feeling of pleasure sets in, as the self realizes that it has survived and, by implication, is more powerful than the vast object or event.

When reading Kant alongside the work of Berthoz, Nijhawan, and other neuroscientists studying perceptual anticipation, I realized that this reasoning may also apply to our perception of movement. If the brain fails to predict correctly the unfolding of a movement, we are taken by surprise. We often let out a sigh if a tennis ball hits the rim of the net, or if we see someone stumble. If the movement should continue to elude us, we become intensely aware of a “presence” (the movement) that cannot be wholly represented neurally or internally. The brain struggles to keep up with the movement, but before a motion percept (the internal representation of a movement in our brain) has been created, a buildup of new stimuli is already queuing to be processed. It is at this moment that the state of mind Kant described as sublime has a chance of arising. The brain responds to the flood of motion stimuli by focusing attention on the object’s movement, blocking other stimuli, and increasing processing in the relevant brain areas. In my article “Some speculative hypotheses about the nature and perception of dance and choreography,” I have outlined some of the possible neural mechanisms involved in this chain of reactions.2 Essentially the responses are the same as when the brain prepares the body for action, such as returning a tennis serve.

By reversing the argument, we can also account for beauty in the Kantian sense of harmony between mind and object. If the movement trajectory predicted by the brain coincides with the actual movement, we are filled with pleasure, which we ascribe to the movement that gave rise to it by calling the movement beautiful or graceful. In summary, when our expectations are fulfilled, when the brain’s simulations are correct, we delight in grace or elegance; when they are challenged, we are pleasantly alarmed.

This speculative account offers possible leads for research. It would be interesting to measure the brain’s responses while subjects watched a live dance performance (using EEG) or one on tape (using fMRI) and ask them afterward to describe their subjective experience. My reference to the Kantian notions of beauty and the sublime only serves the goal of showing that some neural mechanisms can be related to aesthetic concepts. Actual experience is a combination of feelings, and beauty and the sublime are but aspects of a rather more indeterminate experience. For instance, we may find something interesting, yet might hesitate to call it beautiful.

The idea that the appreciation of dance has something to do with the interplay of expectations and their fulfillment has antecedents in other fields. For example, the same interplay has been proposed as an explanation for music and for humor. Indeed, what I here call the sublime may be similar to what some people describe as “chills” or “shivers down the spine” at certain moments during particular pieces of music. Neuroscientists Anne Blood and Robert Zatorre recently investigated the neural mechanisms of this phenomenon in a neuroimaging study at the Montreal Neurological Institute of McGill University. They found that the intensity of the “chills” correlated with increased activity in brain regions associated with emotion and arousal. Interestingly, the map of activation overlaps considerably with the one that, based on a survey of the literature, I myself have sketched for the perception of dance. This may also explain why music and dance mix so well: A buildup of expectation on an auditory level can find its realization on a visual level. The final moments in many ballets are either a concurrence of exaltation in both sound and movement or the opposite, a slow fading away.

APPARENT MOTION
How does the brain anticipate movement when watching dance? An article from 1983 by cognitive psychologist Jennifer Freyd showed that still photographs of an object in motion—for instance a falling glass— convey information about its dynamics. This phenomenon is called “implied motion.” What is more, watching such “action stills” activates the brain regions associated with perceiving motion, as was recently shown by Zoe Kourtzi and Nancy Kanwisher of the Massachusetts Institute of Technology. Of course, this will not surprise dance audiences, who know that a great dance photo somehow captures the performance. But cognitive neuroscience tells us how: The camera freezes the movement, but in such a way that it appears to continue, creating a sensation of movement in the viewer.

We experience motion not only in dynamic single images but also when a sequence of still frames is rapidly displayed. This is the principle behind film, of course, to which the vintage term “motion picture” is still applied. Consider a simple film consisting of two frames, one with a dot to the left, the other with a dot to the right. If the two frames are shown sequentially, the dot appears to move from left to right and back. Scientists believe this illusion occurs because the brain chooses the shortest path to connect the two images. However, as shown in a classic study by Freyd and Maggie Shiffrar, if the dots are replaced by images of a human body in two different positions, the brain chooses an anatomically possible route to connect the two positions. This suggests that the brain’s implicit knowledge of the movements that the body is capable of making somehow influences perception. An intriguing thought! This is corroborated by a recent neuroimaging experiment by neuroscientist Jennifer Stevens and her collaborators, which showed that motor areas in the brain become active when people watch two rapidly alternating body positions, but only if the connecting movement is physically possible and the interval between the two frames long enough for the unobserved movement to have been made.

Laboratory experiments like these bring out capabilities that the brain evolved to solve everyday challenges. In dance, as in other areas, sometimes a movement is so quick that the brain processes the beginning and end positions more quickly than the movement connecting them. Sometimes, too, part of the dancer’s movement is blocked from view, for instance, because a limb disappears behind the body or because the body as a whole disappears behind one of the stage props. The brain will then interpolate between the positions it did perceive and retrospectively infer the connecting movement. Enemy in the Figure (1989), a stunning ballet by William Forsythe, features a wavy-shaped wooden panel positioned diagonally in the middle of the stage. In combination with a mobile light pushed along by the dancers, it offers a fascinating visual spectacle. Sometimes dancers disappear behind the panel, leaving only the shadows of their movements visible; at other times one side of the stage is brightly lighted while the other is shrouded in semi-darkness. This led Anna Kisselgoff, dance critic for the New York Times, to write that “[the dancers on the dark side of the stage] could be barely made out but their movement was sensed. One could regard them as images caught in the web of distant memory: present but not visible.”3

Research discoveries about how the brain “fills in the blank” between movements, using only anatomically feasible actions, lend support to theories that propose a common framework for perception and action. According to this view, perception is constrained by the properties and limitations of the observer’s own motor system. To name one rather peculiar finding in this respect, psychologists Günther Knoblich and Wolfgang Prinz of the Max Planck Institute for Psychological Research in Munich, Germany, recently showed that people are able to recognize their own drawings from a set of samples, even when they were blindfolded when drawing them. This strongly suggests that action planning somehow contributes to perception.

MOTOR IMAGERY
Everybody knows what it is like to imagine an object. You do not even have to close your eyes to picture an elephant or the Eiffel Tower, though blocking sensory stimuli helps. This is what is called visual imagery. Motor imagery is the similar process of imagining a movement. But there is a key difference between visual and motor imagery. In the former you are a spectator, whereas in the latter you are an actor. You perform the movement virtually, in your mind.

The precise nature of motor imagery is debated. According to the French neuroscientist Marc Jeannerod, motor imagery corresponds to the covert activation of the motor system. Motor areas of the brain are activated, but the actual motor command is inhibited. The inhibition is likely controlled by the prefrontal cortex. In a much quoted article from 1986, the French neurologists François Lhermitte, Bernard Pillon, and Michel Serdaru showed that some patients with damage to the prefrontal cortex compulsively imitate movements performed in front of them. This suggests that the mechanisms that normally inhibit motor output are impaired in these patients, and therefore motor images are instantly translated into motor commands.

Motor imagery is peculiar. Experiments have shown that it takes people about as long to imagine walking somewhere as it would to walk there. What is more, it takes longer when they imagine carrying a heavy box! It is unknown whether people also get tired from imagining movements, but their breathing does speed up slightly when they imagine themselves running.

We engage in motor imagery whenever we prepare, intend, mentally rehearse, describe, or listen to a verbal description of a movement: for instance, when someone gives us directions. In fact, mentally rehearsing movements enhances actual performance, as sports psychologists have shown. Intuitively it makes sense that the same motor areas are activated regardless of whether you imagine or actually perform a movement. But could it be that, when watching movement—let us say, dance—the brain also engages in a form of motor imagery?

Some years ago, French cognitive neuroscientists Jean Decety and Julie Grèzes did a positron emission tomography (PET) experiment in which people observed a series of hand and arm movements. Participants in the study were instructed either simply to watch the movements or to watch so that they would be able to recognize or imitate the movements after the PET session. This allowed the researchers to investigate the influence of each task on motion processing. It turned out that motor areas were activated no matter what the instructions, but the activation was stronger when participants watched to prepare to imitate the movements.

In another study, Giacomo Rizzolatti’s group at the University of Parma showed that observing human movements activates the same muscle groups and motor circuits in the brain as actually executing the movements. So maybe watching movements can be tiring after all. “Why do you look so exhausted?” “I went to see a dance performance.”

If motor areas are activated when we watch movements, then we could say that when watching dance, the brain dances. Some caution is warranted, however; the movements in the French study lasted for as little as four seconds and involved only the arms and hands. The results are intriguing, nonetheless. Anna Kisselgoff’s review of William Forsythe’s Enemy in the Figure may have been closer to the truth than she realized.

MIRROR NEURONS
Assuming that the brain engages in motor imagery when we are watching human movement, how and where do motor and visual information come together, and how does knowledge embedded in the motor system influence how we process visual motion?

In 1996, Italian neuroscientist Giacomo Rizzolatti and his colleagues at the University of Parma discovered a group of neurons in the premotor cortex of a monkey that become active both when the monkey performs an action and when it sees another monkey take similar action. These so-called mirror neurons have become the object of intense speculation and research. For one thing, they could provide a neural bridge between action and perception. If they do, they could also provide part of the neural basis for imitation and the understanding of intentions, the latter because by mentally simulating another’s actions you can perhaps infer his intentions.4 That the area in monkeys where the mirror neurons were discovered may correspond to Broca’s area in the human brain—one of the brain regions associated with language—added to the excitement. Could mirror neurons somehow be associated with the evolution of language from gestural communication?5

Over the past few years, neuroimaging studies have yielded tentative evidence that a similar mirror system exists in the human brain. Marco Iacoboni and his colleagues at UCLA found that if processing in Broca’s area was temporarily disrupted with transcranial magnetic stimulation (application of a magnetic field), imitation was impaired—but simple execution of a motor task was not.

In 2001, I was invited to contribute to a television documentary on mirror neurons, for which I created a short dance duet. (Here I am getting a bit ahead of my story by revealing that my explorations of dance and the brain eventually led me into choreography.) The duet began with one dancer copying the movements of the other dancer, then gradually shifted to the dancers changing lead and copying each other’s movements but transferring them to another limb, and ended with the dancers taking general cues from each other that were communicated in movement. The piece therefore symbolized a possible route from imitation to gestural communication, but it also enacted it. Although the concept was set and rehearsed in advance, the actual performance was improvised.

HOW TO ENGINEER AUDIENCE ATTENTION
It may sound like a truism to say that choreographers draw the audience’s attention primarily to the dancers’ movements, not, for example, to the costumes they are wearing. But it also reveals something about the brain and dance. Attention refers to the various processes by which the brain selects among internal and external stimuli. Attention can be “caught” by a sensory event or actively directed at events happening inside or outside our brain. By orienting the senses to the source of a sudden change in the environment, by filtering the information that reaches the senses, by searching for specific clues in the environment, and by preparing for the occurrence of an expected event, attention modulates perception.

Now, catching attention is one thing, keeping it another, especially because the brain exhibits what is called “inhibition of return,” a biological bias against looking back to a previously attended situation. This may occur because when the brain first catches sight of something, it extracts all the available useful information. It follows that to catch our attention something has to stand out. To keep our attention, it either has to be more prominent than what might distract us from it or provide for ongoing novelty by altering its appearance, intensity, location, and so forth. Motion therefore has high potential for catching attention.

Under normal circumstances, the brain is likely to focus on the most promising incoming signals, while ignoring all other input. Once an object and its motion characteristics have been detected, attention is likely not to stick with motion but either to shift to something else or focus on other features to try to identify the object: “Is it a fly or a wasp?” Once the object is identified, attention may turn to something else (“It’s a fly.”) or focus exclusively on the movement (“It’s a wasp!”), in which case all other stimuli are temporarily ignored.

It follows that if attention is to focus on movement and movement alone, the brain’s natural tendencies in perceiving motion should be enhanced. If the brain is inclined to turn to other features once an object’s motion characteristics have been identified, the movement should be such that this tendency is suspended, while at the same time other features (color, patterning) are played down. The brain will then quickly turn away from these features and back to the movement. So if you ever wondered why in many dance performances the costumes are plain and simple, this could be the answer.

How can movement itself be made interesting enough to hold our attention? Choreographers intuitively know the answer. One way of emphasizing movement is to build up and play with the brain’s natural tendency to form an expectation of what is coming next. The human body provides an excellent device for this kind of play and by implication for bringing about a sensation of movement. In dance, at any moment the potential is there for the movement to stop, expand, contract, or continue in any other direction. Dance demands an agility of perception equal to the agility of the dancer. The dancer has a wide range of parameters to vary: speed, direction, the number of limbs involved, rhythm, flow, and so on. By contrast, racing is almost entirely about speed, rather than movement, while in gymnastics the emphasis is on the agility of the body and perfection in the execution of a limited number of movement exercises.

PRINCIPLES OF AESTHETIC EXPERIENCE
Of course, there is more to dance than movement alone, just as there is more to music than sound. In 1999, neuroscientists Vilayanur Ramachandran and William Hirstein published an influential article, “The Science of Art: A Neurological Theory of Aesthetic Experience,” in which they advanced eight supposedly universal principles for the perception and appreciation of art:

- Enhancement of features that deviate from average (referred to as the “peak-shift effect”);
- Grouping of related features;
- Isolation of a particular visual clue;
- Contrasting of segregated features;
- Dislike of unnatural perspectives;
- Perceptual problem solving or deciphering ambiguous scenes;
- Metaphor;
- Symmetry.
The peak-shift principle:
This psychological phenomenon is best known from its application in animal discrimination learning, where animals sometimes respond more strongly to exaggerated versions of the training stimuli. For instance, a rat trained to discriminate a square from a rectangle, and rewarded for recognizing the rectangle, will respond ever more frequently to the rewarded shape, to the point that it responds at a higher rate to a rectangle that is longer and narrower than the one it was trained on. Such an exaggerated stimulus is called a supernormal stimulus. The fact that the rat responds more strongly to a 'super' rectangle implies that it is learning a rule.
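The rat experiment can be captured in a toy computational model, following the classic account of peak shift as an excitatory generalization gradient around the rewarded stimulus minus an inhibitory gradient around the unrewarded one. The Gaussian widths and the aspect-ratio scale below are illustrative assumptions, not fitted data:

```python
import math

def gaussian(x, mu, sigma):
    """Generalization gradient centered on a trained stimulus value."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def response(aspect_ratio, s_plus=2.0, s_minus=1.0, sigma=0.8):
    # Excitatory gradient around the rewarded rectangle (aspect ratio 2.0)
    # minus an inhibitory gradient around the unrewarded square (1.0).
    return gaussian(aspect_ratio, s_plus, sigma) - gaussian(aspect_ratio, s_minus, sigma)

# Scan rectangles from the square (1.0) to very elongated shapes (3.99):
ratios = [1.0 + 0.01 * i for i in range(300)]
peak = max(ratios, key=response)
# The strongest response falls at a rectangle *longer* than the trained one,
# i.e. peak > 2.0 — the peak has shifted away from the unrewarded square.
```

In this sketch the net response keeps rising past the trained value of 2.0 because moving further from the square sheds inhibition faster than it loses excitation, which is the standard explanation of why the exaggerated stimulus wins.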

This effect can be applied to human pattern recognition and aesthetic preference. Some artists attempt to capture the very essence of something in order to evoke a direct emotional response. In other words, they try to make a 'super' rectangle to get the viewer to have an enhanced response. To capture the essence of something, an artist amplifies the differences of that object, or what makes it unique, to highlight the essential features and reduce redundant information. This process mimics what the visual areas of the brain have evolved to do and more powerfully activates the same neural mechanisms that were originally activated by the original object.

Some artists deliberately exaggerate creative components such as shading, highlights, and illumination to an extent that would never occur in a real image to produce a caricature. These artists may be unconsciously producing heightened activity in the specific areas of the brain in a manner that is not obvious to the conscious mind. It should be noted here that a significant portion of the experience of art is not self-consciously reflected upon by audiences, so it is not clear whether the peak-shift thesis has any special explanatory power in understanding the creation and reception of art.

In the temporal arts, such as music, dance, and cinema, I suggest we could add another principle: the elicitation of patterns of anticipation.

The main principle is the peak-shift effect. Ramachandran and Hirstein claim that exaggerating the essential features of an object will create a quicker and stronger response in the brain of the observer. This is what cartoonists aim at when they draw a caricature, but it may also apply, for example, to Van Gogh’s sunflowers, which by their color evoke the experience of seeing a sunflower more than a naturalistic rendering would do. It may be more difficult, however, to point at a peak-shift effect in something such as Renaissance art or 17th-century landscape paintings, just as it may be difficult to define the essential features of an object. I therefore suggest that an artist does not necessarily exaggerate an essential feature, but exaggerates a feature that then becomes essential in the final creation.

Even though Ramachandran and Hirstein draw their examples from the visual arts, we can apply their principles to dance. Contrast, for instance, can be achieved by the opposition of group and individual, left and right, sitting and standing, and so on. In many Balanchine ballets, the contrast between male and female dancers is enhanced through the costumes: The women wear white pants and black tops, whereas the men wear black pants and white shirts. The perfect synchronization in much classical ballet, as well as in many modern dance performances, can be said to create a peak-shift effect in the grouping of related features. The feet, eye, and finger movements in traditional dances from India are an example both of the isolation of a particular visual clue and of metaphor in the cultural meaning of the gestures.

A problem with this approach to dance analysis is that the moment we move from a simple laboratory experiment to a full dance performance—whether a 10-second solo or an evening-long group piece—so many mechanisms interact that it becomes difficult to untangle the web of connecting lines of explanation. The opening moments of Balanchine’s Symphony in Three Movements may serve to illustrate this. Sixteen women are lined up on a diagonal, vigorously swinging their arms. Then suddenly a man jumps onto the stage from the wings with a ferocious leap. In this small fragment, we recognize the interplay of various principles I have described: group vs. individual, male vs. female, left vs. right, contrast in direction and use of space (the male dancer does not walk onto stage, he jumps), and breaking our expectation—even if we know the male dancer is going to jump on stage, it still catches our attention and inadvertently we will look to the side.

Perhaps the best example of a radical disruption of our expectations comes at one of those unfortunate moments in a ballet when a dancer falls. Instantly, a sigh goes through the audience, while those who were focusing on something else wonder what happened. What is more, because we know that falling hurts, we can empathize with the dancer, a feeling that has been hypothesized to be mediated by mirror neurons.4 But here is the interesting part: A choreographer who understands this can put it to creative use by having a dancer intentionally drop to the floor. One of ballet’s popular legends is that when one of the dancers fell during the rehearsals for Serenade (1935), Balanchine liked the effect so much that he decided to incorporate the fall in the ballet.

This way of analyzing dance also allows us to explain why certain dance performances are boring—for example, because they do not hold our attention by varying from our expectations. Compare your own experience, or what you read in a dance review, with how I have described the processes of attention and expectation. Remember, too, that expectations work not only on the micro scale of individual movements, but also on the more global scale of dance phrases and even whole ballets.

FROM IDEA TO PRACTICE
In 1995, I met William Forsythe, the choreographer and artistic director of the Frankfurt Ballet whose work had made such an impression on me when I had first discovered dance. He encouraged me not just to do research on dance but also to give my ideas a physical reality. It took a while to overcome my qualms, since I am a mathematician and philosopher by education, and at the time was working as a quantitative analyst for an investment bank. Yet I had felt the desire to choreograph since I first saw a dance performance.

When asked how I can choreograph without formal dance training, I respond that what matters is to get the dancers to do what I want them to do. Demonstrating a movement sequence and having the dancers imitate it is but one of many ways of achieving that goal. I make extensive use of cognitive strategies for generating movements and creating a hierarchical or sequential structure within a series of movements. For example, “change the leading movement” means that after the dancer moves, let us say, the left leg, she will move the right arm and subsequently the head, the left shoulder, the hips, and so on. In developing these strategies, I take into account what is known about the workings of the motor and visual systems and the principles of aesthetic experience outlined above.7 So “change the leading movement” plays with the tendency of the brain to extrapolate a motion trajectory by abruptly breaking off a trajectory and continuing with another limb.
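As an illustration, the "change the leading movement" strategy described above can be sketched as a simple generative rule: after each movement, a different body part takes the lead. The body-part and movement-quality vocabularies below are hypothetical placeholders, not drawn from any actual dance notation:

```python
import random

# Illustrative vocabularies; any real rehearsal would use richer material.
BODY_PARTS = ["left leg", "right arm", "head", "left shoulder", "hips", "right leg"]
QUALITIES = ["expand", "contract", "rotate", "reach", "drop"]

def change_leading_movement(n_steps, seed=0):
    """Generate a movement phrase in which no body part leads twice in a row."""
    rng = random.Random(seed)
    phrase = []
    previous = None
    for _ in range(n_steps):
        # Pick any body part except the one that just led the movement.
        candidates = [part for part in BODY_PARTS if part != previous]
        part = rng.choice(candidates)
        phrase.append((part, rng.choice(QUALITIES)))
        previous = part
    return phrase

phrase = change_leading_movement(5)
```

The point of the sketch is only the constraint structure: the rule breaks off each motion trajectory and hands it to another limb, which is exactly the tendency-violating effect the text attributes to the strategy.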

When I started reading about brain science, I bought some popular science books. I quickly found myself yearning for more. Before long, I was reading graduate neuroscience textbooks and then scientific journals. In Cognitive Neuroscience: The Biology of the Mind, by Michael Gazzaniga, Richard Ivry, and George Mangun, I read the neurological explanation of the childhood game of trying to pat your head while rubbing your stomach, an example of what is technically known as dual task interference. Our difficulty in performing this task, or in rapidly switching between patting with one hand and rubbing with the other, is probably due to the competition in our brain between the two motor tasks, one of which involves the left hemisphere and the other the right. It seemed clear that this example was a specific case of a more general class of movements and, as such, could be used as a strategy for producing movement in choreography. So the next time I was in the studio with one of my dancers, we explored some extensions of the tasks investigated in the laboratory experiments, such as drawing a line with the right foot and a circle with the left arm or using three limbs instead of two.

This is just one example of applying scientific findings and laboratory experiments to dance. To give another, when reaching for an object such as a spoon, you transform its location from a visual frame of reference (“next to the plate”) to a frame of reference relative to your body—roughly speaking, the distance between your hand and the spoon. What is true for objects is also true for parts of the body. If I extend my arm, I can refer to the position of my hand relative to my body but also think of it as being at a point in space. In itself, this is not a dramatic observation, but to return my hand to its previous position, I can either move it back to my body, keeping the body fixed in space, or move my body to my hand, keeping the hand fixed in space, or move both toward each other. The same idea can be applied to movements involving two limbs. I can scratch my arm by moving my finger along my arm or by moving my arm along my finger, a technique I have called “reversals.” In everyday life, this is what we do, for example, if a glass is filled to the top: Rather than bringing it to our mouth, we bend forward to sip from it.
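The two frames of reference, and the "reversals" technique built on them, can be made concrete with a small vector sketch (the coordinates are arbitrary illustrative values, not a biomechanical model):

```python
import numpy as np

def to_body_frame(point_world, body_world):
    """Express a world-frame ("visual") location relative to the body."""
    return point_world - body_world

body = np.array([0.0, 0.0])   # body position in the world frame
hand = np.array([0.6, 0.2])   # extended arm: hand position in the world frame

hand_rel = to_body_frame(hand, body)   # hand relative to the body

# "Reversal": the hand-body gap can be closed in two ways.
hand_moved = hand - hand_rel   # move the hand back to the body (body fixed)
body_moved = body + hand_rel   # move the body to the hand (hand fixed)
```

Both moves eliminate the same relative displacement; which one the dancer performs determines whether the body stays fixed in space or travels, which is what makes the reversal perceptually distinct.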

When improvising, dancers can use these ideas to create a variation on a choreographed movement sequence. As “recipes” or techniques for generating movements, they are not specific to my work, but can also be found in other dance forms, everyday movements, and martial arts—what else is the movement following a feint other than a sudden change of the leading movement? These techniques, too, do not tell the whole story. I always say that my work arises from the tension between this formal approach, based on my scientific research, and my own aesthetic ideas and preferences.

JUDGING THE RESULTS
Choreography is not just about inventing movements; it is about evaluating the perceptual and emotional effects of a particular movement or spatial configuration. Choreographers continue adjusting a certain movement or position until it matches the effect they had in mind (or until the day of the premiere). The same is true for dancers as they watch their movements in the studio mirror or on video or rehearse a movement until their bodies’ proprioceptive feedback informs them that the movement “feels right.” In doing so, both choreographers and dancers are guided by the brain mechanisms involved in perception, action, and emotion. This is why the British neuroscientist Semir Zeki says that, in a way, all artists are neuroscientists. They investigate the properties of the brain and reveal its capabilities.

The logic of the present argument may also explain why people’s experiences of a given dance performance differ. Having seen a fascinating performance, we expect to be as thrilled the next time we see a piece by the same choreographer. Indeed, this is why we want to return, to get another dose of that good feeling. Conversely, if we were bored, we are less inclined to go again. What we have seen has not only raised (or downgraded) our expectations of what we will feel, it has also entrained our perceptual expectations.

When creating a new work, choreographers can either try to replicate the perceptual and emotional effects of the previous work or explore a different direction. Both approaches have their pluses and minuses. If a choreographer continues along the lines of previous work, new audiences may still be thrilled and so may part of the old audience. Some people, however, may be disappointed: Their perceptual expectations were met, but their emotional expectations were not. This is why many sequels disappoint. But seeing the same piece again may in fact be more engaging, even though we are now familiar with it. We may attend to different aspects or get excited in anticipation, or maybe the performance was simply better at triggering our responses. As to our choreographer, should he decide to explore a different path, he may not only alienate part of the audience, he may also quite simply fail in that the piece does not live up to his own expectations. That is the risk of any experiment, whether artistic or scientific.

Of course, this is not the only reason why people differ in their aesthetic judgments. Appreciating something cognitively and enjoying it emotionally are not the same. Each person’s individual experience of a dance performance is the product not just of perceptual processes, but also of their interaction with memories, associations, and personal preferences. As I like to say, with a nod to my former career in finance: “Past performances are not a guide to future experiences.” Time may distort the memory of past performances. Experiences, tastes, and preferences may change over time. But then again, these memories and preferences are also laid down in the brain. Their content may change over time or differ among individuals, but the brain processes that connect them are the same.

Back in the laboratory, neuroscientists are probing the neural mechanisms of human motion perception and motor control, moving beyond studying simple finger movements to more complex stimuli and motor tasks. In effect, they are becoming choreographers of certain movement sequences. If choreographers and dancers performed in research laboratories and neuroscientists analyzed dance performances, we might learn more about both dance and the brain. But whether you are dancing yourself or watching a dance performance, even when your limbs do not move, your brain will dance.


Exploring How Music Works Its Wonders
Perhaps Levitin’s unique contribution to research on music lies in his drawing attention to the importance of the cerebellum in music listening. Conventional wisdom suggests that the principal role of the cerebellum relates to coordinating motor movement. But research by Levitin is consistent with studies by Jeremy D. Schmahmann, M.D., and Janet Sherman of Harvard Medical School that point to a much broader role for the cerebellum, including tracking the beat and distinguishing familiar from unfamiliar music. Perhaps most surprising is that the cerebellum plays a role in musically evoked emotions and in the formation and expression of musical taste. These discoveries are consistent with anatomical studies from the 1970s that found direct neural connections between the cerebellum and the hearing organ within the cochlea, which converts sound vibrations into nerve impulses.

One of the foremost successes in the science of music has been our understanding of how the physiological properties of the hearing organ influence the perception of dissonance.

An ostinato that recurs throughout the book and comes to the fore in the final chapter is the relationship between music and evolutionary psychology. Archeological evidence shows that music is very old—much older than a Johnny-come-lately like agriculture. Musical instruments have even been discovered in Neanderthal burial sites, suggesting that music-making may have been characteristic of the entire genus Homo. All known cultures (past and present) have engaged in recognizably musical activities, which raises what might be called the $64,000 question in the science of music: Is music a spandrel—a non-adaptive artifact of brain organization as suggested by Harvard University psychologist Steven Pinker, Ph.D.? Or could music be an evolutionary adaptation in its own right? Levitin, clearly parting company with Pinker, recounts some of the published evidence for music’s evolutionary role.

Also missing is the pioneering work of psychologist Neil Todd, Ph.D., of Manchester University, on the perception of rhythm and musical “motion.” The cochlea of the inner ear contains not only the hearing organ but also the vestibular system, and Todd’s work relating the sense of rhythm and motion to auditory-induced activation of the vestibular system is as fascinating as it is unorthodox. A neuroanatomist might well expect that the connections between the cochlea and the cerebellum exist to coordinate movement with vestibular sensations. Given Levitin’s interest in the musical role of the cerebellum, a summary of Todd’s work would have made a pertinent (and interesting) addition to the book.


Music Training Linked to Better Understanding of Speech

French author Victor Hugo once wrote, “Music expresses that which cannot be put into words and cannot remain silent.” A new study suggests that musical skills can also help people understand spoken words buried in a noisy cacophony. This ability may help explain why music training seems to help some people with other forms of learning and could eventually lead to new therapies for children with autism and older people with hearing difficulty.

“The brain is set up with a lot of overlap for language and music,” says Laurel Trainor, a researcher who studies how infants acquire both language and music at McMaster University in Ontario, Canada. “Many of the mechanisms used to process both are the same. And all of those rely on executive functioning—or memory, attention and the ability to inhibit [distraction].”

So might musical training help enhance executive function? Nina Kraus, the head of Northwestern University’s Audio Neuroscience Lab, decided to test just that.

“We reasoned that the nervous system works in economical and pervasive ways when it comes to speech and music,” Kraus says. “A basic musical skill is picking out a relevant signal from a number of other sounds—so we hypothesized that musicians may be better at hearing speech in background noise because of their training.”

Kraus and colleagues Alexandra Parbery-Clark, Carrie Lam and Erika Skoe evaluated participants as they listened to and then repeated back sentences presented in varying amounts of background noise. Those who had musical training, defined as ten or more years of musical study, were much better able to repeat the sentences than those without it. Kraus says the finding supports the argument that musical training may harness areas of the brain that improve executive functioning.

“There is a system of pathways that stretches anatomically from the cerebral cortex down through the brainstem all the way to the ear called the corticofugal network,” she says. “So we have this scaffolding, a true network in place that provides a top-down influence that tunes our sensory system to pick out meaningful sounds from ones that aren’t so important.”

The study, published in the Sept. 3 issue of Ear and Hearing, could influence future therapies for both older people who can have difficulty differentiating speech in noise and children with disabilities like autism or attention deficit/hyperactivity disorder (ADHD).

Music therapy has been linked anecdotally to improved attention and social interaction in autistic children, though researchers are unsure why. For Catherine Lord, director of the University of Michigan Autism and Communication Disorders Center, Kraus’s findings offer the beginnings of an explanation.

“People with autism often talk about getting overwhelmed by sound and say it’s hard to pull out what people are saying to them if it’s noisy or a bus happens to be driving by,” Lord says. “And if it’s possible to use music to help differentiate out the right sounds, that’s something that could be of great help.”

But Kraus, Lord and Trainor say that there is still a lot of work to be done before they can make any firm conclusions about the role of musical training as a therapeutic technique.

“There’s so much potential in what music may be able to do in special and elderly populations that’s, to date, been pretty much unexplored,” Trainor says. “And it’s time to start using good science and start exploring that.”



How Music Helps to Heal the Injured Brain
Therapeutic Use Crescendos Thanks to Advances in Brain Science

The role of music in therapy has gone through some dramatic shifts in the past 15 years, driven by new insights from research into music and brain function. These shifts have not been reflected in public awareness, though, or even among some professionals.

Biomedical researchers have found that music is a highly structured auditory language involving complex perception, cognition, and motor control in the brain, and thus it can effectively be used to retrain and reeducate the injured brain. While the first data showing these results were met with great skepticism and even resistance, over time the consistent accumulation of scientific and clinical research evidence has diminished the doubts. Therapists and physicians use music now in rehabilitation in ways that are not only backed up by clinical research findings but also supported by an understanding of some of the mechanisms of music and brain function.

Rapid developments in music research have been introduced into neurologic therapy (see sidebar) over the past 10 years. Perhaps because of this fast pace, the traditional public perception of music as a ‘soft’ addition, a beautiful luxury that cannot really help heal the brain, has not caught up with these scientific developments.

But music can. Evidence-based models of music in therapy have moved from soft science—or no science—to hard science. Neurologic music therapy does meet the standards of evidence-based medicine, and it should be included in standard rehabilitation care.

Where We Started

While the notion that music has healing powers over mind and body has ancient origins, its formal use as therapy emerged in the middle of the 20th century. At that time, music therapists thought of their work as rooted in social science: The art had value as therapy because it performed a variety of social and emotional roles in a society’s culture. In this early therapy, music was used, as it had been through the ages, to foster emotional expression and support; help build personal relationships; create and facilitate positive group behaviors; symbolically represent beliefs and ideas; and support other forms of learning. In the clinic, patients listened to music or played it together with the therapists or other patients to build relationships, promote well-being, express feelings, and interact socially.

Because early music therapy was built upon these laudable and important but therapeutically narrow concepts, many in health care, including insurers, viewed it as merely an accessory to good therapy. For decades it was difficult to collect scientific evidence that music therapy was working because no one knew what the direct effects of music on the brain were. Now, however, the approaches that are central to brain rehabilitation focus on disease-specific therapeutic effects, demonstrated by rigorous research.

Neuroscience Steps Up

During the past two decades, new brain imaging and electrical recording techniques have combined to reshape our view of music in therapy and education. These techniques (functional magnetic resonance imaging, positron-emission tomography, electroencephalography, and magnetoencephalography) allowed us for the first time to watch the living human brain while people were performing complex cognitive and motor tasks. Now it was possible to conduct brain studies of perception and cognition in the arts.1

From the beginning of imaging research, music was part of the investigation. Scientists used it as a model to study how the brain processes verbal versus nonverbal communication, how it processes complex time information, and how a musician’s brain enables the advanced and complicated motor skills necessary to perform a musical work.

After years of such research, two findings stand out as particularly important for using music in rehabilitation. First, the brain areas activated by music are not unique to music; the networks that process music also process other functions. Second, music learning changes the brain.

The brain areas involved in music are also active in processing language, auditory perception, attention, memory, executive control, and motor control. Music efficiently accesses and activates these systems and can drive complex patterns of interaction among them. For example, the same area near the front of the brain is activated whether a person is processing a problem in the syntax of a sentence or in a musical piece, such as a wrong note in a melody. This region, called Broca’s area after the 19th-century French neurologist who described it, is also important in processing the sequencing of physical movement and in tracking musical rhythms, and it is critical for converting thought into spoken words. Scientists speculate, therefore, that Broca’s area supports the appropriate timing, sequencing, and knowledge of rules that are common and essential to music, speech, and movement.

A key example of the second finding, that music learning changes the brain, is research clearly showing that through such learning, auditory and motor areas in the brain grow larger and interact more efficiently. After novice pianists have just a few weeks of training, for example, the areas in their brain serving hand control become larger and more connected. It quickly became clear that music can drive plasticity in the human brain, shaping it through training and learning.


Researchers in the field of neurologic rehabilitation have described parallel results. They found that the brain changes in structure and function as a result of learning, training, and environmental influences. Exposure and experience will create new and more efficient connections between neurons in the brain in a sort of “rewiring” process.

This discovery fundamentally changed how therapists developed new interventions. Passive stimulation and facilitation were no longer considered effective; active learning and training promised to be the best strategy to help rewire the injured brain and recover as much ability as possible. Further clinical research has strongly confirmed this approach.

By combining these developments—brain imaging, insight into plasticity, and finding that musical and non-musical functions share systems—therapists finally could build a powerful, testable hypothesis for using music in rehabilitation: Music can drive general reeducation of cognitive, motor, and speech and language functions via shared brain systems and plasticity. Once used only as a supplementary stimulation to facilitate treatment, music could now be investigated as a potential element of active learning and training.

First Steps with Movement

To explore this hypothesis, in the early 1990s we began to extract and study shared mechanisms between musical and non-musical functions in motor control. One of the most important shared mechanisms is rhythm and timing.

Timing is key to proficient motor learning and skilled motor activities; without it, a person cannot execute movement appropriately and skillfully. Rhythm and timing are also central elements in music. Rhythmic timing adds an anticipatory component to movement timing, and rhythm, the harness that holds together all elements of musical sound architecture, is also important in learning the motor control needed to play music.

We hypothesized that by using musical rhythms as timing signals we might improve a person’s motor control during non-musical movement. To test this idea, we used rhythmic auditory cues to give people an external “sensory timer” with which they could try to synchronize their walking.
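The idea of an external "sensory timer" can be sketched as a simple phase-correction model in which each step interval is nudged toward the metronome's cue interval. The correction gain is an illustrative assumption, not a clinical parameter:

```python
def entrain(step_interval, cue_interval, gain=0.3, n_steps=20):
    """Nudge a walker's step interval toward an external rhythmic cue.

    Each step, a fraction (`gain`) of the current timing error is
    corrected, so the step interval converges toward the cue interval.
    """
    history = []
    for _ in range(n_steps):
        step_interval += gain * (cue_interval - step_interval)
        history.append(step_interval)
    return history

# A slow, impaired gait (1.4 s per step) entraining to a 1.0 s metronome:
trace = entrain(1.4, 1.0)
```

Under this toy model the timing error shrinks geometrically (by a factor of 1 − gain per step), which is one simple way to picture how a steady auditory cue could stabilize an irregular gait.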

When we tried it with patients with stroke or Parkinson’s disease, their improvements in certain areas were instantaneous and stunning. By following the rhythmic cues, patients recovering from stroke were able to walk faster and with better control over the affected side of their bodies. Some of the more complex measures of movement control, such as neuromuscular activation, limb coordination, angle extensions, and trajectories of the joints and centers of body mass, also became significantly more consistent, smoother, and flexible. For those with Parkinson’s disease, it was interesting to see that music and rhythm could quicken their movements and also serve as an auditory trigger to keep the movements going and prevent “freezing” (the sudden halt of all movement), which occurs frequently in Parkinson’s patients.

These improvements held up over long-term training and also proved superior to other standard physical therapy interventions.8 We then applied the same concepts to arm therapy, with similar success. Since then, other studies have confirmed and extended our research. The therapy created from it, rhythmic auditory stimulation, is now considered part of the state-of-the-art repertoire in motor therapies.

Our results added weight to the idea that music could shape movements in therapy by accessing shared elements of musical and non-musical motor control (rhythm, timing) and thus powerfully enhance relearning and retraining in a clinical environment. In a recent study that utilized brain imaging in patients with stroke, arm training with auditory rhythm triggered brain plasticity, as predicted. Additional areas in the sensorimotor cortex and the cerebellum were activated by the training. In comparison, standard physical therapy did not result in any evidence of new changes in brain activations.

Reaching for Speech and Cognition

Clinical research studies in the past 10 years have extended the use of music from motor therapy to the rehabilitation of speech, language, and cognitive functions. Scientists wondered if they could design therapeutic music exercises that would affect general cognition and speech and language functions via plasticity in shared brain systems the way they had for motor therapies.

It wasn’t as clear from the outset, though, exactly what advantage music would show over other methods of retraining impaired cognition or language functions. It was easier to see that music has advantages over other types of therapies for motor control because of its rhythmic patterns that drive priming and timing of the motor system, and the rich connectivity between the neurons in the auditory system and those in the motor system. One can picture the auditory neurons responding to rhythmic stimuli and firing in patterns that spread via connecting nerve fibers into motor neurons, activating them in synchronicity. How music could facilitate cognition and language training was initially less obvious.

Two insights from research help to bridge this gap. The first extends the idea that the brain systems underlying music are shared with other functions. Evidence suggests that music may activate these systems differently than speech or other stimuli do and might enhance the way the systems work together. For example, music tends to activate brain structures either bilaterally—in both hemispheres simultaneously—or in the right hemisphere more than the left. For injuries on one side of the brain, music may create more flexible neural resources to train or relearn functions. Aphasia rehabilitation is a good example. Singing—which relies mainly on right-hemisphere brain systems—can bypass injured left-hemisphere speech centers to help people produce speech.11 We have shown in a memory study that learning word lists in a song activates temporal and frontal brain areas on both sides of the brain, while spoken-word learning activates only areas in the left hemisphere.12 Music also can activate the attention network on both sides of the brain, which can help overcome attention problems caused by stroke or traumatic brain injury.

The second helpful insight was the development of the auditory scaffolding hypothesis.14 This model proposes that the brain assigns nearly everything that deals with temporal processing, timing, and sequencing to the auditory system. This process works because sound is inherently a temporal signal, and the auditory system is specialized and highly sensitive for perceiving time information. For example, short-term auditory verbal memory (for spoken words) is better than short-term visual memory (for written words). Similarly, people can track and remember auditory tone sequences better than visual or tactile ones. People who are deaf also often have trouble developing non-auditory temporal skills. Cognitive abilities such as language, learning and remembering, attention, reasoning, and problem-solving require complex temporal organization. Experiences with sound may help bootstrap—or provide a kind of scaffolding for—developing or retraining such abilities. As music may be the most complex temporal auditory language, it may offer superior auditory scaffolding for cognitive learning.

Using these two insights, researchers could make a case for trying music as therapy in speech, language, and cognitive rehabilitation. Evidence from the research that ensued supports the clinical effectiveness of music and has identified the brain processes that underlie these effects.

For example, various studies have shown that therapeutic music exercises can help improve verbal output for people with aphasia, strengthen respiratory and vocal systems, stimulate language development in children, and increase fluency and articulation. Music therapy can retrain auditory perception, attention, memory, and executive control (including reasoning, problem-solving, and decision-making).15

Next Frontier: Mood?

The extended shared brain system theory and the auditory scaffolding theory provided a new theoretical foundation for the therapeutic use of music in motor, speech and language, and cognitive rehabilitation. In the future, new theories may help us understand the other effects of music, and point the way to new types of rehabilitation.

For example, how can we harness the ability of music to evoke and induce mood and emotion to help retrain the injured or depressive brain? We know that the capacity for memory improves when people are in a positive mood. We also know that rational reasoning in executive control requires integrating and evaluating both logic and emotion. In this context, one question is whether emotions evoked by music can contribute to executive control training in rehabilitation, and if so, how. The problem is that we still do not know the exact nature of these emotional responses and whether they relate to those that we experience in everyday life. If we find answers to questions like these, we might someday use music to retrain emotional and psychosocial competence—not in the traditional music therapy sense of improving well-being, but rather as a functional goal in cognitive ability.

Biomedical research in music has come a long way to open new and effective doors for music to help reeducate the injured brain. Of course, much still needs to be done: More professionals need specialized training, and other possibilities for rehabilitation require further research and clinical development. Scientists need to better understand what dosages work best, to pay more attention to research that will benefit children, and to focus on disorders in which neurologic music therapy lacks rigorous study so far, such as autism, spinal cord injury, cerebral palsy, and multiple sclerosis. In addition, the effects of brain injury can be complex, and researchers must take individual factors into account and adapt to individual needs. Neurologic music therapists share those aims with practitioners in other rehabilitation disciplines.

What no longer requires confirmation is the premise that music in therapy works, in principle and in practice. It is a fact: Music shows promise for helping to heal the brain. Research has identified specific areas in which music is an effective therapeutic approach. Neurologic music therapy now meets the standards of evidence-based medicine, is recognized by the World Federation of Neurorehabilitation, and should be a tool for standard rehabilitation care. Insurance companies must become familiar with the research evidence and reimburse patients who have conditions for which the evidence supports its effectiveness.

Neurologic music therapy is a specialized practice, but it is based on elements and principles of music and brain function that can be integrated by all rehabilitation professions. In this way, it offers a strong foundation for interdisciplinary teamwork that will benefit patients.

- - -

And finally, this interview on "Music as the Brain’s Universal Language" can be read directly by following the LINK
 
RE :cuckoo:
Keeping on, this time with nerves and the nervous system.

Often a drawing is better than words, and in this case it is even better than a song (or it depends on the song):
http://www.scholarpedia.org/w/images/e/e7/Skaggs_Nervous_system_NSdiagram.png

Nervous system
At the most basic level, the function of the nervous system is to control movement of the organism and to affect the environment (e.g., through pheromones). This is achieved by sending signals from one cell to others, or from one part of the body to others. The output from the nervous system derives from signals that travel to muscle cells, causing muscles to be activated, and from signals that travel to endocrine cells, causing hormones to be released into the bloodstream or other internal fluids. The input to the nervous system derives from sensory cells of widely varying types, which transmute physical modalities such as light and sound into neural activity. Internally, the nervous system contains complex webs of connections between nerve cells that allow it to generate patterns of activity that depend only partly on sensory input. The nervous system is also capable of storing information over time, by dynamically modifying the strength of connections between neurons, as well as other mechanisms.

Function
The ultimate function of the nervous system is to control the body, especially its movement in the environment. It does this by extracting information from the environment using sensory receptors, sending signals that encode this information into the central nervous system, processing the information to determine an appropriate response, and sending output signals to muscles or glands to activate the response. The evolution of a complex nervous system has made it possible for various animal species to have advanced perceptual capabilities such as vision, complex social interactions, rapid coordination of organ systems, and integrated processing of concurrent signals. In humans, the sophistication of the nervous system makes it possible to have language, abstract representation of concepts, transmission of culture, and many other features of human society that would not exist without the human brain.

At the most basic level, the nervous system sends signals from one cell to others, or from one part of the body to others. There are multiple ways that a cell can send signals to other cells. One is by releasing chemicals called hormones into the internal circulation, so that they can diffuse to distant sites. In contrast to this "broadcast" mode of signaling, the nervous system provides "point-to-point" signals — neurons project their axons to specific target areas and make synaptic connections with specific target cells. Thus, neural signaling is capable of a much higher level of specificity than hormonal signaling. It is also much faster: the fastest nerve signals travel at speeds that exceed 100 meters per second.

You can read the rest HERE and also THERE and THERE


Nerve

Physiology
A nerve conveys information in the form of electrochemical impulses (nerve impulses known as action potentials) carried by the individual neurons that make up the nerve. These impulses are extremely fast, with some myelinated neurons conducting at speeds up to 120 m/s. The impulses travel from one neuron to another by crossing a synapse, where the message is converted from electrical to chemical and then back to electrical.
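To put the 120 m/s figure quoted above in perspective, here is a back-of-the-envelope sketch of how long an impulse takes to travel a long axon. The 1 m axon length and the ~1 m/s speed for slow unmyelinated fibers are illustrative textbook figures I am assuming for the example, not values from the quoted article.

```python
# Rough conduction-time comparison for a long axon.
# Speeds are assumed illustrative values: ~120 m/s for fast myelinated
# fibers (as quoted above), ~1 m/s for slow unmyelinated fibers.

def conduction_time_ms(axon_length_m: float, speed_m_per_s: float) -> float:
    """Return signal travel time along an axon, in milliseconds."""
    return axon_length_m / speed_m_per_s * 1000.0

myelinated = conduction_time_ms(1.0, 120.0)   # ~8.3 ms
unmyelinated = conduction_time_ms(1.0, 1.0)   # 1000 ms

print(f"Myelinated (120 m/s):  {myelinated:.1f} ms")
print(f"Unmyelinated (~1 m/s): {unmyelinated:.1f} ms")
```

So myelination buys roughly a hundredfold speed-up over a meter-long axon, which is why damaged myelin so badly degrades nerve function.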

Nerves can be categorized into two groups based on function: An afferent nerve fiber conducts sensory information from a sensory neuron to the central nervous system, where the information is then processed. Bundles of fibres, or axons, in the peripheral nervous system are called nerves, and bundles of afferent fibers are known as sensory nerves. An efferent nerve fiber conducts signals from a motor neuron in the central nervous system to muscles. Bundles of these fibres are known as efferent nerves.
https://en.wikipedia.org/wiki/Nerve
 
Scientists discover a new line of communication between nervous system cells
In a host of neurological diseases, including multiple sclerosis (MS) and several neuropathies, the protective covering surrounding the nerves - an insulating material called myelin - is damaged.

Scientists at the Weizmann Institute of Science have now discovered an important new line of communication between nervous system cells that is crucial to the development of myelinated nerves - a discovery that may aid in restoring the normal function of the affected nerve fibers.

Nerve cells (neurons) have long, thin extensions called axons that can reach a meter or more in length. Often, these extensions are covered by myelin, which is formed by a group of specialized cells called glia. Glial cells wrap around the axon, laying down the myelin sheath in segments, leaving small nodes of exposed nerve in between. More than just protection for the delicate axons, the myelin covering allows nerve signals to jump instantaneously between nodes, making the transfer of these signals quick and efficient. When myelin is missing or damaged, the nerve signals can't skip properly down the axons, leading to abnormal function of the affected nerve and often to its degeneration.

In research published recently in Nature Neuroscience, Weizmann Institute scientists Prof. Elior Peles, graduate student Ivo Spiegel, and their colleagues in the Molecular Cell Biology Department and in the United States, have now provided a vital insight into the mechanism by which glial cells recognize and myelinate axons.

How do the glial cells and the axon coordinate this process? The Weizmann Institute team found a pair of proteins that pass messages from axons to glial cells. These proteins, called Necl1 and Necl4, belong to a larger family of cell adhesion molecules, so called because they sit on the outer membranes of cells and help them to stick together. Peles and his team discovered that even when removed from their cells, Necl1, normally found on the axon surface, and Necl4, which is found on the glial cell membrane, adhere tightly together. When these molecules are in their natural places, they not only create physical contact between axon and glial cell, but also serve to transfer signals to the cell interior, initiating changes needed to undertake myelination.

The scientists found that production of Necl4 in the glial cells rises when they come into close contact with an unmyelinated axon, and as the process of myelination begins. They observed that if Necl4 is absent in the glial cells, or if they blocked the attachment of Necl4 to Necl1, the axons that were contacted by glial cells did not myelinate. In the same time period, myelin wrapping was already well underway around most of the axons in the control group.

"What we've discovered is a completely new means of communication between these nervous system cells," says Peles. "The drugs now used to treat MS and other degenerative diseases in which myelin is affected can only slow the disease, but not stop or cure it. Today, we can't reverse the nerve damage caused by these disorders. But if we can understand the mechanisms that control the process of wrapping the axons by their protective sheath, we might be able to recreate that process in patients."

Well...I am going to stop here and put others links in the right threads.
 
Well, one last well... I am going to keep going a little more here.

Keep the following links in a corner of your mind:
Phosphorus and The Frequency of Light
https://cassiopaea.org/forum/index.php/topic,43133.msg704743.html#msg704743
Spontaneous Human Combustion
https://cassiopaea.org/forum/index.php/topic,894.msg153665.html#top

https://www.life-enthusiast.com/articles/phosphorus
Role of Phosphorus in your body!

Bone structure - 80-85% of phosphorus in the body is located in the bones and teeth
Energy production - (ATP - adenosine triphosphate and ADP - adenosine diphosphate)
Cell membranes - (as phospholipids)
Genetic reactions - in DNA (deoxyribonucleic acid) and RNA (ribonucleic acid)
Buffering agent - to maintain osmotic pressure

Functions of Phosphorus!


Digestive - regulates absorption of calcium and a variety of trace elements; phosphorus in excess has a laxative action
Nervous - source of adenosine triphosphate (ATP), component of the myelin sheath
Endocrine - interacts with vitamin D
Blood - red blood cell (RBC) metabolism
Muscular - adenosine triphosphate (ATP) needed for muscle contraction
Skeletal - component of bone and teeth
Immune - adenosine triphosphate (ATP) for leukocytes
Metabolic - energy production via phosphorylation reactions
Detoxification - in liver, via adenosine triphosphate (ATP)

Phosphorus Deficiency Symptoms

arthritis, fatigue, fragile bones, reproductive problems, tooth decay, stunted growth, muscle weakness

Phosphorus Excess Symptoms

anemia (iron deficiency), arthritis, zinc deficiency, diarrhea, hyperexcitability, tremors, irritability, calcium and magnesium deficiency.

Synergetic Nutrients

Absorption - sodium, potassium, low calcium diet, vitamin D, parathyroid hormone, high fat diet
Metabolic - calcium, magnesium, B-complex vitamin (in energy production)

Antagonistic Nutrients

Absorption- calcium, aluminum, iron, magnesium, vegetarian diets, vitamin D deficiency

Hair Analysis Notes!

High Hair Phosphorus

An elevated phosphorus level is frequently indicative of excessive protein breakdown of body tissues. As proteins break down, phosphorus is released. Phosphorus levels may increase temporarily as toxic metals are being eliminated in the course of a nutrition program. Very high phosphorus can indicate a serious metabolic disturbance.

Low Hair Phosphorus

A low phosphorus level is frequently associated with inadequate protein synthesis. Although most diets are adequate in phosphorus, those on low-protein diets or vegetarians may have a low phosphorus intake. Zinc is required for protein synthesis. Often a low phosphorus level is associated with a zinc deficiency, cadmium toxicity, or zinc loss. When these imbalances are corrected, the phosphorus level improves. A low phosphorus level may be due to poor digestion or assimilation of protein. This may be due to digestive enzyme deficiency, low hydrochloric acid level, or other factors.
Sources of Phosphorus!

Seafood - tuna, mackerel, pike, red snapper, salmon, sardines, whitefish, scallops, shad, smelt, anchovies, bass, bluefish, carp, caviar, eel, halibut, herring, trout
Nuts/seeds - pinon, pistachios, pumpkin, sesame, sunflower, walnuts, almonds, brazil nuts, cashews, filberts, hickory, peanuts, pecans.
Vegetables - chickpeas, garlic, lentils, popcorn, soybeans
Grains - wheat bran and germ, wild rice, buckwheat, millet, oats, oatmeal, brown rice, rice bran, rye, wheat
Miscellaneous - kelp, yeast, bone meal


https://www.springboard4health.com/notebook/min_phosphorus.html
Phosphorus - Mineral
Description
Phosphorus comprises approximately 0.8% to 1.1% of total body weight. Some 80-90% of this phosphorus is found within the skeleton and teeth, where it is compounded with calcium to form calcium phosphate. Calcium phosphate is a primary constituent of hydroxyapatite crystals in bone and teeth, and helps to add structural rigidity to the softer organic portions.
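As a quick illustration of the percentages quoted above (the 70 kg body weight is an assumption for the example, not a figure from the article):

```python
# Total body phosphorus from the quoted 0.8-1.1% of body weight,
# and the 80-90% share said to sit in the skeleton and teeth.
# The 70 kg body weight is an assumed example value.
body_weight_kg = 70.0

p_total_low = body_weight_kg * 0.008   # ~0.56 kg
p_total_high = body_weight_kg * 0.011  # ~0.77 kg

skeleton_low = p_total_low * 0.80
skeleton_high = p_total_high * 0.90

print(f"Total body phosphorus: {p_total_low:.2f}-{p_total_high:.2f} kg")
print(f"In skeleton and teeth: {skeleton_low:.2f}-{skeleton_high:.2f} kg")
```

So on those figures a 70 kg adult carries roughly 0.6-0.8 kg of phosphorus, most of it locked up in bone mineral.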

Phosphorus plays an important role in almost all cell metabolic activities. It is a major constituent of the molecule phosphate, which is instrumental in the storage of energy as adenosine triphosphate (ATP) molecules.

Phosphorus is an essential component of RNA and DNA, and is part of the molecular structure of phospholipids, the key components in the structure of cell membranes. The phosphate group is of primary importance in glycolysis. It is also present in many enzymes and proteins.

Phosphate is instrumental in maintaining the acid/base balance in the body by acting as a buffer. It also activates many of the B-complex vitamins, allowing them to function as coenzymes in various metabolic processes.

Method of Action

Phosphorus is most abundantly seen in the body as a constituent of the molecule phosphate, one of the bone salts which add structural rigidity to the softer protein matrix of bone and teeth.

Perhaps phosphorus’ most important metabolic role is as a constituent of the molecule phosphate. When this molecule links to an adenosine diphosphate (ADP) molecule, adenosine triphosphate (ATP) is formed, possessing a high-energy phosphate bond. When broken, this bond releases energy and the phosphate, reforming an ADP molecule. The ATP "energy" molecule is formed during glycolysis and other processes involving the release of chemical energy from food. ATP is used as the primary source of energy for many metabolic and enzymatic activities, especially muscle contraction, active transport, and the formation of DNA. Phosphate is an important constituent of RNA and DNA. It serves to link the individual bases with one another.

The energy released from the high energy phosphate bond of ATP is essential for the operation of the sodium/potassium pump, which exchanges three sodium ions for two potassium ions across a biological membrane. This pump is used to regulate relative amounts of sodium and potassium excreted and retained in the body.
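The 3-for-2 exchange described above can be made concrete with a tiny sketch. The stoichiometry (3 Na+ out, 2 K+ in per ATP hydrolyzed) is the standard textbook figure; the cycle count is just an arbitrary example.

```python
# Na+/K+ pump stoichiometry: per ATP hydrolyzed, 3 Na+ are pumped out
# and 2 K+ are pumped in, so each cycle exports one net positive charge
# (the pump is electrogenic). Cycle count is an arbitrary example.
na_out_per_cycle = 3
k_in_per_cycle = 2
net_charge_out_per_cycle = na_out_per_cycle - k_in_per_cycle  # +1

cycles = 100
print(f"After {cycles} cycles: {cycles * na_out_per_cycle} Na+ out, "
      f"{cycles * k_in_per_cycle} K+ in, "
      f"net {cycles * net_charge_out_per_cycle} positive charges exported")
```

That net export of positive charge is part of why the pump helps maintain the cell's resting membrane potential, on top of setting the Na+ and K+ gradients.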

Phosphate, from ATP, reacts with choline to initiate synthesis of phospholipids which are essential constituents of cell membranes. Phospholipids are instrumental in regulating cellular permeability and are found in the exterior membrane of nerve cells. They are also helpful in solubilizing relatively nonsoluble triglycerides and cholesterols.

ADP, which contains two phosphate groups, is a constituent of blood platelets and is secreted from platelet granules to stimulate platelet aggregation for blood clotting.

Phosphate also plays an important role, due to its effective buffering action, in maintaining acid/base balance in blood plasma.

Phosphorus absorption is about 50-70% efficient, as calcium, iron, and zinc tend to complex with phosphorus in the stomach, thus reducing absorption. Vitamin D tends to promote the absorption of both phosphorus and calcium from the intestine. Excretion through the urine regulates the body’s level of phosphorus.
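To see what the 50-70% efficiency quoted above means in practice, here is a small worked example. The 1000 mg daily intake is an assumed round number for illustration (roughly in the range of typical adult intakes), not a figure from the article.

```python
# Absorbed phosphorus from the 50-70% efficiency quoted above.
# The 1000 mg daily intake is an assumed example value.
intake_mg = 1000.0

absorbed_low = intake_mg * 0.50   # 500 mg
absorbed_high = intake_mg * 0.70  # ~700 mg

print(f"Of {intake_mg:.0f} mg dietary phosphorus, roughly "
      f"{absorbed_low:.0f}-{absorbed_high:.0f} mg would be absorbed.")
```

The rest passes out unabsorbed, which is why dietary context (calcium, iron, zinc, vitamin D status) matters as much as the raw intake number.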

Properties and Uses


Phosphorus supplementation is important in patients with low phosphate to calcium ratios for promotion of proper bone and tooth mineralization, especially in younger persons.

Phosphorus may also be beneficial in reversing, to some extent, osteomalacia, rickets, bone pain, and muscle weakness experienced by persons with hypophosphatemia.

Consequences of Deficiency

Phosphorus deficiency is extremely rare and occurs primarily in persons who use phosphate-binding antacids or who suffer abnormally excessive urinary losses. The principal symptoms include fatigue, anorexia, and demineralization of bone. Other possible symptoms include osteomalacia, convulsions, and abnormal or incomplete mineralization of developing teeth.

Phosphate deficiencies can be the result of defective renal phosphate absorption, as seen in familial vitamin D-resistant rickets, a genetically linked disorder which affects vitamin D utilization. Symptoms are characteristic of other forms of rickets.

Toxicity Levels

Phosphorus is not toxic in large amounts, although a disproportionately large amount of phosphorus relative to calcium can induce an increased fecal excretion of calcium, possibly resulting in a calcium deficiency.

In patients with renal insufficiencies, there may be a decreased urinary excretion of phosphorus, resulting in high blood phosphate levels. This condition may result in skeletal demineralization and mineral resorption. This condition can be controlled by ingestion of aluminum hydroxide or calcium carbonate, both of which reduce phosphate absorption.


https://viggor.com/rosenWellness/the-three-minerals-you-need-to-balance-your-nervous-system-the-importance-of-phosphorus-potassium-and-calcium
The Three Minerals You Need to Balance Your Nervous System – The Importance of Phosphorus, Potassium, and Calcium

The nervous system has two main components: the central nervous system (CNS) and the peripheral nervous system (PNS). The central nervous system consists of the brain and the spinal cord. The peripheral nervous system is what connects the rest of the body to the central nervous system. There are three types of peripheral nerves: autonomic (involuntary nerves), somatic (voluntary nerves), and sensory nerves.

For our purposes here we will focus on the autonomic nervous system (ANS) - sometimes thought of as the “automatic nervous system.” That is because what it controls are for the most part involuntary activities. It conducts nerve impulses from the central nervous system to cardiac muscle, smooth muscle, and glandular epithelial tissue. To put it in plain English – it tells our heart to beat, our digestive system to move food along, the endocrine glands to produce hormones, and us to breathe.

The autonomic nervous system has two components: the sympathetic nervous system (SNS) and the parasympathetic nervous system (PSNS). During our normal daily affairs we will switch back and forth between the two of them, but it is impossible for them both to be going at the same time.

The sympathetic nervous system serves as the emergency or stress system. Think of the “fight or flight” response. See a tiger, need to run. The parasympathetic nervous system controls our normal, everyday conditions. In an analogy to a car, the sympathetic nervous system is often described as the accelerator, while the parasympathetic system is the brakes. Sometimes after long periods of stress the autonomic nervous system is not performing correctly and we will accelerate when we should brake and brake when we should accelerate.

While our body was designed to handle stress, it was not designed to handle the constant stress that many people experience. Often people do not recognize their own stress level as they erroneously believe they are handling the stress, or it is how they always feel and do not notice a difference. Yet, their body is under constant stress from both conscious and unintended lifestyle and diet choices.

Both our sympathetic and parasympathetic nervous systems require specific nutrients. The accelerator function of the sympathetic nervous system is controlled by Phosphorus. Phosphorus is energizing – like fire. In fact, it burns immediately when exposed to air! Fortunately our Phosphorus is inside our body!

We are bombarded with messages about the importance of calcium for our bones. Well, without phosphorus all the calcium in the world will not do you much good. In fact, it may cause harm if there is too much calcium and not sufficient phosphorus.

Phosphorus is the second key mineral by content in our bones. It supports healthy bone formation, energy production, cell growth and repair (remember blood cells are made in our bones), collagen synthesis (that's what helps make the bone), cardiovascular function, and nerve and muscle activity. It is a key part of calcium and sugar metabolism.

Food sources of phosphorus include almonds, brewer's yeast, eggs, fish (halibut, salmon), glandular meats, lean beef, lentils, liver, milk, peanuts, poultry, pumpkin seeds, wheat bran, and yogurt.

What about potassium? Potassium strengthens the parasympathetic nervous system. It acts like a governor and helps calm the nervous system. It is critical for the ongoing health of every cell in our body. That’s a pretty important job! Along with its partner sodium, the two minerals balance the nutrient and waste exchange of each cell. Potassium is involved in nerve and muscle functioning where it again teams with sodium. It also maintains our body’s fluid balance, electrolyte balance, and pH balance.

Foods containing potassium include: almonds, artichokes, avocado, bananas, beet greens, broccoli, Brussels sprouts, kale, lentils, lima beans, oranges, papaya, pinto beans, prunes, raisins, spinach, sunflower seeds, Swiss chard, tomatoes, wheat germ, winter squash, and yams.

Calcium is one of the most talked about minerals and for good reason. It supports strong bone structure, teeth, and muscle tissue, aids in blood clotting function, supports cardiovascular and nerve functions, and helps in normal functioning of many enzymes. Calcium works in conjunction with Phosphorus and Potassium to balance these important systems.

The best sources of calcium are of course from food. It is also a misconception that this has to come from milk. Leafy green vegetables are a great source of calcium. For calcium choose: bone meal, cheese (best are Cheddar, mozzarella, and Swiss), collard greens, flaxseed, liver, milk, molasses, mustard greens, sesame seeds, spinach, turnip greens, wheat germ and yogurt.

http://www.drlwilson.com/ARTICLES/PHOSPHORUS.htm
PHOSPHORUS {it may be clearer to read the article directly at the link}

Phosphorus is a fascinating mineral, and a very important one in nutritional balancing science. It is called a macromineral because our bodies contain a lot of it. Other macrominerals are calcium, magnesium, sodium, potassium and sulfur.

Phosphorus may be called the excitatory or high-energy mineral because it is so involved in the human fuel system. This is a complex system by which we convert food and nutrients into a form that the cells use to produce their energy. Phosphorus is very fiery. It is the only non-radioactive element that is not stable when in an atomic or singular form.

Some readers may recall a high school science experiment in which the teacher gently lifted a piece of pure phosphorus out of a jar filled with water. In less than a minute, it would burst into flame spontaneously. This is how unstable and fiery phosphorus is. In part because of this, phosphorus is a very essential mineral in the human body. It is also one of the most anabolic of all the minerals. This means it is needed to build up new body tissue.

The hair or soft tissue phosphorus level is also a most important reading on a hair tissue mineral analysis. Here it tells us about a person’s vitality, energy level, tissue regeneration, and when it is high, it may indicate a special form of mental development or a celebration pattern. Let us examine phosphorus in more detail.

SOURCES OF PHOSPHORUS
Phosphorus is found in all protein foods, along with nitrogen and often sulfur. These three elements are the hallmarks, as it were, of amino acids and proteins that play such a critical role in human and animal health. Plants such as vegetables are more built out of carbohydrates and sugars, while animal bodies are more built from proteins. This is very important to recall at all times, and one reason why low protein diets can be problematic.

The human being needs concentrated proteins every day, and those who follow very low protein diets are very foolish. Some people don’t want to bother with protein as it is more costly, and often must be prepared or cooked, compared to starches and sugars like breads, rice, pasta and fruits. This is always a mistake, in my experience of over 30 years.

Foods rich in high-quality phosphorus include meats, eggs, cheese and milk. Other sources that are not as high quality, in most cases, include nuts, seeds, and beans.

Harmful phosphorus compounds. Not all phosphorus compounds are healthful. For example, many raw grains are high in phytates, which are harmful phosphorus compounds. Cooking, fermenting, and special methods of food preparation, such as adding yeast to bread dough and adding lime to corn, were developed to reduce the high phytate content of grain foods.

This is an important reason NOT TO EAT RAW GRAINS, EVER. These include granola and even just soaked grains, which are popular today. Cooking or fermenting the grains gets rid of the phytates. However, cooking is best, in our experience because ferments have a number of problems that are explained in a separate article called Fermented Foods. Also read Raw Foods for more on why they do not work well today, except for raw dairy products which are excellent foods.

Phosphoric acid. Soda pop often contains phosphoric acid. This is used to cut the sweetness of the drink and to add a tangy flavor. It is also a stimulant, along with the sugar and caffeine in many soft drinks, and this is why these drinks are popular among exhausted people.

However, phosphoric acid is another harmful type of phosphorus compound that is very acidic and can erode the teeth, damage the stomach and intestines, helps destroy the bones, and should always be avoided. Phosphoric acid in Coca Cola, Pepsi and other soft drinks is the reason why these abominations can be used to clean the terminals on your car battery, but should never be drunk if you care for your body at all. Even dogs and cats know to avoid these products that should never be allowed to be sold.

FUNCTIONS OF PHOSPHORUS
1. Bone health. About 85% of your body’s phosphorus is in the bone structure, where it interacts with calcium to form the hard part of the bones. This, of course, is a critical body function. The right kind of phosphorus is needed, and the wrong kinds found in raw grains and soda pop, mainly, will tend to destroy the bones faster or cause them to grow in a deformed way.

2. Energy production. This is probably its most important role. ATP or adenosine triphosphate is the high-energy molecule that is used as the “refined fuel” for every purpose in the body. It is somewhat like refined gasoline in today’s society.

3. Growth and development. Phosphates are extremely important for growth of the body. For example, mother’s milk is rather low in phosphorus compared to cow’s milk. Cows, of course, grow much faster and larger than human beings. This is one of the problems with drinking cow’s milk. Milk from smaller animals such as goats and sheep tends to be more like human milk in its phosphorus content. Most of the phosphorus from dairy products ends up in the bones to create a strong and healthy body. Pasteurizing milk damages some of the calcium and phosphorus compounds it contains and causes severe digestive problems for many people, especially those of African and Asian descent. They may not even tolerate raw cow’s milk very well due to its high lactose content, but this is far better than pasteurized milk. Homogenizing milk may also damage some phosphorus compounds and should never be done. Raw, unpasteurized, and unhomogenized milk is extremely safe when produced in a healthy way, which is easy to do.

4. The nervous system. The human nervous system is extremely dependent on phosphorus compounds, especially those found in meats and eggs. For example, phospholipids are needed to form the myelin sheath on the nerves. This is like the insulation on wires. If it is not strong, the brain literally short circuits, like two bare wires touching each other. This can cause seizures, multiple sclerosis and dozens of other problems.

Also, the brain uses so much energy (at least one-third of all your energy) that high-energy phosphorus compounds are critical for thinking and higher brain development of a human being. This is one of the reasons vegetarians are prone to fatigue, anxiety and depression much more than meat eaters. Meat is far higher in bioavailable phosphorus compounds than vegetarian proteins like nuts, seeds and beans.

5. Cell membranes. Phospholipids are also needed to maintain the integrity of our cell membranes. This may not seem important, but it is a critical body function. The cell membranes keep the right nutrients inside the cells and keep the bad ones out of the cells. Omega-3 fatty acids, along with others, are incorporated into phosphorus compounds to form cell membrane structures needed for the transfer of nutrients into the cells and to move waste products out of the cells.

6. All protein synthesis. Phosphorus is involved in DNA and RNA metabolism. RNA, in turn, is needed to make all body proteins, enzymes, hormones and trillions of other chemicals in our bodies. We must have enough phosphorus or the process stalls and health declines. Too much mercury or aluminum in the body definitely interferes with the process of protein biosynthesis. This may show up on a hair mineral analysis as a phosphorus level less than about 13 mg%.

7. Buffering the pH of the blood. Phosphorus compounds perform many interesting functions, among which is buffering acids in the body to maintain a steady pH or acid-base balance. While phosphorus is considered an acid-forming mineral, this is only partially true. Phosphorus in certain forms has a neutralizing effect upon lactic acid and other acids that can build up in the body. So phosphorus can be both acid-forming and alkaline-forming in our bodies depending on what it is used for.

8. Maintaining the osmotic balance of the body fluids. This is another very interesting use of phosphorus in our bodies. All the fluids in our bodies, such as the lymph, the blood and the fluid inside all of our cells, must be maintained in balance. Otherwise, pressures would build up and damage our cell walls, our blood vessels and the other pipes or conduits through which the fluids flow. Some phosphorus compounds help keep all the fluids in balance by moving small amounts of fluid back and forth between the various body compartments, such as the cells, the blood stream and the lymph.

PHOSPHORUS ASSESSMENT USING HAIR MINERAL ANALYSIS
The hair phosphorus level is a critical indicator on a properly performed hair mineral test. Dr. Paul Eck found that the hair phosphorus level mainly has to do with protein biosynthesis. Without sufficient protein synthesis, healing is definitely impaired. Thus, correcting the phosphorus level on a hair analysis is of primary importance.

An ideal hair phosphorus level is about 16 mg% or about 160 parts per million. The hair must not be washed at the laboratory for accurate hair readings.
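The equivalence stated above (16 mg% = 160 parts per million) follows from the units: mg% is milligrams per 100 grams, while ppm here means milligrams per kilogram, so the conversion is simply a factor of 10. A minimal sketch (the function name is mine):

```python
def mg_percent_to_ppm(mg_percent: float) -> float:
    """Convert mg% (mg per 100 g) to parts per million (mg per kg)."""
    return mg_percent * 10.0

# The ideal hair phosphorus level from the text:
print(mg_percent_to_ppm(16))  # 160.0
```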

False phosphorus readings with pubic hair. Pubic hair samples often produce very high phosphorus readings – up to 50 mg%! This is just one reason why the use of pubic hair for hair mineral analysis is not as reliable as head or body hair. Hair testing laboratories often allow practitioners to submit pubic hair samples, but I do not allow clients to do this unless there are no alternatives. (Fingernails or toenails are more reliable indicators of the soft tissue phosphorus level.)

A speed indicator. Phosphorus on a hair mineral test may indicate the speed of protein synthesis. This is an important sign, at times.

A catabolic indicator. When the hair phosphorus level is less than 15 mg%, a person is not synthesizing protein fast enough. This reflects a more catabolic state. This means one is tearing the body down faster than it is being built up. This is a simple, useful indicator for assessing a person’s health status and perhaps understanding why a person is not improving very fast.

An important vitality indicator. I also use phosphorus as a general vitality indicator, although Dr. Eck did not mention this too often. The ideal is about 15 to 16 mg%. When the level is lower, it indicates impaired vitality.

When the hair phosphorus is less than 12 mg%, vitality is definitely somewhat lower. A level less than 10 mg% is even more extreme. We see low levels especially in those who do not eat enough protein, those with impaired digestion or yeast infections, vegetarians, those under extreme stress, and in a few other cases.

A yang indicator. Phosphorus is a fairly yang mineral. When it is low, it often indicates a more yin condition of the body.

An important “male” mineral. Another way to analyze phosphorus metabolism is by understanding that phosphorus is a “male” mineral because of its fiery nature. Trauma or anything that makes the body yin, including too many “female” toxic metals such as mercury and copper, can interfere with its functioning.

Lifestyle and phosphorus. Those with low hair phosphorus can be high strung. Learning to relax helps them normalize their hair phosphorus level.

Impaired digestion and low phosphorus. Low hair phosphorus may indicate impaired digestion of protein if a person is eating sufficient high-quality protein. Many people do not digest protein very well. The reasons can be low digestive enzymes, poor food combinations, improper eating habits such as eating on the run, yeast infections, or other intestinal infections or improper gut flora. To correct this, everyone with a low phosphorus level needs to take a powerful digestive aid such as ox bile and pancreatin. We use one called GB-3. Another brand that is okay is called Bilex.

Copper toxicity and low phosphorus. Often, a low phosphorus on a hair test indicates hidden or overt copper toxicity. Copper imbalance is associated with yeast overgrowth in the intestines, low digestive enzyme production due to low zinc, improper gut flora and intestinal infections, and even protein destruction due to the effect of copper on other nutrients such as zinc, vitamin C, manganese and others.

An indicator for other toxic metals. A low hair phosphorus may be an indicator for the presence of aluminum, mercury, and perhaps nickel or lead, as well.

TMG may help raise a low hair phosphorus reading. TMG enhances methylation, and this may enhance protein biosynthesis, in some cases.

Elevated hair phosphorus and mental development. A phosphorus level between 17 and about 25 mg% usually indicates a milder form of protein breakdown or catabolism that is associated with mental or spiritual development or pre-development.

This is not a problem, at all. Instead, it signals an unusual change in the body, in my experience. My understanding at this time is that it indicates a breakdown of certain support cells in the brain of adults. The supporting cells, called the glial cells, are replaced by neurons, which are the “thinking cells” of the brain.

This is an exciting change because the end result is a greater brain capacity and thus much better mental functioning. This is a part of the process that is called on this website Mental or Spiritual Development. Thus, a slightly elevated phosphorus is indeed a very positive indicator on a properly performed hair mineral analysis.



Very high phosphorus readings (P > 25 mg%). At times, one will see a hair phosphorus level of 27, 30 or even 40 mg%. If you see this, first rule out the use of pubic hair. Also, rule out certain hair products that contain phosphorus, as these do exist. One that is toxic makes the hair shiny. Usually, these will explain the high reading. It is not necessarily a spiritual development pattern.
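The reading bands described in this section (an ideal of about 15-16 mg%, a catabolic range below 15, reduced vitality below 12 and below 10, a development range of roughly 17-25, and a "check the sample" zone above 25) can be summarized in a small lookup. This is only an illustrative sketch of the article's stated cutoffs; the function and label names are mine, and the band edges between the stated numbers are my interpolation.

```python
def classify_hair_phosphorus(mg_percent: float) -> str:
    """Illustrative banding of a hair phosphorus reading (mg%),
    following the cutoffs stated in the article above."""
    if mg_percent > 25:
        # First rule out pubic hair samples and phosphorus-containing hair products
        return "very high"
    if mg_percent >= 17:
        return "mildly elevated (development pattern)"
    if mg_percent >= 15:
        return "ideal"
    if mg_percent >= 12:
        return "low (catabolic)"
    if mg_percent >= 10:
        return "low vitality"
    return "extremely low vitality"

print(classify_hair_phosphorus(16))  # ideal
print(classify_hair_phosphorus(30))  # very high
```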

SUBSTANCES THAT MAY IMPEDE NORMALIZATION OF THE HAIR PHOSPHORUS READING
Propranolol. This drug is used to lower blood pressure, reduce thyroid activity and to calm the heart. It may interfere with the fiery quality of phosphorus.

CAUSES FOR A LOW HAIR PHOSPHORUS (IMPAIRED PROTEIN SYNTHESIS) INCLUDE:
1. Not enough dietary protein, or perhaps not enough high-quality protein such as meat and eggs. For example, living on soy products such as tofu, or on nuts, seeds or beans, can cause this reading, as these are all lower quality proteins. Other examples are vegan and vegetarian diets.

2. Incorrect eating habits. These include eating in the car, eating too fast, eating in noisy restaurants, eating when not at peace. Others are eating standing up, eating when upset, not chewing your food thoroughly or eating at your desk while you are working.

3. Possibly eating a less well utilized, incomplete, poorly absorbed, overcooked or spoiled protein food. For example, overcooked meat and all hard-cooked and hard-boiled eggs are harder to digest. Protein powders, no matter how nutritious, should ideally be eaten alone. When they are mixed with fruit, vegetables, water, juice and some vitamins in a blender they are often very bad food combinations that are poorly utilized by the body.

4. Problems in the digestive tract or liver that interfere with the absorption or utilization of amino acids. A common one is a chronic candida albicans infection, for example. Others might be leaky gut syndrome, an inflamed intestinal tract or an irritated bowel due to a parasitic or other infection. These problems are extremely common and plague most people to some degree.

5. Deficiencies of some nutrient or an excess of toxins in the mitochondria that impairs energy production and DNA and RNA synthesis. This is a final step in protein synthesis and a very important one. Many people suffer from what is today called mitochondrial defects for these reasons. Minerals such as zinc and magnesium, among others, are needed in the correct proportions and the correct forms for protein synthesis and energy production.

6. The presence of toxic metals. Examples are too much copper, mercury, or aluminum. The presence of the amigos or irritant forms of minerals may inflict oxidant and other types of damage on proteins in the body.

7. Low hair phosphorus as an indicator for a hidden zinc deficiency. A low hair phosphorus level frequently indicates a need for zinc, or the presence of excess copper in the body, regardless of the hair zinc or copper levels. This was another of Dr. Eck’s brilliant insights about the body and about hair mineral analysis. Zinc is required for several important enzymes involved in protein synthesis such as RNA transferase. Without adequate available zinc, protein synthesis is severely impaired.


Skin, hair and nail problems and phosphorus. If zinc becomes deficient, the body prioritizes its zinc reserves and may reduce the synthesis of less essential proteins, such as those of the skin, hair and nails. This is one cause of baldness, skin diseases and “zinc spots”, the small whitish spots on the fingernails and toenails.

One can even estimate when stress or another condition reduced the available zinc from the location of the spots along the fingernails. The fingernails usually grow about one-fourth to one-third of an inch per month. The closer a white spot is to the nail bed, the more recently the period of low zinc occurred.
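Using the growth rate above (about one-fourth to one-third of an inch per month), the distance of a white spot from the nail bed gives a rough window for when the zinc-deficient period occurred. A minimal sketch of that arithmetic (the function name is mine):

```python
def months_since_low_zinc(distance_inches: float) -> tuple:
    """Estimate how many months ago a low-zinc period occurred, from a
    white spot's distance from the nail bed (in inches).
    Assumes nail growth of 1/4 to 1/3 inch per month, as in the text."""
    fast, slow = 1.0 / 3.0, 1.0 / 4.0   # inches per month
    # Faster growth means the spot got there in fewer months
    return (distance_inches / fast, distance_inches / slow)

lo, hi = months_since_low_zinc(0.5)
print(f"roughly {lo:.1f} to {hi:.1f} months ago")  # roughly 1.5 to 2.0 months ago
```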

Impaired protein synthesis due to low zinc or high copper is also why some women develop stretch marks, baldness at times, spider veins, varicose veins, digestive problems and many, many other telltale signs of low zinc. These indicate stress and copper imbalance, at the very least, and may indicate other problems with protein synthesis.

PHOSPHORUS READINGS DURING NUTRITIONAL BALANCING PROGRAMS
Very high hair phosphorus on a retest - a celebration pattern. A sudden increase in hair phosphorus on a retest, but only when a person has followed a nutritional balancing program, is often very positive, even if the level rises to 25 or 30 mg%. It appears to reflect a breaking down and release of old, diseased tissue. The level usually returns to a more normal level on the next retest.

In this regard, tissue breakdown or catabolism is not all bad when it is controlled and is part of rebuilding, or actually remodeling, the body in a more healthful way. It is an important principle of nutritional balancing science that sometimes tissue must break down in order to be replaced by healthier tissue.

Updating the minerals. A high phosphorus of this type can represent updating the minerals in the body. Like mental development, this is a rather unusual process that rarely occurs except when one follows a complete nutritional balancing program for several years.

It involves replacing what may be called “older” minerals with “newer” ones that work better. The older ones include toxic metals such as lead, cadmium, arsenic and aluminum. The newer ones include zinc, copper, selenium and silicon. Humanity is moving in the direction of the newer minerals, and away from the older ones which are associated with violence, horror and certain diseases.

Just taking these newer minerals will not accomplish the same thing as following a carefully designed program based on a tissue mineral test and guided by Dr. Eck’s principles for interpreting the hair test and designing the program.



“Gathering firewood” pattern. Phosphorus may decrease on a retest if the body eliminates lead, mercury or perhaps some other toxic metal. This occurs commonly during a nutritional balancing program. It is not a problem, at all.

In these cases, toxic metals, especially lead, had been displacing the phosphorus level upwards. As the toxic metal is eliminated from the body, the hair phosphorus decreases to where it should be. This phenomenon is called displacement.

Another possibility is that as certain toxic metals are removed from the body, they may temporarily interfere with protein biosynthesis and cause a lower phosphorus reading. More research is needed on this topic.


”On fire” pattern. This is defined as an increase in the hair phosphorus of 2 mg% or more on a retest. This is an excellent sign, indicating better vitality and improved speed of protein biosynthesis.

“Moving fast” pattern. This is a retest hair analysis pattern when a person is following a nutritional balancing program. It consists of:

1. A hair phosphorus level greater than 16 mg%.
2. An “amigo dump” (elimination of several of the so-called amigos).
3. A double or triple positive “flip” pattern (flips are when a key ratio, or the oxidation rate, or another pattern flips or changes rapidly to become much more positive).

This is a very positive combination of patterns on a hair analysis retest, for which reason it is called the Moving Fast pattern.
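The two retest patterns just described ("on fire": a rise of 2 mg% or more on a retest; "moving fast": phosphorus above 16 mg%, an amigo dump, and at least a double positive flip) reduce to simple checks. How an "amigo dump" and "flips" would be recorded is my assumption; here they are represented as a boolean and a count.

```python
def is_on_fire(previous_p: float, retest_p: float) -> bool:
    """'On fire' pattern: hair phosphorus rose by 2 mg% or more on the retest."""
    return retest_p - previous_p >= 2.0

def is_moving_fast(retest_p: float, amigo_dump: bool, flip_count: int) -> bool:
    """'Moving fast' pattern: P > 16 mg%, an amigo dump, and a double or
    triple positive flip (represented here as a count -- an assumption)."""
    return retest_p > 16.0 and amigo_dump and flip_count >= 2

print(is_on_fire(14.0, 17.0))         # True
print(is_moving_fast(18.0, True, 2))  # True
print(is_moving_fast(18.0, False, 3)) # False
```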

PHOSPHORUS SYNERGISTS
Minerals and other substances that are essential for the action of phosphorus include most of the trace minerals and hundreds of other nutrients that are involved in energy production, cell membrane formation, protein synthesis, the nervous system and fluid balance. Calcium is absorbed with phosphorus and is a synergist in bone formation. Magnesium is a synergist in energy production and protein synthesis. Vitamin D assists phosphorus absorption, along with calcium absorption and utilization. B-complex vitamins require phosphorus for their activity, in many cases. As stated above, TMG, or trimethylglycine, is often synergistic with phosphorus.

PHOSPHORUS ANTAGONISTS
These substances block or interfere with the action of phosphorus in the body in some manner. Toxic metals such as aluminum and mercury are powerful antagonists for phosphorus on a hair mineral test.

Cortisone therapies and steroid-containing drugs. These include nasal sprays, pain remedies, cortisone shots and other steroid-containing products. In part, the devastating effects of these common remedies may be due to their detrimental effect on phosphorus metabolism in the body. This can help explain common side effects of these drugs, such as bone loss, cataract formation, thinning of the skin, exhaustion, adrenal damage and others.

As discussed above, propranolol may be antagonistic to phosphorus. More research is needed on this, however.

Sodium and potassium are both synergistic and antagonistic with phosphorus. They are needed to absorb phosphates in the intestines. However, they are also powerful solvents that can lower the calcium and phosphorus in the blood and the tissues in some cases. For example, children have more sodium and potassium in the hair and other tissues than adults. For this reason, children often have lower levels of phosphates in their blood and tissues than adults. However, their hair phosphorus level should be about the same as adults at about 15-17 mg%.

Much more can be written about phosphorus. For more technical information about phosphorus, see the Mineral Reference Guide in the back of the text, Nutritional Balancing And Hair Mineral Analysis.

THE HAIR CALCIUM/PHOSPHORUS RATIO
Some doctors use this ratio to assess the oxidation rate. Dr. Eck rejected this idea completely. It is not accurate enough, and it is too unreliable because it can be influenced by many factors.

It is true that calcium represents stability and sluggishness when it is too high. Meanwhile, the hair phosphorus represents adequate protein synthesis. However, this is not the same as the oxidation rate, which depends upon adrenal and thyroid activity. Dr. Melvin Page, DDS used the Ca/P ratio in the blood serum to assess the sympathetic and parasympathetic balance, but this is entirely different from the hair mineral biopsy concept.

Also, the hair Ca/P ratio can be impacted by several other patterns that will skew the ratio severely. These are:
1. Three highs/four highs pattern. In this case, a person could be in fast oxidation, but the hair Ca/P ratio will indicate slow oxidation. This is common and very confusing if one uses the hair Ca/P to assess the oxidation rate.

2. Four lows pattern. In this case, a person can be in slow oxidation, but the hair Ca/P ratio will often indicate fast oxidation. This is also terribly confusing and not true.

3. Pubic hair samples. If pubic hair is used for testing, which I do not recommend, the phosphorus level is often elevated in these samples. This will skew the hair Ca/P ratio.

For all these reasons, I avoid using the hair Ca/P ratio for assessing the oxidation rate and it is not used in nutritional balancing science at this time (2014).


http://chestofbooks.com/health/general/Healthy-Life-Magazine/Phosphorus-And-The-Nerves.html
Phosphorus And The Nerves

We have to remember that the nervous system is two-fold. The one, or conscious portion, consists of the brain and spinal cord, from which all the nerves or branches travel to all parts of the body and give us dominion over them. The other, or subconscious, called the sympathetic nervous system, lies on either side of the front of the spine as two long chains with centres, or ganglia, at intervals. This second system is not within our control and has to do with the regulation of our vegetative functions, including the bulk of the digestive process.


All nerves, whether they come from the brain or from the sympathetic system, ranging to their smallest terminals, are built alike of cells, and these cells secrete a complex fatty substance, called lecithin, whose dominant element is phosphorus. This phosphorus has to be supplied to the body with food, and as food, and it cannot be properly utilised or assimilated by the body or used by the nerves to build up their lecithin unless it is eaten in the form of organic compounds.

The tissues of the body are continually dying, as a result of work done, and are continually being replaced by fresh young tissues as needed. It is the function of the nerves to manage this work for us as well as to similarly arrange for reproduction.

In order to control the functions of the various organs and tissues and to regulate the rate at which they reproduce themselves, the nerves extend their terminal branches, not only into every tissue, but into every microscopical unit of such tissue, and the part of the cell which represents the nerve terminal is the inner structure called the nucleus.

Now it will be obvious that the more the two nervous systems are worked the greater will be their depletion of lecithin and the more need there will be for fresh supplies of phosphorus in the daily food rations.

The person who works hard, whether it be manual labour or brain work, needs food and rest at intervals in order that the nerves may recuperate and replenish their stocks of lecithin.

A goodly proportion of uncooked foods rich in phosphorus must be supplied to make good the wear and tear, and the digestion must equally be efficient if these food-stuffs are to become assimilated.

Cooking of food to a large extent breaks down the organic phosphorus salts and makes them inorganic. In this state they are of but little use to the body. Poor digestion associated with putrefactive fermentation equally converts the organic salts into inorganic ones. These pass into the blood and are promptly eliminated by the kidneys as waste (phosphaturia) and thus they never reach the nerves at all.

We must remember that phosphorus is usually found in natural foods bound up with the proteid and especially with that proteid which has to do with the reproduction of the species. For this reason man instinctively resorts to the use of egg-yolks, and to the various seeds (such as nuts, wheat, barley, etc.) because of their rich phosphorus content.

These proteid-bound phosphorus salts can only be properly utilised when the hydrochloric acid of the stomach juice is well formed, for it converts them into acid salts which are readily absorbed. Therefore to ensure free absorption we must always remember to give the phosphorus-containing foods with such meals as will cause free secretion of the gastric acid.

When fermentation is active and the stomach juices are weakened the germs of the intestines rapidly break up the phosphorus constituents of the proteids and make them inorganic. Therefore the first thing to do when a person is found to be suffering from phosphaturia is to stop the intestinal fermentation by a right diet, clear the bowels of their accumulated waste poisons and give the nerves plenty of rest. Another consideration to bear in mind is that the nerves need fat wherewith to build up the lecithin. An excessive fermentative sourness of the stomach makes the food so acid when sent into the bowels that the bile, pancreatic and other intestinal juices cannot neutralise them, and so the fats themselves are not emulsified and digested, which fully accounts for the mental depression and debility of which these patients complain.

People who are suffering from "nerves" in any form need plenty of pure fat (fresh dairy butter, cream, nut butter, fruit-oils, etc.) and an abundance of natural fresh vegetable products at once rich in phosphorus and iron and in organic alkaline acid-neutralising earthy salts. These arrest fermentation and so enable the phosphorus and the fat to become duly assimilated.

-------------------------------

Don't be afraid to look through the different threads about health and nutrition in this forum yourself. :cuckoo:

Without forgetting the following: Nerve agent
Nerve agents are a class of phosphorus-containing organic chemicals (organophosphates) that disrupt the mechanisms by which nerves transfer messages to organs. The disruption is caused by blocking acetylcholinesterase, an enzyme that catalyzes the breakdown of acetylcholine, a neurotransmitter.
Poisoning by a nerve agent leads to contraction of pupils, profuse salivation, convulsions, involuntary urination and defecation, and death by asphyxiation due to a loss of control of the respiratory muscles. Some nerve agents are readily vaporized or aerosolized, and the primary portal of entry into the body is the respiratory system. Nerve agents can also be absorbed through the skin, requiring that those likely to be subjected to such agents wear a full body suit in addition to a respirator.

Biological effects

As their name suggests, nerve agents attack the nervous system of the human body. All such agents function the same way: by inhibiting the enzyme acetylcholinesterase, which is responsible for the breakdown of acetylcholine (ACh) in the synapse. ACh gives the signal for muscles to contract, thus, if it cannot be broken down, muscles are prevented from relaxing.

Initial symptoms following exposure to nerve agents (like sarin) are a runny nose, tightness in the chest, and constriction of the pupils. Soon after, the victim will have difficulty breathing and will experience nausea and drooling. As the victim continues to lose control of their bodily functions, they will involuntarily salivate, lacrimate, urinate, defecate, and experience gastrointestinal pain and vomiting. Blisters and burning of the eyes and/or lungs may also occur. This phase is followed by myoclonic jerks and then status epilepticus. Death then comes via complete respiratory depression, most likely via the excessive peripheral activity at the neuromuscular junction of the diaphragm.

The effects of nerve agents are long lasting and increase with continued exposure. Survivors of nerve agent poisoning almost invariably suffer chronic neurological damage. This neurological damage can also lead to continuing psychiatric effects.

Mechanism of action

When a normally functioning motor nerve is stimulated, it releases the neurotransmitter acetylcholine, which transmits the impulse to a muscle or organ. Once the impulse is sent, the enzyme acetylcholinesterase immediately breaks down the acetylcholine in order to allow the muscle or organ to relax.

Nerve agents disrupt the nervous system by inhibiting the function of the enzyme acetylcholinesterase by forming a covalent bond where acetylcholine would break down (undergo hydrolysis). Acetylcholine thus builds up and continues to act so that any nerve impulses are continually transmitted and muscle contractions do not stop. This same action also occurs at the gland and organ levels, resulting in uncontrolled drooling, tearing of the eyes (lacrimation) and excess production of mucus from the nose (rhinorrhea).

The structures of the complexes of soman (one of the most toxic nerve agents) with acetylcholinesterase from Torpedo californica have been solved by X-ray crystallography (PDB codes: 2wfz, 2wg0, 2wg1, and 1som). The mechanism of action of soman can be seen in the 2wfz structure, for example.

Antidotes
Atropine and related anticholinergic drugs act as antidotes to nerve agent poisoning because they block acetylcholine receptors, but they are poisonous in their own right. (Some synthetic anticholinergics, such as biperiden, may counteract the central symptoms of nerve agent poisoning better than atropine, since they pass the blood–brain barrier better than atropine.) While these drugs will save the life of a person affected with nerve agents, that person may be incapacitated briefly or for an extended period, depending on the amount of exposure. The endpoint of atropine administration is the clearing of bronchial secretions. Atropine for field use by military personnel is often loaded in an autoinjector, for ease of use in stressful conditions.

Pralidoxime chloride, also known as 2-PAM chloride, is also used as an antidote. Rather than counteracting the initial effects of the nerve agent on the nervous system like atropine, pralidoxime chloride reactivates the poisoned enzyme (acetylcholinesterase) by scavenging the phosphoryl group attached on the functional hydroxyl group of the enzyme. Though safer to use, it takes longer to act.

Revival of acetylcholinesterase with pralidoxime chloride works more effectively on nicotinic receptors while blocking acetylcholine receptors with atropine is more effective on muscarinic receptors. Often, severe cases of the poisoning are treated with both drugs.

Countermeasures in development
Butyrylcholinesterase is a prophylactic countermeasure against organophosphate nerve agents. It binds nerve agent in the bloodstream before it can exert effects in the nervous system. Because it is a biological scavenger (and universal target), it is currently the only therapeutic agent effective in providing complete stoichiometric protection against the entire spectrum of organophosphate nerve agents.

Classes
There are two main classes of nerve agents. The members of the two classes share similar properties and are given both a common name (such as sarin) and a two-character NATO identifier (such as GB).

G-series
The G-series is thus named because German scientists first synthesized them. G series agents are known as non-persistent, while the V series are persistent. All of the compounds in this class were discovered and synthesized during or prior to World War II, led by Gerhard Schrader (later under the employment of IG Farben).

This series is the first and oldest family of nerve agents. The first nerve agent ever synthesised was GA (tabun) in 1936. GB (sarin) was discovered next in 1939, followed by GD (soman) in 1944, and finally the more obscure GF (cyclosarin) in 1949. GB was the only G agent that was fielded by the US as a munition, in rockets, aerial bombs, and artillery shells.

V-series

The V-series is the second family of nerve agents and contains five well known members: VE, VG, VM, VR, and VX, along with several more obscure analogues.[10] This class of compounds is also sometimes known as Tammelin's esters, after Lars-Erik Tammelin of the Swedish National Defence Research Institute.

The most studied agent in this family, VX, was invented in the 1950s at Porton Down in the United Kingdom. Ranajit Ghosh, a chemist at the Plant Protection Laboratories of Imperial Chemical Industries (ICI) was investigating a class of organophosphate compounds (organophosphate esters of substituted aminoethanethiols). Like Schrader, Ghosh found that they were quite effective pesticides. In 1954, ICI put one of them on the market under the trade name Amiton. It was subsequently withdrawn, as it was too toxic for safe use. The toxicity did not go unnoticed and some of the more toxic materials had been sent to the British Armed Forces research facility at Porton Down for evaluation. After the evaluation was complete, several members of this class of compounds became a new group of nerve agents, the V agents (depending on the source, the V stands for Victory, Venomous, or Viscous). The best known of these is probably VX, with VR ("Russian V-gas") coming a close second (Amiton is largely forgotten as VG). All of the V-agents are persistent agents, meaning that these agents do not degrade or wash away easily and can therefore remain on clothes and other surfaces for long periods. In use, this allows the V-agents to be used to blanket terrain to guide or curtail the movement of enemy ground forces. The consistency of these agents is similar to oil; as a result, the contact hazard for V-agents is primarily – but not exclusively – dermal. VX was the only V-series agent that was fielded by the US as a munition, in rockets, artillery shells, airplane spray tanks, and landmines.

Novichok agents
The Novichok (Russian for "newcomer") agents are a series of organophosphate compounds that were developed in the Soviet Union from the mid-1960s to the 1990s. The goal of this program was to develop and manufacture highly deadly chemical weapons that were unknown to the West. These new agents were designed to be undetectable by standard NATO chemical detection equipment and to defeat chemical protective gear.

In addition to the newly developed "third generation" weapons, binary versions of several Soviet agents were developed and were designated as "Novichok" agents.

Insecticides
Some insecticides, including carbamates and organophosphates such as dichlorvos, malathion and parathion, are nerve agents. The metabolism of insects is sufficiently different from mammals that these compounds have little effect on humans and other mammals at proper doses; but there is considerable concern about the effects of long-term exposure to these chemicals by farm workers and animals alike. At high enough doses, acute toxicity and death can occur through the same mechanism as other nerve agents. Organophosphate pesticide poisoning is a major cause of disability in many developing countries and is often the preferred method of suicide.
https://en.wikipedia.org/wiki/Nerve_agent
 
I woke up the other morning with GABA and other things from a dream in mind.
So let's take a look at gamma-aminobutyric acid, GABRA2, and the autonomic nervous system...


How the Autonomic Nervous System Works
An overview to one of the most important systems in your body
The nervous system is one of the most incredible parts of the human body. Your nervous system takes in all the information in the world around you and sends a message to your muscles, allowing you to make your way through the world. Your autonomic nervous system also controls all of your vital functions, many of which you aren't consciously aware of. In short, it keeps you alive.

While it might feel like a disservice that such an important part of your body is under-recognized by design, it's probably a good thing that your autonomic nervous system is out of your conscious control.
Like many things taken for granted, the significance of the autonomic nervous system is suddenly recognized when something goes wrong. While few diseases attack the autonomic nervous system alone, almost all medical disorders have some impact on the autonomics. In order to fully understand disease and health, it's important to know how the autonomic nervous system works.
Your autonomic nervous system lies almost entirely outside of the central nervous system and involves two main parts: the craniosacral part (parasympathetic), and the thoracolumbar part (sympathetic). These are sometimes thought of as being opposite to each other, ultimately striking a balance within the body.

The parasympathetic system is associated with relaxation, digestion, and generally taking it easy. The sympathetic system is responsible for the "fight or flight" response.

One of the interesting things about the autonomic nervous system is that, almost without exception, the nerves synapse in a clump of nerves called a ganglion before the message is transmitted to the target organ, such as a salivary gland.

This allows for another level of communication and control.
For most of us, the autonomic nervous system is generally out of our conscious control. However, the cortex of your brain, normally associated with conscious thought, can change your autonomic nervous system to some degree. In the cerebrum, the insula, anterior cingulate cortex, substantia innominata, amygdala and ventromedial prefrontal cortex communicate with the hypothalamus to impact your autonomic nervous system. In the brainstem, the nucleus tractus solitarius is the main command center for the autonomic nervous system, sending input largely through cranial nerves IX and X.

Because the cortex is linked to the autonomic nervous system, you may be able to control your autonomic nervous system through conscious effort, especially with some practice. Highly trained people, such as advanced yoga practitioners, may be able to intentionally slow their heart rate or even control their body temperature through meditative practices. For most of us, though, focusing on things that are relaxing rather than stressful, or just taking a large breath when you notice your sympathetic nervous system is causing a fast pulse or anxious feeling, can bring your parasympathetic nervous system back into a degree of control.


The Autonomic Nervous System And Dysautonomia

The autonomic nervous system controls the unconscious bodily functions, such as heart rate, digestion, and breathing patterns. It consists of two parts: the sympathetic system and the parasympathetic system.

The sympathetic nervous system can best be thought of as controlling the fight or flight reactions of the body, producing the rapid heart rates, increased breathing, and increased blood flow to the muscles that are needed to escape danger or cope with stress.

The parasympathetic nervous system controls the quiet body functions, such as the digestive system. So: the sympathetic system gets us ready for action, while the parasympathetic system gets us ready for rest. Normally, the parasympathetic and sympathetic components of the autonomic nervous systems are in perfect balance, from moment to moment, depending on the body’s instantaneous needs.

In people suffering from dysautonomia, the autonomic nervous system loses that balance, and at various times the parasympathetic or sympathetic systems inappropriately predominate. Symptoms can include frequent vague but disturbing aches and pains, faintness (or even actual fainting spells), fatigue and inertia, severe anxiety attacks, tachycardia (fast heart rate), hypotension (low blood pressure), poor exercise tolerance, gastrointestinal symptoms, sweating, dizziness, blurred vision, numbness and tingling, pain, and (quite understandably) anxiety and depression.

Sufferers of dysautonomia can experience all these symptoms or just a few of them.

They can experience one cluster of symptoms at one time, and another set of symptoms at other times. The symptoms are often fleeting and unpredictable, but on the other hand they can be triggered by specific situations or actions. (Some people have symptoms with exertion, for instance, or when standing up, or after ingesting certain foods.) And since people with dysautonomia are usually normal in every other way, when the doctor does a physical exam he or she often finds no abnormalities.

Because the physical exam and laboratory tests are usually quite normal, doctors (being trained in the sciences, and thus, trained to expect objective evidence of disease) tend to write people with dysautonomia off as being mentally unstable (or, more often, as having an anxiety disorder).

What Causes Dysautonomia?
Dysautonomia can be caused by many different things; there is not one single, universal cause. It seems clear that some people inherit the propensity to develop the dysautonomia syndromes, since variations of dysautonomia often seem to run in families. Viral illnesses can trigger a dysautonomia syndrome. So can exposure to chemicals. (Gulf War Syndrome is, in effect, dysautonomia: low blood pressure, tachycardia, fatigue and other symptoms that, government denials aside, appear to have been triggered by exposure to toxins.) Dysautonomia can result from various types of trauma, especially trauma to the head and chest — including surgical trauma. (It has been reported to occur after breast implant surgery.) Dysautonomias caused by viral infections, toxic exposures, or trauma often have a rather sudden onset. Chronic fatigue syndrome, for instance, most classically begins following a typical viral-like illness (sore throat, fever, and muscle aches), but any of the dysautonomia syndromes can have a similar onset.
https://www.verywell.com/dysautonomia-1745423

Neurotransmitters: The Chemical Messengers of the Brain

What are Neurotransmitters?
It is believed that the brain contains several hundred different types of chemical messengers (neurotransmitters) that act as communication agents between different brain cells. These chemical messengers are molecular substances that can affect mood, appetite, anxiety, sleep, heart rate, temperature, aggression, fear and many other psychological and physical occurrences.

Scientists have identified three major categories of neurotransmitters in the human brain:

1. Biogenic amine neurotransmitters have been studied the longest and are probably the best understood in terms of their relationship to psychological disturbances. Six of the main biogenic amine neurotransmitters are:

Serotonin is a chemical messenger that plays a role in modulating anxiety, mood, sleep, appetite, and sexuality. Selective serotonin reuptake inhibitors (SSRIs) are generally considered first-line medications to treat panic disorder.

Norepinephrine, which influences sleep and alertness, is believed to be correlated to the fight or flight stress response.

Epinephrine is usually thought of as a stress hormone managed by the adrenal system, but it also acts as a neurotransmitter in the brain.

Dopamine influences body movement and is also believed to be involved in motivation, reward, reinforcement and addictive behaviors. Many theories of psychosis suggest that dopamine plays a role in psychotic symptoms.

Histamine is thought to influence arousal, attention, and learning. It is also released in response to an allergic reaction. Antihistamines, which are commonly used to treat allergies, have common side effects of sedation, weight gain, and low blood pressure.

Acetylcholine is believed to be associated with muscle activation, learning, and memory. Alzheimer’s type dementia has been linked to acetylcholine function.

2. Peptide neurotransmitters are believed to be associated with the mediation of the perception of pain, stimulation of the appetite, regulation of mood and other multiple functions. Abnormalities in peptide neurotransmitters have been associated with the development of schizophrenia, eating disorders, Huntington’s disease and Alzheimer’s disease.

Cholecystokinin (CCK), a fairly new discovery, is a peptide that has received a lot of attention in the last decade. It is believed that CCK increases relaxation-inducing GABA while decreasing dopamine. Studies have linked CCK with anxiety and panic attacks in people with panic disorder.

3. Amino acid neurotransmitters are viewed by some experts as the main players in the neurotransmission process. There are two major amino acid neurotransmitters:

Gamma-aminobutyric acid (GABA) is a major inhibitory neurotransmitter that acts through a negative feedback system to block the transmission of a signal from one cell to another. It is important for balancing the excitation in the brain. Benzodiazepines (anti-anxiety drugs) work on the GABA receptors of the brain, inducing a state of relaxation.
Glutamate is an excitatory neurotransmitter and is the most abundant chemical messenger in the brain. It is believed to be involved in learning and memory. Certain diseases (such as Alzheimer’s disease) or brain injury (such as stroke) can cause too much glutamate to accumulate. This can set the stage for excitotoxicity, a process that can lead to damage or death of the affected brain cells.

It is important to note that GABA and glutamate are carefully orchestrated to balance each other. Dysfunction of one of these amino acid neurotransmitters affects the function of the other. Some experts believe that their excitatory and inhibitory balance influences all brain cells.

Neurotransmitters are Team Players
All chemical messengers in the brain have immense interconnectivity. Their function relies on a system of checks and balances during each moment of life. If one part of the system fails, others can't do their job properly. Panic disorder is just one of many physical and psychological illnesses that are believed to be influenced by the complex interplay of neurotransmitters.

[Figure 1.1: schematic of a single neuron]


Neurons (Brain Cells)

To get a better idea of how the brain functions through chemical communication, let’s start by looking at figure 1.1, which shows a basic schematic of a single neuron.

The center of the neuron is called the cell body or soma. It contains the nucleus, which houses the cell’s deoxyribonucleic acid (DNA) or genetic material.

The cell’s DNA defines what type of cell it is and how it will function.

At one end of the cell body are the dendrites, which are receivers of information sent by other brain cells (neurons). The term dendrite, which comes from the Greek word for tree, is used because the dendrites of a neuron resemble tree branches.

At the other end of the cell body is the axon. The axon is a long tubular fiber that extends away from the cell body. The axon acts as a conductor of electrical signals.

At the base of the axon are the axon terminals. These terminals contain vesicles where chemical messengers, also known as neurotransmitters, are stored.

Neurotransmitters (Chemical Messengers)
It is believed that the brain contains several hundred different types of chemical messengers (neurotransmitters). Generally, these messengers are categorized as either excitatory or inhibitory. An excitatory messenger stimulates the electrical activity of the brain cell, whereas an inhibitory messenger calms this activity. The activity of a neuron (brain cell) -- or whether or not it continues to release, or pass on, chemical messages -- is largely determined by the balance of these excitatory and inhibitory mechanisms.

Scientists have identified specific neurotransmitters that are believed to be related to anxiety disorders. The chemical messengers that are typically targeted with medications commonly used to treat panic disorder include:

Serotonin. This neurotransmitter plays a role in modulating a variety of body functions and feelings, including our mood.

Low serotonin levels have been linked to depression and anxiety. The antidepressants called selective serotonin reuptake inhibitors (SSRIs) are considered to be the first-line agents in the treatment of panic disorder. SSRIs increase the level of serotonin in the brain, resulting in decreased anxiety and inhibition of panic attacks.

Norepinephrine is a neurotransmitter that is believed to be associated with the fight or flight stress response. It contributes to feelings of alertness, fear, anxiety, and panic. Selective serotonin-norepinephrine reuptake inhibitors (SNRIs) and tricyclic antidepressants affect the serotonin and norepinephrine levels in the brain, resulting in an anti-panic effect.

Gamma-aminobutyric acid (GABA) is an inhibitory neurotransmitter that acts through a negative feedback system to block the transmission of a signal from one cell to another. It is important for balancing the excitation in the brain. Benzodiazepines (anti-anxiety drugs) work on the GABA receptors of the brain inducing a state of relaxation.

Neurons and Neurotransmitters Working Together
When a brain cell receives sensory information, it fires an electrical impulse that travels down the axon to the axon terminal where chemical messengers (neurotransmitters) are stored. This triggers the release of these chemical messengers into the synaptic cleft, which is a small space between the sending neuron and the receiving neuron.

As the messenger makes its journey across the synaptic cleft, several things may happen:

1. The messenger may be degraded and knocked out of the picture by an enzyme before it reaches its target receptor.
2. The messenger may be transported back into the axon terminal through a reuptake mechanism and be deactivated or recycled for future use.
3. The messenger may bind to a receptor (dendrite) on a neighboring cell and complete the delivery of its message. The message may then be forwarded to the dendrites of other neighboring cells. But, if the receiving cell determines that no more of the neurotransmitters are needed, it will not forward the message. The messenger will then continue to try to find another receiver of its message until it is deactivated or returned to the axon terminal by the reuptake mechanism.

For optimal brain function, neurotransmitters must be carefully balanced and orchestrated. They are often interconnected and rely on each other for proper function. For example, the neurotransmitter GABA, which induces relaxation, can only function properly with adequate amounts of serotonin. Many psychological disturbances, including panic disorder, may be the result of poor quality or low quantities of certain neurotransmitters or neuron receptor sites, the release of too much of a neurotransmitter or the malfunctioning of the reuptake mechanisms of the neuron.

A greatest hit:
spyraal:
Quote from: Cs on 23 Sept 2000

Q: Is this liquid that has been truncated a chemical transmitter?
A: Yes.
Q: And would this chemical transmitter, if it were allowed to flow, cause significant alterations
in other segments of the DNA?
A: Yes.
Q: So, there is a segment of code that is in there, that is deliberately inserted, to truncate this flow
of liquid, which is a chemical transmitter, or neuropeptide, which would unlock significant
portions of our DNA?
A: Close Biogenetic engineering.


I don't know if someone has figured that one out already (meaning the "liquid"), but my guess based on the transcript would be that it is some kind of neurotransmitter, since neurotransmitters accommodate brain functions and do play a major role in our awareness. What if the body's former ability for production of this "fluid" or neurotransmitter was modified and/or replaced by the Orion STS designers with another one?... Hmm.. Maybe, that change would be enough to create a "new man" in terms of his worldview... :halo:
Quote from: wikipedia

There are many different ways to classify neurotransmitters. Dividing them into amino acids, peptides, and monoamines is sufficient for some purposes.

Approximately ten "small-molecule neurotransmitters" are known:

* Acetylcholine (ACh)
* Monoamines: norepinephrine (NE), dopamine (DA), serotonin (5-HT), melatonin, histamine
* Amino acids: glutamate, gamma aminobutyric acid (GABA), aspartate, glycine
* Purines: Adenosine, ATP, GTP, and their derivatives

In addition, over 50 neuroactive peptides have been found, and new ones are discovered on a regular basis. Many of these are "co-released" along with a small-molecule transmitter, but in some cases a peptide is the primary transmitter at a synapse.

Single ions, such as synaptically released zinc, are also considered neurotransmitters by some, as are a few gaseous molecules such as nitric oxide (NO) and carbon monoxide (CO). These are not neurotransmitters by the strict definition, however, because although they have all been shown experimentally to be released by presynaptic terminals in an activity-dependent way, they are not packaged into vesicles.

Not all neurotransmitters are equally important. By far the most prevalent transmitter is glutamate, which is used at well over 90% of the synapses in the human brain. The next most prevalent is GABA, which is used at more than 90% of the synapses that don't use glutamate. Note, however, that even though other transmitters are used in far fewer synapses, they may be very important functionally: the great majority of psychoactive drugs exert their effects by altering the actions of some neurotransmitter system, and the great majority of these act through transmitters other than glutamate or GABA. Addictive drugs such as cocaine, amphetamine, and heroin, for example, exert their effects primarily on the dopamine system.
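A quick back-of-the-envelope check shows how little room the quoted figures leave for everything else. Taking 90% as a conservative floor for both numbers (glutamate at "well over 90%" of synapses, GABA at "more than 90%" of the remainder), all other transmitter systems combined account for at most about 1% of synapses; the percentages below are illustrative arithmetic, not measured data:

```python
# Illustrative arithmetic on the quoted synapse shares.
# Floors taken from the text: glutamate >= 90% of synapses,
# GABA >= 90% of the non-glutamate remainder.
glutamate = 0.90
gaba = 0.90 * (1 - glutamate)   # 90% of the remaining 10% of synapses
others = 1 - glutamate - gaba   # every other transmitter system combined

print(f"glutamate: {glutamate:.0%}")  # 90%
print(f"GABA:      {gaba:.0%}")       # 9%
print(f"all others combined: at most {others:.0%}")  # 1%
```

This makes the article's closing point vivid: the dopamine, serotonin, and norepinephrine systems targeted by most psychoactive drugs occupy only a sliver of the brain's synapses.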


Just my 2 cents, here!
:)
https://cassiopaea.org/forum/index.php/topic,10663.msg76261.html#msg76261


GABA


GABA metabolic relation
http://scholarpedia.org/article/File:Roberts_GABA_metabolic_relations.gif

Gamma-aminobutyric acid:

The term GABA refers to the simple chemical substance γ-aminobutyric acid (NH2CH2CH2CH2COOH). It is the major inhibitory neurotransmitter in the central nervous system. Its presence in the brain was first reported in 1950.

Contents:
1 Discovery of GABA and early history
2 Basic neurophysiology of GABA
3 A brief synopsis of the neurochemistry of GABA
4 The inhibited nervous system: a global view of GABAergic function (Roberts, 1976, 1986b, 1991)
5 GABA and diseases of the CNS
6 GABA, The quintessential neurotransmitter: electroneutrality, fidelity, and specificity (Roberts, 1993)
7 References
8 See Also

Discovery of GABA and early history

The history of GABA in brain began with the discovery of the unique presence of this substance in tissue of the vertebrate central nervous system (CNS). In the course of the study of free amino acids of various normal and neoplastic tissues in several species of animals by paper chromatography, relatively large amounts of an unidentified ninhydrin-reactive material were found in extracts of fresh brains of mouse, rat, rabbit, guinea-pig, human, frog, salamander, turtle, alligator, and chick. At most, only traces of this material were found in a large number of extracts of many other normal and neoplastic tissues and in urine and blood. The unknown material was isolated from suitably prepared paper chromatograms. A study of the properties of the substance in mouse brain revealed it to be GABA. The initial identification, based on the co-migration of the unknown with GABA on paper chromatography in three different solvent systems, was followed by an absolute identification of the GABA in brain extracts by the isotope derivative method. An abstract was submitted to the Federation meetings in March of 1950 reporting the presence of GABA in brain (Roberts and Frankel, 1950a). Three papers dealing with the occurrence of GABA in brain appeared later that year in the same issue of the Journal of Biological Chemistry (Roberts and Frankel, 1950b; Udenfriend, 1950; Awapara et al., 1950). Detailed histories of the early chemical work outlined above have been published (e.g. see Roberts, 1986a).


The 3 methylene groups between the amino and carboxyl groups of GABA endow it with great structural flexibility, allowing it freedom to explore the surrounding chemical space with a continuum of structures ranging from full extension (Figure 1, upper right) to the contiguity of the amino and carboxyl groups shown in the cyclic form (Figure 1, lower left). Therefore, GABA has potential capacity to engage in innumerable energy-minimizing, mutually shaping interactions with molecular entities encountered in its immediate environment.
Basic neurophysiology of GABA
For several years the presence of GABA in brain remained a biochemical curiosity and a physiological enigma. It was remarked in the first review written on GABA that “Perhaps the most difficult question to answer would be whether the presence in the gray matter of the central nervous system of uniquely high concentrations of γ-aminobutyric acid and the enzyme which forms it from glutamic acid has a direct or indirect connection to conduction of the nerve impulse in this tissue” (Roberts, 1956). However, later that year, the first suggestion that GABA might have an inhibitory function in the vertebrate nervous system came from studies in which it was found that topically applied solutions of GABA exerted inhibitory effects on electrical activity in the brain (Hayashi and Nagai, 1956). In 1957, the suggestion was made that indigenously occurring GABA might have an inhibitory function in the central nervous system from studies with convulsant hydrazides (Killam, 1957; Killam and Bain, 1957). Also in 1957, suggestive evidence for an inhibitory function for GABA came from studies that established GABA as the major factor in brain extracts responsible for the inhibitory action of these extracts on the crayfish stretch receptor system (Bazemore et al., 1957). Within a brief period the activity in this field increased greatly, so that the research being carried out ranged all the way from the study of the effects of GABA on ionic movements in single neurons to clinical evaluation of the role of the GABA system in epilepsy, schizophrenia, mental retardation, etc. This surge of interest warranted the convocation in 1959 of the first truly interdisciplinary neuroscience conference ever held, at which were present most of the individuals who had played a role in opening up this exciting field (Roberts et al, 1960).

During the aforementioned period, GABA became established as the major inhibitory neurotransmitter in the central nervous system (CNS). It was found to fulfill the "classical" requirements for a neurotransmitter: proof of identity of postsynaptic action with that of the natural transmitter, presence in inhibitory nerves, releasability from terminals of identified nerves, and the presence of a rapid inactivating mechanism at synapses. Information on the GABA system, as a whole, up to 1960 has been thoroughly reviewed and extensively documented (Roberts and Eidelberg, 1960, and Roberts, et al., 1960) and major updates have appeared at intervals (Roberts, et al., 1976; Bowery, 1984; Olsen and Venter, 1986; Martin and Olsen, 2000).
A brief synopsis of the neurochemistry of GABA
GABA is formed in the CNS of vertebrate organisms to a large extent, if not entirely, from L-glutamic acid (Figure 2). The reaction (reaction 5) is catalyzed by L-glutamic acid decarboxylase (GAD), an enzyme found in mammalian organisms largely in neurons in the CNS, although there now are many reports of the occurrence of both GAD and GABA in neurons in the peripheral nervous system, as well as in some nonneural tissues (e.g., pancreas) and in body fluids. Brain GAD catalyzes the rapid α-decarboxylation of L-glutamic acid and, of the rest of the naturally occurring amino acids, only L-aspartic acid to a very slight extent. Genes for two brain GAD isoforms have been cloned, as have families of other GABA-related proteins, such as 19 GABAA receptors and 2 to 3 GABAB receptors. It now is possible to visualize GABA, itself, and most of the proteins involved in GABA metabolism, release, and action on sections of the CNS at the light and electron microscopic levels, employing antisera to the purified components and peroxidase-labelling techniques. This has led to much more definitive data than were hitherto available through cell fractionation and lesion studies and has given detailed information of the interrelationships of GABA neurons in various nervous system regions (Roberts, 1978, 1980, 1984, 1986a).
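Reaction 5 above, the GAD-catalyzed α-decarboxylation, can be summarized as follows (a standard-textbook rendering supplied here for clarity, not taken verbatim from the source; PLP is the pyridoxal phosphate cofactor discussed later):

```latex
\text{L-glutamate} \;\xrightarrow{\;\text{GAD (PLP)}\;}\; \text{GABA} + \mathrm{CO_2}
```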

The reversible transamination of GABA with α-ketoglutarate (reaction 9) is catalyzed by a mitochondrial aminotransferase, termed GABA-transaminase (GABA-T), which in the CNS is found chiefly in the gray matter but also occurs in other tissues. The products of the transaminase reaction are succinic semialdehyde and glutamic acid. There is present an excess of a dehydrogenase that catalyzes the oxidation of succinic semialdehyde to succinic acid, which in turn can be oxidized via the reactions of the tricarboxylic acid cycle. Because succinic semialdehyde is oxidized to succinate without the intermediate formation of succinyl-coenzyme A, one consequence of the operation of the GABA shunt in brain, through which 10% to 20% of glucose metabolism may flow, is a decreased rate of phosphorylation of guanosine diphosphate (GDP) to guanosine triphosphate (GTP). The latter may be involved in activation of G proteins, formation of deoxy GTP for mitochondrial DNA synthesis, and synthesis of adenosine triphosphate (ATP). Although the exact functional significance of this GABA-dependent metabolic shunt still is not apparent, it seems certain that GABA plays a special metabolic role in brain mitochondria, which is abrogated when inhibition of GABA-T occurs. Of the keto acids normally present, only α-ketoglutarate is an amino group acceptor. In addition to GABA, several other ω amino acids also are effective amino donors.
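The two shunt reactions described in this paragraph (the GABA-T transamination, reaction 9, and the subsequent oxidation of succinic semialdehyde) can be written out as below; the enzyme name SSADH (succinic semialdehyde dehydrogenase) and the NAD+ cofactor are standard biochemistry supplied here, not named in the source text:

```latex
\mathrm{GABA} + \alpha\text{-ketoglutarate}
  \;\xrightarrow{\;\text{GABA-T}\;}\;
  \text{succinic semialdehyde} + \text{L-glutamate}
\\[4pt]
\text{succinic semialdehyde} + \mathrm{NAD^{+}}
  \;\xrightarrow{\;\text{SSADH}\;}\;
  \text{succinate} + \mathrm{NADH} + \mathrm{H^{+}}
```

Because this route reaches succinate without passing through succinyl-CoA, flux through the shunt bypasses the GTP-generating step of the tricarboxylic acid cycle, which is the point the paragraph makes about decreased GDP-to-GTP phosphorylation.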

Steady-state concentrations of GABA in various brain areas normally are governed by the activity of GAD and not by GABA-T. In many inhibitory nerves, both GAD and GABA-T are present and are found throughout the neuron, GAD being more highly concentrated in the presynaptic terminals than elsewhere. The GABA-T is contained in mitochondria of all neuronal regions. GABA is a precursor of several substances found in nervous tissue and cerebrospinal fluid, among which are GABA histidine (homocarnosine), GABA-1-methylhistidine, γ-guanidinobutyric acid, GABA-1-cystathionine, α-(GABA)-L-lysine, GABA-choline, and putreanine [(N-4-aminobutyrl)-3-aminopropionic acid]. Homocarnosine is present exclusively in brain and cerebrospinal fluid, and there are data suggesting important roles for it as an antioxidant, an optimizer of immune function, and a modifier of brain excitability.

Important controls in regulation of the GABA system might be exerted at points related to the availability of glutamic acid, the substrate for GABA synthesis in nerve endings by GAD (reaction 5). Glutamate carbon can originate from glucose through glycolysis and the Krebs cycle (upper right-hand corner of Figure 2), from glutamine subsequent to uptake (reaction 6), and from proline (reactions 3 and 4) and ornithine (reactions 2 and 4). Ornithine (reactions 2 and 3), but not glutamate, is an effective precursor of proline in nerve terminals, a putative inhibitory neurotransmitter. Arginine can be converted to ornithine (reaction 1), which in turn gives rise to glutamate (reactions 2 and 4), proline (reactions 2 and 3), and GABA (reactions 2, 4, and 5).

GAD requires pyridoxal phosphate (PLP), a form of vitamin B6, as a coenzyme (Roberts et al., 1964). Dietary forms of vitamin B6 are absorbed and converted efficiently in tissues to PLP, which is synthesized in brain from ATP and pyridoxal. PLP can readily be removed from the enzyme protein of GAD causing loss of enzyme activity, and the lost enzymatic activity can be restored simply by the addition of the coenzyme. Pyridoxine-deficient animals show a decrease in the degree of saturation with the coenzyme of the enzyme protein of cerebral GAD, but no decrease is found in the content of enzyme protein in the deficient animals. Brain GAD activity is restored rapidly to normal on feeding of pyridoxine to deficient animals. Pyridoxine deficiency, however produced, results in a susceptibility to seizures in animals, including humans, probably because of decreased ability to make GABA. Seizures in an infant with a simple dietary deficiency of vitamin B6 were abolished completely almost immediately after intramuscular injection of pyridoxine. This indicates that in a normal individual there is an extremely rapid conversion of pyridoxine to pyridoxal phosphate, association of the coenzyme with the apoenzyme of GAD, and formation of GABA in nerve terminals. Hydrazides and other carbonyl-trapping agents react with the aldehyde group of PLP and decrease its availability as a coenzyme. The seizures that result when such agents are administered are partially attributable to the decreases in the amounts of releasable GABA in nerve terminals of inhibitory nerves.
The inhibited nervous system: a global view of GABAergic function (Roberts, 1976, 1986b, 1991)
Perhaps the subject of neural inhibition had lain dormant for so many years because there was no material basis for it. Inhibitory neurons had not been identified, an inhibitory neurotransmitter had not been isolated and characterized, and postsynaptic sites for neural inhibition had not been shown. It is well to remember that it was not until 1952 (Eccles, 1982), two years after the discovery of GABA in brain, that the controversy as to whether synaptic transmission in the CNS is largely electrical or chemical in nature was settled in favor of the latter. It also was 3 years before modern molecular biology was begun by Watson and Crick (Watson and Crick, 1953).

GABA increases the permeability of membranes to specific ions in such a way as to cause the membranes to resist depolarization. For example, by acting on a particular class of receptors (GABAA), GABA produces an increase in permeability to Cl- ions that is measured as an increase in membrane conductance. GABA also produces increases in K+ conductance by action on another distinct class of receptors (GABAB) that are not colocalized with GABAA receptors. In general, GABA accelerates the rate of return of the resting potential of all depolarized membrane segments that it contacts and stabilizes undepolarized membrane segments by decreasing their sensitivity to stimulation. Thus, at many sites in the nervous system, GABA exercises inhibitory command-control of membrane potential. In this way this naturally occurring inhibitory transmitter can counteract the depolarizing action of excitatory processes to maintain the polarization of a cell at an equilibrium level near that of its resting value, acting essentially as a chemical voltage clamp. In most instances studied, GABA has been shown to exert hyperpolarizing or inhibitory effects by this mechanism. However, if high intracellular Cl- concentrations should occur, GABA can produce a decrease in membrane potential or depolarization. Data now suggest that the benzodiazepines (e.g., Valium) and barbiturates exert their pharmacologic effects largely by reacting with components of the GABAA receptor complex, thereby enhancing the efficacy of neurally released GABA.
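The hyperpolarizing-versus-depolarizing outcome described above follows from the Nernst equation for the chloride equilibrium potential. Here is a minimal sketch; the concentration values are illustrative assumptions on my part, not figures from the text:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 310.15   # body temperature, K (37 C)

def nernst_mV(out_mM, in_mM, z):
    """Equilibrium (reversal) potential of an ion, in millivolts."""
    return 1000.0 * R * T / (z * F) * math.log(out_mM / in_mM)

# Assumed chloride concentrations (mM); z = -1 for Cl-
E_low  = nernst_mV(130.0, 7.0, z=-1)   # low intracellular Cl-: about -78 mV
E_high = nernst_mV(130.0, 30.0, z=-1)  # high intracellular Cl-: about -39 mV

# With a resting potential near -65 mV, opening GABA-A channels drives the
# membrane toward E_Cl: hyperpolarizing in the first case, depolarizing in
# the second, as the text describes.
```

This also illustrates the "chemical voltage clamp" idea: increasing Cl⁻ conductance pins the membrane near E_Cl, resisting depolarization by excitatory inputs.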

GABA is inactivated at synapses by a mechanism that involves attachment to unique membrane recognition sites, different from those for the receptor, and subsequent removal from the synaptic junction by a Na+- and Cl--dependent transport process that is similar in principle to that used for transport of many other substances. The removal of synaptically released GABA takes place by reuptake into terminals of neurons and into glial processes that invest the synapses.
Figures 3 and 4:
http://scholarpedia.org/w/images/a/a5/GABA_Roberts_fig2.jpg
http://scholarpedia.org/article/File:GABA_Roberts_fig3.jpg

The ubiquity and extent of immunocytochemically visualized presynaptic endings of inhibitory GABAergic neurons on various structures in the vertebrate nervous system are striking. The impression is that of looking at a highly restrained nervous system (Figure 3 and Figure 4). In coherent behavioral sequences, innate or learned, preprogrammed circuits are released to function at varying rates and in various combinations. This is accomplished largely by the disinhibition of pacemaker neurons whose activities are under the dual tonic inhibitory controls of local-circuit GABAergic neurons and of GABAergic projection neurons coming from neural command centers. According to this view, disinhibition is permissive, and excitatory input to pacemaker neurons serves mainly a modulatory role.

Disinhibition, acting in conjunction with intrinsic pacemaker activity and often with modulatory excitatory input, is one of the major organizing principles in nervous system function. For example, cortical and hippocampal pyramidal neurons are literally studded with terminals from inhibitory GABAergic neurons. Not only are the endings of the local-circuit GABAergic aspinous stellate neurons densely distributed around the somata and dendrites of the cortical pyramidal cells, but they are also located on initial axon segments, where they act as frequency filters. In addition, GABA neurons have terminals from other GABAergic neurons impinging on them. Pyramidal cells are tightly inhibited by local-circuit inhibitory neurons that may themselves be inhibited by the actions of other inhibitory neurons in such a way that disinhibition of the pyramidal neurons occurs. Local-circuit GABAergic neurons also participate in processes that result in feedforward, feedback, surround, and presynaptic inhibition and presynaptic facilitation.

Both inhibition and disinhibition play key roles in information processing in all neural regions. Normally, the principal cells in particular neural sectors may be held tightly in check by constant tonic action of inhibitory neurons. Through disinhibition, neurons in a neural sector may be released to fire at different rates and sequences and, in turn, serve to release circuits at other levels of the nervous system. Communication among neural stations and substations may take place largely by throwing of disinhibitory neural switches. This may be the way information flows from sense organ to cerebral sensory area, through associative areas to the motor cortex, and by way of the pyramidal paths to the final motor cells of the medulla and spinal cord.
GABA and diseases of the CNS
Defects in coordination between the GABA system and other neurotransmitter and modulator systems may involve a local brain region, several brain regions, or the entire CNS. Enhanced synchrony of neuronal firing (e.g., in seizures) may arise in several ways: increased rate of release of synaptic excitatory transmitters, blockade of inhibitory transmitter receptor mechanisms, desensitization of receptors to inhibitory transmitters, decreased availability of inhibitory transmitter, decreased activity of inhibitory neurons, and increased formation or activation of electrotonic (gap) junctions. Immunocytochemical studies of the sensorimotor cortex in experimental epilepsy in monkeys showed highly significant reductions in numbers of GABAergic terminals at electrographically proved epileptogenic sites of alumina gel application. Electronmicroscopic observations showed a marked loss of axosomatic synapses on the pyramidal cells and a replacement of synaptic appositions with astrocytic processes in the alumina cream-treated animals. However, the asymmetric, presumably excitatory synapses on the dendrites of these pyramidal cells appeared to be largely intact. Comprehensive biochemical studies complementary to the morphologic ones showed a significant correlation with seizure frequency only with losses in GABAergic receptor-related binding and decreased GAD activity. Current data support the notion that actual destruction or inactivation of inhibitory interneurons is one of the major cerebral defects predisposing to seizures, at least in the case of focal epilepsy (Roberts, 1986b). Mutations in GABAA receptors have now been shown to predispose individuals to various types of seizures (Macdonald et al., 2004).

GABA neurons play important roles in control mechanisms in various hypothalamic and brain stem centers. If their activity within these structures is compromised, abnormally enhanced responses may be observed, for example, in emotional reactivity, cardiac and respiratory functions, blood pressure, food and water intake, sweating, insulin secretion, liberation of gastric acid, and motility of the colon.

The roles of GABA neurons in information processing in various regions of the nervous system are so varied and complex that it appears doubtful that many useful drug therapies will come from approaches that are aimed at affecting one or another aspect of GABAergic function at all GABA synapses. Currently there are no drugs that are process and site specific. In this regard, the detailed molecular characterization that is being carried out of the enzymes of GABA metabolism, GABA receptors and transporters, the components of GABA receptor-associated anion channels, and the relationships among these structures and the lipidic membrane components in which they are imbedded should give rise to many opportunities for devising specific therapeutic modalities (e.g., see Roberts, 2006).
GABA, the quintessential neurotransmitter: electroneutrality, fidelity, and specificity (Roberts, 1993)
Isoelectric Points (PI) of Major Naturally-Occurring Amino Acids and Peptides in Animal Tissues (From Greenstein, J.P., Winitz, M. Chemistry of the Amino Acids, Vol. 1. New York: John Wiley & Sons, 1961, pp. 486-489).
Amino Acid pI
Aspartic acid 2.77
Glutamic acid 3.22
Cystine 5.03
Taurine 5.12
Asparagine 5.41
Phenylalanine 5.48
Homocystine 5.53
Threonine 5.64
Glutamine 5.65
Tyrosine 5.66
Serine 5.68
Methionine 5.74
Hydroxyproline 5.74
Tryptophan 5.89
Citrulline 5.92
Isoleucine 5.94
Valine 5.96
Glycine 5.97
Leucine 5.98
Alanine 6.00
Sarcosine 6.12
Proline 6.30
β-Alanine 6.90
Cysteine 6.94
Homocysteine 7.05
γ-Aminobutyric acid 7.30
Histidine 7.47
δ-Amino-n-valeric acid 7.52
ϵ-Amino-n-caproic acid 7.60
1-Methylhistidine 7.67
Carnosine 8.17
Anserine 8.27
Lysine 9.59
Ornithine 9.70
Arginine 11.15

Nature’s choice of GABA as the major inhibitory neurotransmitter is an example of evolutionary optimization. Alone of the known neurotransmitters, GABA is an electroneutral zwitterion (isoelectric point, 7.3) at physiologic pH, the ionization constants for both its amino and carboxyl groups being sufficiently far removed from neutrality so that shifts of pH in the physiologic range produce little change in net charge (Table 1). This endows GABA with a capacity for higher fidelity of information transmittal than that of other known major neurotransmitters, enabling it, in “stealth” fashion, to escape the charged minefields encountered in passage through the dense extracellular environment lying between presynaptic sites of release and postsynaptic sites of action. Coordinate enhancement with progressive acidification occurs in GABAergic inhibitory function because GABA formation and its anion channel-opening efficacy are increased while its metabolic destruction by transamination and removal by transport are decreased. Diminution of GABAergic inhibitory function occurs on alkalinization. Contrariwise, acidification decreases postsynaptic efficacy of glutamate, the major excitatory neurotransmitter, and alkalinization increases it.
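The claim that GABA's net charge barely moves across the physiologic pH range can be checked with the Henderson-Hasselbalch relation. A rough sketch follows; the pKa values are approximate literature figures assumed for illustration, not given in the text:

```python
def net_charge(pH, pKa_carboxyls, pKa_aminos):
    """Net charge of a small molecule: each amino group contributes +1
    while protonated, each carboxyl group -1 once deprotonated."""
    plus = sum(1.0 / (1.0 + 10.0 ** (pH - pKa)) for pKa in pKa_aminos)
    minus = sum(1.0 / (1.0 + 10.0 ** (pKa - pH)) for pKa in pKa_carboxyls)
    return plus - minus

# Approximate pKa values for GABA: carboxyl ~4.2, amino ~10.4, giving an
# isoelectric point near (4.2 + 10.4) / 2, i.e. the 7.3 cited in the table.
def gaba_charge(pH):
    return net_charge(pH, pKa_carboxyls=[4.23], pKa_aminos=[10.43])

# Across the physiologic range the net charge stays essentially zero:
q_70, q_74 = gaba_charge(7.0), gaba_charge(7.4)
```

Because both pKa values sit roughly three units away from physiologic pH, both groups stay almost fully ionized, so small pH shifts change the net charge only in the third decimal place, which is the quantitative content of the "electroneutral zwitterion" argument above.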

In this manner the delicate balance between excitation and inhibition in the brain is maintained within the adaptive range in response to local or global activity that acidifies the environment in which it occurs. Accelerated metabolism after nerve activity results in accelerated formation of carbon dioxide and lactic acid; the accompanying acidification applies physiologic “brakes,” so to speak, preventing structural and functional damage from taking place. When GABAergic-glutamatergic relations are unbalanced by glutamatergic overactivity, seizures may occur. For example, the excitement experienced at an athletic event with the attendant hyperventilation and consequent alkalinization not infrequently causes seizures in susceptible individuals. Overbalancing in favor of the GABA system can lead to maladaptive decrement in neural activity and even to coma.

The properties of the simple GABA molecule itself, and of the machinery built to support its function, make it eminently suitable to guide the brain in a “civilized” manner. The yin-yang relationship between the glutamatergic excitatory and GABAergic inhibitory systems is played out on the tightrope of a delicate balance, and imbalances between them lead to serious disorders.

No α-, β-, or ω-amino acid known to occur in any abundance in animal tissues approaches GABA in molar efficacy at the GABAA receptor. Therefore, the noise level created by nonspecific effects at the GABAA receptor is minimal, ensuring quantitative fidelity of the neural messages delivered by GABA.

The “charm” of GABA lies in nature’s choice of this simple molecule, made from the common metabolic soil of glutamic acid, for the all-important role as major controller of the infinitely complex machinery of the brain, allowing it to operate in the manner best described as freedom without license. Try as one might, one cannot come up with a better choice for the job (Roberts, 1991, 1993).

And also:
dugdeep:
Quick web search brings this:
Quote from: http://misionmiranda.com/cheap-xanax/xanax-pharmacology-mechanism-of-action.html

Mechanism of action

Alprazolam (generic Xanax) chemically enhances the action of the human body's GABA (gamma-aminobutyric acid). GABA is the nervous system's primary inhibitory neurotransmitter, found in the brain and spinal cord. GABA tells neurons to slow down. About 40% of the millions of neurons all over the brain respond to GABA.

Neurotransmitters enable brain cells to transmit impulses from one to another. Impulses are released from brain cells by electrical signals. GABA tells the neurons of the brain to slow down or stop working. This means that GABA has a general quietening influence on the brain: it is in some ways the body's natural hypnotic and tranquillizer. This natural action of GABA is augmented by benzodiazepines, which thus exert an extra (often excessive) inhibitory influence on neurons, for a general calming and quieting effect on the brain.

Alprazolam enhances the activity of GABA but does not increase the nervous system's biological synthesis of GABA. Generic Xanax is more than 80% protein bound and is absorbed by the human system. Its benzodiazepine-based metabolites are excreted primarily in urine, and also via sweat, saliva, faeces and breast milk.

Alprazolam acts as an anti-depressant in the human nervous system by controlling anxiety attacks and panic disorders. It operates widely in the brain, reducing emotional reactions, fear, tension, memory, thinking, control of consciousness, muscle tone and coordination within the nervous system.


So it sounds like GABA may be a good thing to try. I also found this site that talks about a few herbs that affect GABA receptors in the same way alprazolam does.

Quote from: http://www.webnat.com/articles/Neurotransmitters.asp

GABA (gamma-aminobutyric acid) is best known as an inhibitor of presynaptic transmission; that is, it keeps the brain from being "trigger happy". When in balance, GABA prevents anxiety and increases mental clarity. Anxiolytic drugs of the benzodiazepine family (Valium, Xanax) work off of the soothing effects of the GABA receptor response. All three herbs in HVP (hops, valerian and passion flower) are known to bind to GABA receptor sites, making this an excellent formula.

Other herbs that affect the GABA receptor sites in a similar way include kava kava and lemon balm. The formula GABA Plus combines GABA with the amino acid glutamine (which is used to create another inhibitory neurotransmitter called glutamic acid; glutamic acid is also the precursor to GABA). In addition to passion flower and Spirulina, GABA Plus helps calm overactive nerve functions and may be useful for reducing hyperactivity, seizures and nervous tics.
Hope that helps :)
https://cassiopaea.org/forum/index.php/topic,28596.msg364235.html#msg364235
And:
Neurotransmitters and Ion Channel Regulation
One hypothesis for KD action involves changes in the levels of certain neurotransmitters (NT), as a result of altered synthesis and/or clearance from the synaptic cleft. The production of the major excitatory NT glutamate is paradoxically linked to the synthesis of the main inhibitory NT GABA via the action of the biosynthetic enzyme for GABA, glutamate decarboxylase (GAD). The KD has been proposed to alter the metabolism of glutamate – in response to ketosis – resulting in increased levels of GABA and enhanced inhibitory neurotransmission.
https://cassiopaea.org/forum/index.php/topic,28799.msg495537.html#msg495537

And obyvatel:

Quote from: go2 on July 15, 2011, 11:46:04 pm
Quote from: obyvatel


So it seems that the human capacity to immobilize and enter a deep state of relaxation without fear opens up connections to energies which are normally inaccessible. This state can be guided with a conscious intent to align specifically with STO energies and this is what the EE/POTS combination is achieving. Maybe it is self-evident - but I thought I would mention it nonetheless.


Wow! Polyvagal immobilization can connect the higher emotional center? The self-other relationship functions of the emotional center can access the higher realms when in a state of immobilization. Is this what you see, obyvatel?


I think there is correlating information to indicate that the human capacity to immobilize in a relaxed conscious state without fear opens one up to energies not normally accessed. Higher emotional center access perhaps happens by the same process, but there are other options too. From what we know about sex (another instance of this type of polyvagal immobilization) - that the human body dowses similar to a dead body after sex and the energy is said to drain straight to 4D STS - that certain techniques (tantric) are promoted as means of spiritual enlightenment - seems another option with a different flavor. Hypnosis could fall under the category of polyvagal immobilization too - at least that is my current thinking. In hypnosis we can access information in the subconscious, including past lives, which seems to indicate we are accessing the level of soul consciousness.

In general it seems that in polyvagal immobilization, the motor and intellectual centers are (or can be) rendered quiet. This opens up the possibility of either the lower emotional center connecting with the higher emotional center and/or the sexual center working with its natural energy. Thus the portals to higher consciousness (both STS and STO) open up. This could give us a possible limited taste of what things could be like if our lower centers were already equilibrated and working properly. I say a limited taste, as today we need to put the motor and intellectual centers to sleep and make do with what our atrophied emotional center can do for accessing this state, while if through the Work we can develop the lower centers and make them work properly, we could be open to higher consciousness with our full set of faculties. At least that is my current understanding, which may or may not be accurate.
https://cassiopaea.org/forum/index.php/topic,23603.0.html

And:

Alchemy

Alchemy can be defined as an allegorical description of the human chemical factory and its work in transforming coarse substances (base metals) into finer ones (precious ones).

Alchemy is both an art and a scientific discipline that predates modern chemistry. In fact, modern chemistry descends DIRECTLY from old alchemy.

This separation of the fine from the coarse, the separation of the light from the dark, relates to an inner process of purification, a liberation from impurities made possible by the conscious surrendering of our outwardly directed self-will (a passive negation) to a more conscious, inwardly directed essential affirmation (that is, a conscious affirmation which serves as the spiritualizing factor), resulting in the purification of our own existence, both inwardly and outwardly, resulting in the purification of our essential nature (spiritual purity). This process is the great struggle, and the outcome of this struggle is described by Meister Eckhart as the "Everlasting Birth of Christ in the human soul".

The discipline of Alchemy relates to how an individual can use the human biological machine as a chemical factory to transform one kind of matter, that is, the coarser ones, into finer ones. These material transformations are but the reflections of something happening on a much deeper level, mirroring the inner evolution of the human soul. These alchemical transformations within the human biological machine are but an existential representative effect, and not the cause, of an inner evolution of the essential self.

Alchemy involves the process by which an individual refines the different substances within their human chemical factory, combining, separating and transforming these substances by various means over a very long period of time, through the use of a slow heat generated by the application of an unwavering attention directed onto the unconscious manifestations of their own mechanical nature. This produces a chemical and electrical mutation within their human biological machine, a mutation most likely representing itself even down to the level of the DNA molecule itself, which is all but a reflection of a much deeper mutation occurring within their essential self, all of which makes it possible for them to serve as an energy transducer for the unhindered flow of the creative force into the existing world.

[Note: Any changes in the essential self, by the use of certain psychological and emotional work, may cause electrical changes in various sectors of the brain and nervous system stimulating muscle and nervous system interactions releasing hormones that can alter DNA by affecting the permeability of the neural membranes or by altering the balance or composition of neurotransmitters, thus "turning on or off" DNA.]

Although Alchemy is applicable on ALL levels, that is, applicable on the material, genetic, psychic, and spiritual levels, it is very important to note that alchemy is NOT a way of producing a change in the human biological machine, but rather a way of noting the changes in the biological machine when they occur, due to an inner transmutation of the essential self. The changes of the latter are the generatrix of the former and NOT the other way around. Much disinformation is out there because there is an exclusive focus on the material aspects of alchemy, overlooking its essential aspects.

Relating to the paragraph above, the great alchemist Fulcanelli said in his book The Dwellings of the Philosophers:

"And so, I beg those who will read this little book to credit my words. I say to them once more, that they will never learn this sublime science by means of books, and it can only be learned through divine revelation, hence it is called Divine Art."

In his book, The Secret Teachings of All Ages, Manly P. Hall spoke of Alchemy as follows (pp. 498-499):

"Alchemy is the science of multiplication and is based upon the natural phenomenon of growth. "Nothing from nothing comes" is an extremely ancient adage. Alchemy is not the process of making something from nothing; it is the process of increasing and improving that which already exists.

[...]

"God is the "within" and the "without" of all things. The Supreme One manifests Himself through growth, which is an urge from within outward, a struggle for expression and manifestation. There is no greater miracle in the growing and multiplication of gold by the alchemist than in a tiny mustard seed producing a bush many thousands of times the size of the seed. If a mustard seed produces a hundred thousand times its own size and weight when planted in an entirely different substance (the earth), why should not the seed of gold be multiplied a hundred thousand times by art when that seed is planted in its earth (the base metals) and nourished artificially by the secret process of alchemy?

"Alchemy teaches that God is in everything; that He is One Universal Spirit, manifesting through an infinity of forms. God, therefore, is the spiritual seed planted in the dark earth (the material universe). By art it is possible so to grow and expand this seed that the entire universe of substance is tinctured thereby and becomes like unto the seed--pure gold. In the spiritual nature of man this is termed regeneration; in the material body of the elements it is called transmutation. As it is in the spiritual and material universes, so it is in the intellectual world. Wisdom cannot be imparted to an idiot because the seed of wisdom is not within him, but wisdom may be imparted to an ignorant person, however ignorant he may be, because the seed of wisdom exists in him and can be developed by art and culture. Hence a philosopher is only an ignorant man within whose nature a projection has taken place.

[...]

"That which is true in the superior is true in the inferior. If alchemy be a great spiritual fact, then it is also a great material fact. If it can take place in the universe, it can take place in man; if it can take place in man, it can take place in the plants and minerals. If one thing in the universe grows, then everything in the universe grows. If one thing can be multiplied, then all things can be multiplied, "for the superior agrees with the inferior and the inferior agrees with the superior."

Rodney Collin Smith said the soul can be viewed as the cumulative sum total of all conscious moments that one has experienced throughout one's life. Working from this definition we can say that the physical process of alchemical transformation which takes place within the physical organism is cumulative also, since it is but a reflection of the cumulative moments of consciousness of the essential self. Thus, if this is true, then the alchemical process does not stop once it begins. It continues as long as the essential self becomes more conscious of itself. If the essential self falls asleep and temporarily gives up the struggle to be more conscious, then the process does not reverse itself; it simply ceases to continue.

The process of transformation will simply be reactivated at a later time when consciousness is present and more active. Assuming reincarnation exists, then it is possible that the cumulative efforts of the previous life to become more conscious are carried over into the next life and these results are represented within the very nature of the new physical vehicle, down to the molecular level, since, as already stated, the existential is but a reflection of the essential.
 
Something interesting:
In man, individual reasoning and separate personal choice became possible. As a result of one conscious and profound choice - to cut himself off from spontaneous dependence on God, live separate in unalterable pride - his group mind did not merely degenerate, with something analogous to sloth, from the form and behaviour design of his species. It split away. Both survive in him, the latter maintaining his ineradicable sense that, like it or not, some behaviour patterns should be followed and others rejected; that, in human parlance, some actions are right and some are wrong. Different cultures, different modes of upbringing, different creeds may modify his judgements as to which are which; but nothing will alter his conviction that the two sorts exist. As has already been noted, brain washing itself can do no more than reorientate conscience; it cannot destroy it.

The group mind, the collective unconscious, tainted by the ancient and continuing experience of evil, seeps into the being of every individual self, a self which is simultaneously drawn towards the original design for its species. The constant tension between the two is eased by the formulation of legal codes; the law of nature for the species is projected and rationalized in natural law, external, rigid, easy to recognize, hard to follow.

The Incarnation fulfilled and transcended with joy this projected, external law, showing it to be not a series of arbitrary decrees but a means to an end, and bringing into being a group mind renewed, spontaneous, free, in which the individual reborn could ultimately achieve that end, carry out the purpose for which he and his species were made; to know, love and serve God and to enjoy His Presence for ever. Psi would still link each human being with his contemporaries and his forebears, with the collective unconscious of mankind, whose impulses, good, bad, and neutral, would flow into him as before. But it could also link him with the new group, with the springs of grace welling up through those who were members of one another in the Mystical Body of Christ. Through that individual, moreover, the impulses of the new group could flow back (without his necessarily knowing it), into the old, changing its colouring, revitalizing it deep below the level of consciousness. Hence the overwhelming importance for all mankind of those dedicated to contemplative prayer. Hence the stress and exhaustion and glory of their calling.

It has been argued that in animal species the psi-factor is to be found at work in instinctive processes subserving reproduction. Ancient tradition and modern observation can both be adduced to support the hypothesis that in man too there is a close connexion between psychical and sexual activity.

One aspect of this is to be seen in the insistence upon virginity, chastity, or at the very least upon periodical spells of continence in those who deliberately attempt to cultivate and to use the psi-function, as in the traditional employment of a child under the age of puberty for scrying in the inkpool or the crystal.

If sexual activity inhibits the workings of psi, so also it seems can psi-activity inhibit biological processes connected with reproduction. Reference has already been made to Ronald Edwin's remark that during the long periods in which his clairvoyant powers were at their height he was completely free from physical desire. Women clairvoyants and mediums have noticed that in similar circumstances the menses do not occur.
The connexion between psi-activity and matter, living or inert, is just as difficult to explain and even harder to accept. Where what is called "psychic healing" is concerned, it is clear that psycho-somatic processes initiated by very powerful suggestion are at work; but how odd it is that one mind should be able so powerfully to impress another that a suggestion should be transmitted to the autonomic nervous system which coordinates and determines, deep below the level of consciousness, even of sensation, all the non-voluntary activities of the body. Inedia, the power to do without food for very long periods, of which there are reliable records both among ordinary people and among those dedicated to a religious life[6]; levitation; and the sudden loss of weight said to have been observed during the course of séances in mediums specializing in physical phenomena: all these are odd enough, but do at any rate concern the reactions of the living body-mind, in which consciousness and energy co-exist and are correlated in everyday experience.
And so back, full circle, to the body again: the body, first means of perception, continual source of that imagery in which its owner may become fully aware of his own intuitions: the body, the living symbol through which his feelings, his desires, his whole self may be imaged to the world in varying degrees, from the blush to the stigmata, from the sweat of fear to the odour of sanctity,[5] from jumping for joy or trembling with fright to the phenomenon of levitation: the body, generator of that energy which can be used to subserve either the reproduction and survival of the species, or psi-activity.
The oddest and most unpalatable of all aspects of psi lies in its peculiar relationship to time as we experience it. It is not too difficult, to be sure, to allow that retro-cognition (otherwise post-cognition) may occur; perhaps because the process of remembering one's own past, so vividly there, though invisible, provides a familiar analogy, perhaps because one can think of a film being played over and over again, or can imagine say the "ghost" of Lady Macbeth re-enacting for ever, in a timeless repetitive automatism, the guilty gesture of attempting vainly to wash her hands. It is possible moreover to talk - and to be understood - about a place being "full" of a certain emotional atmosphere to whose survival everyone conscious of it will contribute more vitality. And so on. Though these are no more than analogies, they afford a comfortable feeling that something points towards a rational explanation, an explanation that may one day be made completely plain.

But there is no ordinary experience, no convincing analogy to make comfortable the notion that people are occasionally aware of what has not yet happened, whether it is a card one ahead of that being proposed for guessing, or a railway accident twenty years in the future. Myers' query, "Are we regarding as a stream of consequences what is really an ocean of co-existencies?"; William James' kindred notion of an eternal present; the concept of other dimensions whose effects we can observe but whose nature we can no more comprehend than the Flatlanders could comprehend the nature of a cube; Dunne's theory of a Serial Universe, with a time behind our time, and another behind that, and so on to infinity: all these weave a discreet curtain of speculation over the skeleton in the cupboard, but in no way ease acceptance of the fact that it is there, or not at any rate by the non-mathematical type of mind. This may however be given some inkling of what is meant by considering instead of Dunne's mathematical arguments and symbols his likening of consciousness to a searchlight exploring a four-dimensional world.

There are those who find the small "time-displacement" phenomenon to be seen in the precognitive card guesser less strange than the long-range prophecies which are occasionally recorded and verified. This is not simply because the irregularity is "only a very little one", but because there is not such a vast multiplicity of causes and choices at work between the seer and the seen.
It is interesting that Werner Heisenberg[7] should maintain the existence of two types of causality, which seem to operate the one on a large scale, the other on a small. In the former, a strict determinism prevails. In the latter, there is "potentiality", the possibility of unpredictable, spontaneous activity (and even, at the level of nuclear physics, of "time reversal", of a situation in which it is not possible to know which of two events is the cause of the other and which the effect). He contends moreover that "the causality governing Man is of the weaker type, and he embodies both mechanical fate and potentiality". In large-scale matters, sociologists will be justified in basing their plans on the assumption that a number of developments are predetermined, in personal matters the individual can pray for grace to help him make a free and a right choice.
If that individual becomes aware, through hunch or dream or warning "voice", of some disaster to which present causes outside his control are building up, he may be free to choose, ahead of schedule as it were, whether he will be involved in it or trust his intuition, even if this brings him temporarily into ridicule. The odd way in which some trivial veridical detail of the achieved disaster - a newspaper headline, a soldier's uniform - may present itself to him with especial clarity can be paralleled by the common experience of remembering just such details of past events, details charged with some strong emotional significance not always understood.

I do not myself find this attempt to interpret the process involved in precognition very satisfying, for precognitive experiences are usually so vividly of something "given", independent, existing in its own right, and containing chunks of objective and, so to speak, irrelevant data, which neither telepathy as to present circumstances nor intuition as to their outcome could be expected to provide. It should, however, be considered as well as the mathematical theories which have been put forward, and perhaps in connexion with them.

There are two other hypotheses in which time and causality are involved. Both are characterized by a careful, painstaking avoidance of what would seem to any theist to be the obvious explanation of the problem involved. One seems to imply that occasional reversals of the temporal process can occur, as if part of a stream were, here and there, to be diverted and pumped at high pressure round a looped channel to flow backwards into the main current. This was first formulated nearly sixty years ago by a French thinker, Gabriel Tarde, in an article entitled "The Effect of Future Events".[8] "Purposiveness," he wrote, "plays a role in the phenomena of life perhaps more important than that of heredity. The embryo is explained by the adult creature. Evolution ... shows changes occurring not at random but apparently in accordance with a directing idea ... when several events converge towards one important event, this future event has exerted an influence" (my italics) on the present.
Perhaps the contemporary blaze of Bergson's thought so dazzled Monsieur Tarde that he did not realize that he was seeing, and reiterating in other words, Aristotle's theory of teleology. Perhaps it was simply such a distaste for the whole concept as, according to Dr Denis Hill,[9] leads medical students who have been conditioned to consider disease in terms of strict physical causation to shy away from psychiatry as "non-scientific". Was it however an emotional reaction against some all too anthropomorphic concept of the Divine, or an assumption that there could be no contact between the disciplines of science and of theology that made him construct a very difficult theory of time rather than accept the fact that "directive ideas" exist not in the void but in a mind; that forced him and his followers to postulate an abstract "purposiveness" working itself out in biological development rather than realize that purposiveness and purposes are not autonomous entities, but can only be conceived, held, and co-ordinated in an Intelligence alive and aware of Itself and of Its own activities?
The other hypothesis is that put forward by Jung in the theory of synchronicity which is, as has already been noted, an attempt to reconcile with what he calls the iron law of causality the occurrence not only of psi-phenomena but also of significant coincidences, events in the outer world which suddenly express, symbolize or perfect psychological developments in this or that individual. He instances an occasion on which he was carefully explaining to a worried patient, reluctant to accept what he said, that an insect of which that patient kept dreaming was a scarab, the Egyptian symbol of rebirth and renewal. His remarks were met with a sort of glum resistance until suddenly there flew into the room where they were sitting the nearest European equivalent of the scarab, a glowing green and gold rose beetle. It settled before them, and the arrival of this external image of his own thought in some odd way jolted the patient into realizing its meaning, and set him on the way to recovery.

Where Tarde wrote of purposiveness, directive ideas, and the effect of future events to explain what he had observed, Jung uses the word Tao to indicate the source alike of temporal processes and of synchronicity: Tao, the self-caused, self-existent activity, which cannot be contained in human thought. It is clear from his other work that the Name of God carries for him at best the definition of an archetypal idea at work in the psyche and at worst the sense of a threatening, capricious, and implacable tyrant.

For those who associate that Name with "the unfathomable mystery, the incomprehensible Being" of whom Bede Griffiths writes in The Golden String, and with that passage from St Augustine quoted there which broods upon the Power "most secret, most present, most beautiful, most mighty ... ever in action and ever quiet ... upholding, filling and protecting, creating, nourishing and perfecting all things" there will be no difficulty in accepting that the continuous glory of creation in matter, life and consciousness, and our power of recognizing it by reason and intuition, in time and in eternity, spring all alike from God.
https://cassiopaea.org/forum/index.php/topic,803.msg4268.html#msg4268
 
Let's take a musical interlude in this post
https://www.youtube.com/watch?v=SBenttZUT7A


The Song Was Written Impossible For Human But She Nailed It:
- she's not human. :thdown:
- the song was written possible. :violin:
- the song was written impossible for the most human except her... :jawdrop:

The song was written impossible for human. But she nailed it... after ...

https://www.youtube.com/watch?v=_x-NaT6XOwk
 
After that interlude, let's take a close look at some beans:

Deduction.

Rule: All the beans from this bag are white.
Case: These beans are from this bag.
∴ Result: These beans are white.

Induction.
Case: These beans are [randomly selected] from this bag.
Result: These beans are white.
∴ Rule: All the beans from this bag are white.

Hypothesis.

Rule: All the beans from this bag are white.
Result: These beans [oddly] are white.
∴ Case: These beans are from this bag.
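The three bean syllogisms above can be sketched as a minimal Python illustration; the function names and return strings are mine, not Peirce's, and each function just encodes the direction of one inference pattern:

```python
def deduce(rule_all_white, from_this_bag):
    """Deduction: Rule + Case -> Result (truth-preserving)."""
    if rule_all_white and from_this_bag:
        return "these beans are white"

def induce(from_this_bag, sample_colors):
    """Induction: Case + Result -> Rule (a generalization, not guaranteed)."""
    if from_this_bag and all(c == "white" for c in sample_colors):
        return "all the beans from this bag are white"  # tentative rule

def abduce(rule_all_white, observed_color):
    """Abduction: Rule + Result -> Case (one plausible explanation)."""
    if rule_all_white and observed_color == "white":
        return "these beans are from this bag"  # hypothesis, not certainty
```

Only `deduce` guarantees its conclusion given its premises; `induce` and `abduce` return conclusions that may later be revised, which is exactly the distinction the three bean patterns are meant to show.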

Deduction, induction, and abduction

Deductive reasoning (deduction):
Deductive reasoning allows deriving b from a only where b is a formal logical consequence of a. In other words, deduction derives the consequences of the assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. For example, given that "Wikis can be edited by anyone" (a1) and "Wikipedia is a wiki" (a2), it follows that "Wikipedia can be edited by anyone" (b).

Inductive reasoning (induction):
Inductive reasoning allows inferring b from a, where b does not follow necessarily from a. a might give us very good reason to accept b, but it does not ensure b. For example, if all swans that we have observed so far are white, we may induce that the possibility that all swans are white is reasonable. We have good reason to believe the conclusion from the premise, but the truth of the conclusion is not guaranteed. (Indeed, it turns out that some swans are black.)

Abductive reasoning (abduction):
Abductive reasoning allows inferring a as an explanation of b. As a result of this inference, abduction allows the precondition a to be abduced from the consequence b. Deductive reasoning and abductive reasoning thus differ in the direction in which a rule like " a entails b" is used for inference.

As such, abduction is formally equivalent to the logical fallacy of affirming the consequent (or Post hoc ergo propter hoc) because of multiple possible explanations for b. For example, in a billiard game, after glancing and seeing the eight ball moving towards us, we may abduce that the cue ball struck the eight ball. The strike of the cue ball would account for the movement of the eight ball. It serves as a hypothesis that explains our observation. Given the many possible explanations for the movement of the eight ball, our abduction does not leave us certain that the cue ball in fact struck the eight ball, but our abduction, still useful, can serve to orient us in our surroundings. Despite many possible explanations for any physical process that we observe, we tend to abduce a single explanation (or a few explanations) for this process in the expectation that we can better orient ourselves in our surroundings and disregard some possibilities. Properly used, abductive reasoning can be a useful source of priors in Bayesian statistics.
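The closing remark, that abduction can supply priors for Bayesian statistics, can be sketched like this; the hypotheses and all the probability numbers are invented for illustration:

```python
# Abduced explanations for "the eight ball is moving" serve as priors,
# then Bayes' rule ranks them against the observation.
priors = {
    "cue ball struck it": 0.7,       # the explanation we abduce first
    "someone bumped the table": 0.2,
    "freak air current": 0.1,
}
likelihoods = {  # P(observation | hypothesis), also invented
    "cue ball struck it": 0.9,
    "someone bumped the table": 0.4,
    "freak air current": 0.05,
}

# P(h | obs) is proportional to P(obs | h) * P(h)
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
best = max(posterior, key=posterior.get)
```

As the text says, this does not make the cue-ball explanation certain; it only orients us toward the least surprising account of what we observed.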


Abductive validation
Abductive validation is the process of validating a given hypothesis through abductive reasoning. This can also be called reasoning through successive approximation. Under this principle, an explanation is valid if it is the best possible explanation of a set of known data. The best possible explanation is often defined in terms of simplicity and elegance (see Occam's razor). Abductive validation is common practice in hypothesis formation in science; moreover, Peirce claims that it is a ubiquitous aspect of thought:

Looking out my window this lovely spring morning, I see an azalea in full bloom. No, no! I don't see that; though that is the only way I can describe what I see. That is a proposition, a sentence, a fact; but what I perceive is not proposition, sentence, fact, but only an image, which I make intelligible in part by means of a statement of fact. This statement is abstract; but what I see is concrete. I perform an abduction when I so much as express in a sentence anything I see. The truth is that the whole fabric of our knowledge is one matted felt of pure hypothesis confirmed and refined by induction. Not the smallest advance can be made in knowledge beyond the stage of vacant staring, without making an abduction at every step.[40]

It was Peirce's own maxim that "Facts cannot be explained by a hypothesis more extraordinary than these facts themselves; and of various hypotheses the least extraordinary must be adopted."[41] After obtaining results from an inference procedure, we may be left with multiple assumptions, some of which may be contradictory. Abductive validation is a method for identifying the assumptions that will lead to your goal.
https://en.wikipedia.org/wiki/Abductive_reasoning
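The idea of the "best possible explanation ... defined in terms of simplicity and elegance" can be given a toy form: score each hypothesis by how well it fits the data minus an Occam-style penalty for complexity. The hypotheses, scores, and penalty weight below are all made up for illustration:

```python
# Toy "inference to the best explanation": fit minus a simplicity penalty.
hypotheses = [
    {"name": "cue ball struck it", "fit": 0.90, "complexity": 1},
    {"name": "hidden magnet under the table", "fit": 0.95, "complexity": 5},
]

def occam_score(h, penalty=0.1):
    """Higher is better: reward fit to the data, punish extraordinary machinery."""
    return h["fit"] - penalty * h["complexity"]

best = max(hypotheses, key=occam_score)
```

The magnet fits the data slightly better but loses on Peirce's maxim that of various hypotheses the least extraordinary must be adopted.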

Reality and truth are coordinate concepts in pragmatic thinking, each being defined in relation to the other, and both together as they participate in the time evolution of inquiry. Inquiry is not a disembodied process, nor the occupation of a singular individual, but the common life of an unbounded community.

The real, then, is that which, sooner or later, information and reasoning would finally result in, and which is therefore independent of the vagaries of me and you. Thus, the very origin of the conception of reality shows that this conception essentially involves the notion of a COMMUNITY, without definite limits, and capable of a definite increase of knowledge. (Peirce 1868, CP 5.311).

Different minds may set out with the most antagonistic views, but the progress of investigation carries them by a force outside of themselves to one and the same conclusion. This activity of thought by which we are carried, not where we wish, but to a foreordained goal, is like the operation of destiny. No modification of the point of view taken, no selection of other facts for study, no natural bent of mind even, can enable a man to escape the predestinate opinion. This great law is embodied in the conception of truth and reality. The opinion which is fated to be ultimately agreed to by all who investigate, is what we mean by the truth, and the object represented in this opinion is the real. That is the way I would explain reality. (Peirce 1878, CP 5.407).
https://en.wikipedia.org/wiki/Pragmatic_theory_of_truth



I am running a little short tonight to keep going; I have to move again tomorrow, and this time I don't know when I will have internet again.
So I will put everything here in bulk for the moment:

https://en.wikipedia.org/wiki/Semiotics

http://www.ethanhein.com/wp/2017/philip-taggs-everyday-tonality/

https://en.wikipedia.org/wiki/Music_semiology

https://fr.wikipedia.org/wiki/Charles_Sanders_Peirce

https://en.wikipedia.org/wiki/Abductive_reasoning

https://en.wikipedia.org/wiki/Peirce%27s_law

https://fr.wikipedia.org/wiki/Pragmatisme

https://en.wikipedia.org/wiki/Scientific_method#Pragmatic_model

https://en.wikipedia.org/wiki/Pragmatic_theory_of_truth

https://en.wikipedia.org/wiki/Communication_Theory_as_a_Field#Russill.2C_pragmatism_as_an_eighth_tradition

In this process we observe two sorts of elements of consciousness, the distinction between which may best be made clear by means of an illustration. In a piece of music there are the separate notes, and there is the air. A single tone may be prolonged for an hour or a day, and it exists as perfectly in each second of that time as in the whole taken together; so that, as long as it is sounding, it might be present to a sense from which everything in the past was as completely absent as the future itself. But it is different with the air, the performance of which occupies a certain time, during the portions of which only portions of it are played. It consists in an orderliness in the succession of sounds which strike the ear at different times; and to perceive it there must be some continuity of consciousness which makes the events of a lapse of time present to us. We certainly only perceive the air by hearing the separate notes; yet we cannot be said to directly hear it, for we hear only what is present at the instant, and an orderliness of succession cannot exist in an instant. These two sorts of objects, what we are immediately conscious of and what we are mediately conscious of, are found in all consciousness. Some elements (the sensations) are completely present at every instant so long as they last, while others (like thought) are actions having beginning, middle, and end, and consist in a congruence in the succession of sensations which flow through the mind. They cannot be immediately present to us, but must cover some portion of the past or future. Thought is a thread of melody running through the succession of our sensations.

We may add that just as a piece of music may be written in parts, each part having its own air, so various systems of relationship of succession subsist together between the same sensations. These different systems are distinguished by having different motives, ideas, or functions. Thought is only one such system, for its sole motive, idea, and function is to produce belief, and whatever does not concern that purpose belongs to some other system of relations. The action of thinking may incidentally have other results; it may serve to amuse us, for example, and among dilettanti it is not rare to find those who have so perverted thought to the purposes of pleasure that it seems to vex them to think that the questions upon which they delight to exercise it may ever get finally settled; and a positive discovery which takes a favorite subject out of the arena of literary debate is met with ill-concealed dislike. This disposition is the very debauchery of thought. But the soul and meaning of thought, abstracted from the other elements which accompany it, though it may be voluntarily thwarted, can never be made to direct itself toward anything but the production of belief. Thought in action has for its only possible motive the attainment of thought at rest; and whatever does not refer to belief is no part of the thought itself.

And what, then, is belief? It is the demi-cadence which closes a musical phrase in the symphony of our intellectual life. We have seen that it has just three properties: First, it is something that we are aware of; second, it appeases the irritation of doubt; and, third, it involves the establishment in our nature of a rule of action, or, say for short, a habit. As it appeases the irritation of doubt, which is the motive for thinking, thought relaxes, and comes to rest for a moment when belief is reached. But, since belief is a rule for action, the application of which involves further doubt and further thought, at the same time that it is a stopping-place, it is also a new starting-place for thought. That is why I have permitted myself to call it thought at rest, although thought is essentially an action. The final upshot of thinking is the exercise of volition, and of this thought no longer forms a part; but belief is only a stadium of mental action, an effect upon our nature due to thought, which will influence future thinking.
http://www.peirce.org/writings/p119.html

http://www.peirce.org/writings.html
 
In the last months I moved several times to different regions for work, and this year it was the first time that in each region I had to deal with toxic people, most of the time in a toxic environment. It was pretty tiring, but I held up well enough and it was also a pretty good challenge.
Until I caught a severe fever for two days, and for the first time it was a fever without a drop of sweat.
Usually with a fever I put myself in the conditions to sweat to death and I come out like new.
So this time I was like a hot dog left over the fire, and I felt my brain fry. After this deep-fried passage it was pretty hard to manage to connect two wandering neurons together. It was like riding a GP bike at high speed: at the start of the turn you lose all the electronic aids you usually rely on, and you end up counting the rolls you make in the gravel, trying to get up, wrung out and pissed off, without knowing why.

During my recovery period, my older brother Joël died at the age of 50; his body was found ten days later in the little house he had just moved into to be near my parents. It was incomprehensible, shocking, brutal and without appeal.

Apart from taking a few days for his funeral, I was still working as a tree climber.
I was taking vitamin C and a few other supplements to keep my immune system afloat, following a mainly paleo diet, with a few exceptions for festivities and other family meals.
Except that my immune system was already at its limit, and despite the benefit of being near the trees, I was getting sick without being sick. Since I was a child I have been of weak constitution, without letting go or being satisfied with it; that's why now, at the age of 44, I can consider myself in good health and fairly resistant to all kinds of weather.
And during my illnesses I usually get out of them quite well and quickly with some grandmother's recipes or simply rest.

Except this time too many things accumulated at once and I also had to take care of other people, and then my father ended up in the hospital. And for Joël's funeral, my mother decided to take the longest path, with traditional ceremonies that lasted days and nights in different places; so be it.
I let breathing problems get worse and worse, keeping me up at night, for months, and finally ended up in a clinic, almost like a fish dying out of water.
Today I am not yet jumping around everywhere again, but I am back to life and taking advantage of my convalescence to write this post, among other things.

I chose to put all this here, in this thread, because it has been more than a year since I left it; I have not fully recovered my lost links, and above all I have not yet fully recovered my ease and concentration. I miss climbing trees, but I know they won't be very far away once I am back on my feet.
There are still many floating parts in this thread. I had a somewhat blurry vision of the whole thing, and only little by little, post by post, was I getting closer to a certain idea that had once shone in me.

There's still a lot of work to do, that's true, and it's the most fun stuff to do!
 
Minus the religious connotations, very interesting information. Always wondered why the images in my telescope didn't look like those from NASA.
Frequency, Vibration, Sound, Resonance
 
Not a plug... Frequency of plants captured... I actually purchased the original 1-hour frequency-of-grass download a few months ago, but I don't see it anymore. Added it to my nighty nite playlist.
 
Hi 1peacelover, I watched your first video carefully and I really think it's a waste of time, except if you want to feel good and eat more New Agey salad again and again without learning anything in the end but "nice" noise.

About the second video: you're joking, you didn't really buy this?!?
 