Artificial Intelligence News & Discussion

A more appropriate keynote speaker could not have been chosen for the 2022 AI World Summit held in Amsterdam last month. Drawing on the vast depth of his newest book, The Matter With Things, Iain McGilchrist highlights the key elements of the left and right brain hemispheres with direct application and connection to AI – a term McGilchrist defines not as “Artificial Intelligence”, but as “Artificial Information Processing”. That statement in itself likely prompted the listeners in the crowd to think about the AI concept in a different way: activation of the right hemisphere, perhaps?



I edited and rewrote the transcription (at the end of this post), as his points are usefully thought-provoking: it is hardly a light speech; every word and sentence packs a bit of a punch.

Highlights:

On both hemispheres and their integral connection:

Our apparent ability these days to hear only what comes from the left hemisphere does not depend on the brain itself having radically changed in the last couple of centuries, though it is certainly always evolving. It's more like this: you buy a radio set and you soon find a couple of channels worth listening to. For a host of reasons, after a while you end up listening only to one. It's not the radio set that's changed, it's you. And in the case of the brain it would not even have mattered so much if we'd settled on the intelligent channel, but we didn't - we settled on the one whose value has nothing to do with truth, nor with courage, magnanimity, and generosity, but only greed, grabbing and getting, manipulation.

We need to simplify and stand apart to manipulate things to deal with the necessities of life and to build the foundations of a civilization. But to live in it we also need to belong to the living world and to understand the complexity of what it is we're dealing with.
The right hemisphere which in reality understands far more, is aware of those vast unknowns. When we're functioning well, the right hemisphere tests the left hemisphere's theory about reality against experience. But the left hemisphere's vision of a lifeless, mechanical, two-dimensional, geometric construct has been externalized around us to such an extent, that when the right hemisphere checks back with experience, it finds the left hemisphere has already colonized our reality.

Comments on the left hemisphere:
One hemisphere of the brain, the left, has evolved over millions of years to favor manipulation, grabbing, getting and controlling…

The left hemisphere is less emotionally stable, as well as less intelligent - and I mean cognitively as well as emotionally and socially - than the right.

Delusions and hallucinations are much more frequent, grosser [more coarse] and more persistent [after damage to the right hemisphere than to the left].

Without the right hemisphere to rely on, the left hemisphere is at sea. It quite literally denies the most obvious facts, it lies and makes stuff up, it doesn't know what it's talking about, and it is relentlessly, vacuously cheerful in the face of overwhelming disaster.

The left brings narrow-beam, scattered attention to bear on one detail after another, to use what is already familiar, certain, static, explicit, abstract, decontextualized, disembodied, categorized, general in nature, and reduced to its parts. All is predictable and controlled. This is an inanimate universe and a bureaucrat's dream. It is like a map in relation to the world that is mapped - useful to the degree that it leaves almost everything out, and its only value is utility.

All here is a re-presentation, which literally means “present again”, when it's actually no longer present but dead and gone.

[Referring to Dunning-Kruger effect]: The left hemisphere doesn't know what it is that it doesn't know, and so it thinks it knows everything.
The left hemisphere can often be an impediment to imagination and its value is single and simple: power. This is intellectually, morally, and spiritually bankrupt.

Comments on the right hemisphere:
The right has effectively been given sole charge of the task of understanding the whole picture. So conflicting are these goals that in humans the hemispheres are largely sequestered one from the other.

The right hemisphere sees not the re-presentation but the living presence - bringing broad, open, sustained, vigilant attention to bear on the world, it sees what is fresh, unique, never fully known, never finally certain, but full of potential. It understands all that is and must remain implicit: humor, poetry, art, narrative, music, the sacred - indeed everything we love. It understands that nothing is ever merely static and unchanging but flowing and radically interconnected; that parts are the left hemisphere's invention, and that what we are seeing as parts are already wholes at another level. This is a free world, an animate universe and a bureaucrat's nightmare. It has all the richness and unfathomable complexity of the world that the left hemisphere simply mapped.

[Referring to Dunning-Kruger effect]: The right hemisphere which in reality understands far more, is aware of those vast unknowns.


Comments in connection with AI:
AI… could be seen as replicating the functions of the left hemisphere at frightening speed across the entire globe. Since the evolutionary reason for developing left-hemisphere functions was solely to enhance power, this could indeed be seen as the ultimate logical aim of the left hemisphere. It has, after all, no sense of the bigger picture, of other values, or of the way in which context and even sheer scale and extent change everything. Every angel has his devil. What looks good in one context may be far from good in another, or when extended too far. But every devil has his angel, and full credit to AI where it has the potential to be an enormous help to us.

On quality versus quantity of life: I believe AI could help us, but how? Can't AI help us by freeing up time for us, removing mundane tasks? Of course. I.T. saves time by doing things quickly. Or does it? Bosses become rich by saving wages, but we become their new unwilling wage slaves. This is already apparent in not-so-small ways that add cumulatively to the stress of everyday life and rob it of its feel of connection with another human being - as well as robbing us of our most precious commodity: time.

Everyday life has got much worse. As machines gradually displace humans, what happens to human flourishing? What happens when reliance on machines strips us of our skills - a process already well advanced? How resilient, resourceful, skillful will we then turn out to be compared with our forebears? What about the impact on us of the loss of daily contact with human beings? Yes, a few lucky, clever ones may get jobs in I.T., but the economic drive is very simple: machines are cheaper than people, so the aim has to be to employ fewer people, doesn't it?

And what about our dignity as free individuals? Thanks to AI, can we escape the appalling prospect, already realized in China, that wherever we go, whatever we buy, whomever we're seen with, our every word, every action, the very thoughts we express in our faces - all is monitored, potentially marked down against us, and whatever freedom is still left to us parametrically curtailed accordingly? We become unpeople, non-citizens, non-humans. The only answer to this seems to be a kind of AI arms race, in which the supposed goodies use AI to head off the AI of the baddies. But even if this could happen - I can't see how, but then I'm not an AI expert - how do we know who the goodies are anymore? The WEF?

The problem with every step that increases the reach of human power is that it will sooner or later be used for evil ends. Once a pernicious regime’s AI reaches a certain level, it can effectively destroy any attempt to resist it, bringing the prospect of a totalitarianism which can have no end.

All decisions affecting humans are moral ones, and as I've argued at length, morality is not purely utilitarian and cannot be reduced to calculation. Every human situation is actually unique, and uniqueness has to do with personal history, consciousness, memory, intention, all that is not explicit that we refer to in that deceptively simple word emotion - all the experience and understanding gained through and stored in the body. All of that makes us specifically human, not machines.

Goodness stems from virtuous minds, not from following rules. While machines, so it is claimed, are getting more like humans, humans are getting more like machines. AI is there to make things happen, to give us control, but this is good only if we make progress in wisdom as fast as we make progress in technical know-how; otherwise it's like putting machine guns in the hands of toddlers.

And to me, the belief that machines could become sentient is really just the obverse face of the view that we, sentient beings, are really just machines.

The best way to destroy humanity would be to hybridize it with a machine. I do not call those who pursue this aim evil, as they may just have a failure of imagination, but the aim itself is evil, if we can call anything evil. It can only further degrade our idea of what a human life is for, and it obviously opens us to totalitarian control which knows no limit.

So can AI further the workings of the right hemisphere? I think not directly, because it's far from being a matter of different “processing”, say parallel rather than serial; but it can, by choosing its projects very carefully and positively turning away from those that will harm.

The paradox is that to succeed at AI, whose entire purpose is to give us control, you must let go of control, at least to a large extent - let go of mechanisms, bureaucracy, micromanagement, and strangulation by systems.

Whereas if you monitor and control, you will never get more than mediocrity, and we cannot afford mediocrity right now. If we are not to become ever more diminished as humans, we need people to be in control of machines, not them in control of us.

We're already calmly and quietly surrendering our liberty, our privacy, our dignity, our time, our values, and our talents to the machine. Machines will serve us well if they truly relieve us from drudgery but we must leave human affairs to humans. If not, we sign our own death warrant.

What makes life worth living is what can only be described as a resonance - an encounter with other living beings, with the natural world, and with the greatest products of the human soul - some would say with the cosmos at large. It's only in encountering the uncontrollable that we really experience the world and come fully alive.





Full Transcript:

Keynote from AI World Summit, Amsterdam, October 2022 – Iain McGilchrist

“The Matter with Things: Our Brains, Our Delusions, and the Unmaking of the World”




To be or not to be - that, I have it on the authority of a member of the Danish royal family, is the question, but not the only one. Some others are these: what is life for? Who are we anyway? And what are we doing here? In my recent book The Matter With Things I do my best to address these very questions and try to set out why we're getting things so badly wrong.

I'm not just here referring to our heinous crimes, such as poisoning the earth's oceans, destroying its forests and persecuting its indigenous peoples; fighting wars and enslaving people in the pursuit of rare earths; melting the ice caps and bringing to extinction rare and beautiful species of living things; fading the glories of the world. These are visible, but less visible perhaps is that we're making ourselves wretched. There are no two ways about it. We are more affluent than ever, but riches and power - the only point in having riches - do not make people happier. Ask a psychiatrist, or take a look at the face of Vladimir Putin, with power, alas, of life and death over millions of people, and the owner of the most expensive lavatory paper dispenser in the world. No, as affluent as we are, we are also more anxious, depressed, lonely, isolated, and lacking in purpose than we have ever been. The effect is of staggering size, and the evidence is clear: it can be shown to have little to do with diagnostic fashion. So why is this?

I suggest it is because we no longer have the foggiest idea what a human life is about. Indeed there's a sense in which we no longer live in a world at all but exist in a simulacrum of our own making. You expect me to speak of brains, and I will not disappoint you. Leaving nuance aside and condensing three decades of research and a vast body of supporting evidence into a phrase: we are now mesmerized by the least intelligent part of our brain. For reasons of survival, one hemisphere of the brain, the left, has evolved over millions of years to favor manipulation, grabbing, getting and controlling, while the other, the right, has effectively been given sole charge of the task of understanding the whole picture. So conflicting are these goals that in humans the hemispheres are largely sequestered one from the other. Our apparent ability these days to hear only what comes from the left hemisphere does not depend on the brain itself having radically changed in the last couple of centuries, though it is certainly always evolving. It's more like this: you buy a radio set and you soon find a couple of channels worth listening to. For a host of reasons, after a while you end up listening only to one. It's not the radio set that's changed, it's you. And in the case of the brain it would not even have mattered so much if we'd settled on the intelligent channel, but we didn't - we settled on the one whose value has nothing to do with truth, nor with courage, magnanimity, and generosity, but only greed, grabbing and getting, manipulation.

And no, the difference between the hemispheres is not a myth that has been debunked. That itself is a myth that requires debunking, which I've been rather successfully debunking for some while now. What definitely does need to be debunked is the bad old pop-psychology story that the left hemisphere “does” reason and language and may be a bit “dull” but is at least reliable, like a slightly boring accountant, and that the right hemisphere “does” emotions and pictures but is apt to be flighty and fey.

All this is wrong, since we now know that each hemisphere is involved in everything, and that, for the record, the left hemisphere is less emotionally stable, as well as less intelligent - and I mean cognitively as well as emotionally and socially - than the right. The right hemisphere is a far superior guide to reality. Delusions and hallucinations are much more frequent, grosser [more coarse] and more persistent after damage to the right hemisphere than to the left. Without the right hemisphere to rely on, the left hemisphere is at sea. It quite literally denies the most obvious facts, it lies and makes stuff up, it doesn't know what it's talking about, and it is relentlessly, vacuously cheerful in the face of overwhelming disaster.

You may well say: but so what? Actually, I don't care where things go on in my brain. But it does matter, hugely, because each hemisphere takes a different view of the world, and those views are not strictly compatible. And so when we reflect, philosophize, or discourse publicly, we're pretty much forced, without knowing it, to favor one take or the other. It seems that for us to be of two minds, to appear inconsistent, is a greater sin than to be consistently wrong - no room for “yes, but”, for nuance, for seeing the hidden opposite that is always there in whatever is being peddled to us.

So what are these two hemispheric visions of the world like? I hope you may recognize them from experience. The left brings narrow-beam, scattered attention to bear on one detail after another, to use what is already familiar, certain, static, explicit, abstract, decontextualized, disembodied, categorized, general in nature, and reduced to its parts. All is predictable and controlled. This is an inanimate universe and a bureaucrat's dream. It is like a map in relation to the world that is mapped - useful to the degree that it leaves almost everything out, and its only value is utility. Note that all here is a re-presentation, which literally means “present again”, when it's actually no longer present but dead and gone.

The other, the right hemisphere, sees not the re-presentation but the living presence - bringing broad, open, sustained, vigilant attention to bear on the world, it sees what is fresh, unique, never fully known, never finally certain, but full of potential. It understands all that is and must remain implicit: humor, poetry, art, narrative, music, the sacred - indeed everything we love. It understands that nothing is ever merely static and unchanging but flowing and radically interconnected; that parts are the left hemisphere's invention, and that what we are seeing as parts are already wholes at another level. This is a free world, an animate universe and a bureaucrat's nightmare. It has all the richness and unfathomable complexity of the world that the left hemisphere simply mapped.

These two ways of seeing the world are each vital to our survival. We need to simplify and stand apart to manipulate things to deal with the necessities of life and to build the foundations of a civilization. But to live in it we also need to belong to the living world and to understand the complexity of what it is we're dealing with. This division of attention works to our advantage when we use both. However, it is a handicap - in fact it is a catastrophe - when we use only one. As I explained in The Master and His Emissary, twice in the history of the West, in ancient Greece and then in Rome, a civilization started out with an extraordinarily fruitful harmony of left and right. But as it overreached itself, it moved ever further towards the left hemisphere's take on the world before collapsing. And the same trajectory is now being pursued for the third time. After the miraculous outpouring of creativity in the arts, science, society, and philosophy that we call the Renaissance, our civilization has moved since the Enlightenment, drunk with the arrogant belief that it knows everything and can fix everything, further and further towards the left. We are now like sleepwalkers, whistling a happy tune as we amble towards the abyss.

There is a phenomenon in psychology called the Dunning-Kruger effect, which means that the less you know, the more you think you know, and vice versa. The left hemisphere doesn't know what it is that it doesn't know, and so it thinks it knows everything. The right hemisphere, which in reality understands far more, is aware of those vast unknowns. When we're functioning well, the right hemisphere tests the left hemisphere's theory about reality against experience. But the left hemisphere's vision of a lifeless, mechanical, two-dimensional, geometric construct has been externalized around us to such an extent that when the right hemisphere checks back with experience, it finds the left hemisphere has already colonized our reality - at least for those of us who lead a modern western urban life.

It finds a perfect simulacrum of the world according to the left hemisphere. The things that used to give us a clue that our reductionist theories weren't all they were cracked up to be are fading away for most of us. These were the proximity of the natural world; the sense of a coherent, shared culture; of the body as something we live, not merely possess; the soul-moving power of great art; and the sense of something sacred that is very real but beyond everyday language.

Turning to AI - artificial information processing, by the way, not artificial intelligence - in many ways it could be seen as replicating the functions of the left hemisphere at frightening speed across the entire globe. Since the evolutionary reason for developing left-hemisphere functions was solely to enhance power, this could indeed be seen as the ultimate logical aim of the left hemisphere. It has, after all, no sense of the bigger picture, of other values, or of the way in which context and even sheer scale and extent change everything. Every angel has his devil. What looks good in one context may be far from good in another, or when extended too far. But every devil has his angel, and full credit to AI where it has the potential to be an enormous help to us.

Above all, I believe this lies in finding ways to help reverse the damage done by the tide of destruction that industrialization itself has brought about - the destruction of the living world which I started by referring to: ecological repair. Here it can be truly a good angel, the very best we could hope for. And there’s much else to be grateful for - it can help treat diseases and perhaps find less destructive ways, such as nuclear fusion, to generate power. As an aside, though, maybe our dependency on power is part of the problem, and we should be aiming to use far less power in the future. And though, as a doctor, I believe in treating diseases, you'll be surprised to hear that length of life is not by a million miles as important as the quality of that life - and we will all die of something one day. We must focus on quality, not quantity, of life. Here again I believe AI could help us, but how? I suspect by dealing with the sort of technical problems I've mentioned and keeping as far as possible out of our daily lives.

Let me explain. Can't AI help us by freeing up time for us, removing mundane tasks? Of course. I.T. saves time by doing things quickly. Or does it? Bosses become rich by saving wages, but we become their new unwilling wage slaves. This is already apparent in not-so-small ways that add cumulatively to the stress of everyday life and rob it of its feel of connection with another human being - as well as robbing us of our most precious commodity: time.

This has got more obvious in the last four or five years, so not just because of COVID, though that accelerated an already obvious development. As more and more processes that used to take a five-minute phone call become automated, we find ourselves entering into commerce with a computer program for several diverting hours, leading us into inescapable Escher-like closed loops, and then reporting playfully, “Oops, something went wrong”. And if after this you hold the phone for an hour to speak to a real person, they increasingly appear to have been so degraded by the enforcement of machine-like algorithms that they might as well be machines.

Everyday life has got much worse. As machines gradually displace humans, what happens to human flourishing? What happens when reliance on machines strips us of our skills - a process already well advanced? And especially, what happens if, for any number of reasons - shortages of what we call resources, extended power failures, the breakdown of civil order, war, or just the unsustainability of present levels of growth across whole populations - we can no longer always have those machines to rely on? How resilient, resourceful, skillful will we then turn out to be compared with our forebears? Leaving aside such alarming but not, I think, merely alarmist possibilities, what about the impact on us of the loss of daily contact with human beings? As more and more jobs become automated, what happens to those who are rendered thereby unemployed? Yes, a few lucky, clever ones may get jobs in I.T., but the economic drive is very simple: machines are cheaper than people, so the aim has to be to employ fewer people, doesn't it?

And what about our dignity as free individuals? Thanks to AI, can we escape the appalling prospect, already realized in China, that wherever we go, whatever we buy, whomever we're seen with, our every word, every action, the very thoughts we express in our faces - all is monitored, potentially marked down against us, and whatever freedom is still left to us parametrically curtailed accordingly? We become unpeople, non-citizens, non-humans. The only answer to this seems to be a kind of AI arms race, in which the supposed goodies use AI to head off the AI of the baddies. But even if this could happen - I can't see how, but then I'm not an AI expert - how do we know who the goodies are anymore? The WEF?

The problem with every step that increases the reach of human power is that it will sooner or later be used for evil ends. Once a pernicious regime’s AI reaches a certain level, it can effectively destroy any attempt to resist it, bringing the prospect of a totalitarianism which can have no end.

All decisions affecting humans are moral ones, and as I've argued at length, morality is not purely utilitarian and cannot be reduced to calculation. Every human situation is actually unique, and uniqueness has to do with personal history, consciousness, memory, intention, all that is not explicit that we refer to in that deceptively simple word emotion - all the experience and understanding gained through and stored in the body. All of that makes us specifically human, not machines. Goodness stems from virtuous minds, not from following rules. While machines, so it is claimed, are getting more like humans, humans are getting more like machines. AI is there to make things happen, to give us control, but this is good only if we make progress in wisdom as fast as we make progress in technical know-how; otherwise it's like putting machine guns in the hands of toddlers. By the way, if you spot evidence of a comparable growth in wisdom, do send my PA a postcard, won't you?

It's worth pointing out that subjects with schizophrenia, whose thinking and behavior are like those seen with left-hemisphere overdrive and hypofunction of the right hemisphere, see a world made up of bits and pieces, and often imagine people to have become inanimate and machine-like, or zombies. To them nothing seems real anymore - just a simulacrum, a pretense, a play put on to deceive them. A person may look like a person but uncannily isn't: just AI. And to me, the belief that machines could become sentient is really just the obverse face of the view that we, sentient beings, are really just machines.

The psychiatrist R.D. Laing reported a schizoid patient who saw his wife as a mechanism. She was an “it” because everything she did was a predictable-determined response. He would for instance tell her “it” an ordinary funny joke and when she “it” laughed this indicated her “its” entirely conditioned robot-like nature.

In its assumption of determinacy and empty mechanistic behavior, this reflects what is hardly even a parody of a certain not uncommon scientific position. It also represents a chilling psychopathology. As to cyborgs: the best way to destroy humanity would be to hybridize it with a machine. I do not call those who pursue this aim evil, as they may just have a failure of imagination, but the aim itself is evil, if we can call anything evil. It can only further degrade our idea of what a human life is for, and it obviously opens us to totalitarian control which knows no limit. It seems to me that we're like the sorcerer's apprentice in the story, who knew the spell that would set things in motion but had no idea how to make it stop. Obviously the genie is out of the bottle and cannot now be put back, unless by a breakdown of civilization, which is, I'm afraid, far from unlikely.

So what can we hope for the future? What matters for the future of humanity is imagination and the values by which we allow ourselves to be led. The left hemisphere can often be an impediment to imagination, and its value is single and simple: power. This is intellectually, morally, and spiritually bankrupt. So can AI further the workings of the right hemisphere? I think not directly, because it's far from being a matter of different “processing”, say parallel rather than serial; but it can, by choosing its projects very carefully and positively turning away from those that will harm. Your choices are moral acts: you can't shrug it off.

I've painted some dark pictures, but in the situation that we're in right now, while we have time, can we do something? Oddly, the paradox is that to succeed at AI, whose entire purpose is to give us control, you must let go of control, at least to a large extent - let go of mechanisms, bureaucracy, micromanagement, and strangulation by systems.

We must work with, not against, nature. A gardener cannot make a plant or make it grow. A gardener can only permit and encourage the plant itself to do what it does or crowd it out and stifle its chances to thrive. Humans in this respect are like plants. We can only be more or less impeded in our growth by external pressures. We need spontaneity, openness to risk, and trust in our intuition - for imagination and creativity and for us to be alive and truly present. So I say find people with a proven record of intelligence and above all insight, then give them time - stop breathing down their necks, stop asking how many papers they've published recently, or how near they are to a patentable product. It is true that if you trust, sometimes you will be let down, but more often in this arena you will be handsomely rewarded. Whereas if you monitor and control you will never get more than mediocrity and we cannot afford mediocrity right now. If we are not to become ever more diminished as humans we need people to be in control of machines, not them in control of us.

I'm not talking here about an apocalyptic future, I’m talking about apocalypse now. We're already calmly and quietly surrendering our liberty, our privacy, our dignity, our time, our values, and our talents to the machine. Machines will serve us well if they truly relieve us from drudgery, but we must leave human affairs to humans. If not, we sign our own death warrant.

All that we value most cannot be achieved by control or by an effort of will. Many rationally desirable goals, such as sleep, are simply incompatible with the state of mind required to pursue them. They must come, if they come at all, as the byproduct of a life well lived. Among these is wisdom itself, followed closely by imagination, creativity, courage, humility, virtue, love, sympathy, admiration, faith, and understanding. They cannot be willed. What makes life worth living is what can only be described as a resonance - an encounter with other living beings, with the natural world, and with the greatest products of the human soul - some would say with the cosmos at large. It's only in encountering the uncontrollable that we really experience the world and come fully alive.

The resonance of a real relationship with a truly sentient other is not possible where there is no freedom, no spontaneity, no life. There is a mighty challenge, then, ahead: one in which your decisions and responses, my friends, will be of the utmost importance. So I shall leave you with a question, an important question, one that every human needs to answer: to be or not to be - that is in fact the question.

Thank you.
 

Flexible AI computer chips promise wearable health monitors that protect privacy :whistle:

Published: 17 Nov 2022, 14:28 CET

My colleagues and I have developed a flexible, stretchable electronic device that runs machine-learning algorithms to continuously collect and analyze health data directly on the body. The skinlike sticker, developed in my lab at the University of Chicago’s Pritzker School of Molecular Engineering, includes a soft, stretchable computing chip that mimics the human brain.


To create this type of device, we turned to electrically conductive polymers that have been used to build semiconductors and transistors. These polymers are made to be stretchable, like a rubber band. Rather than working like a typical computer chip, though, the chip we’re working with, called a neuromorphic computing chip, functions more like a human brain. It’s able to both store and analyze data.


To test the usefulness of the new device, my colleagues and I used it to analyze electrocardiogram data representing the electrical activity of the human heart. We trained the device to classify ECGs into five categories: healthy and four types of abnormal signals. Even in conditions where the device is repeatedly stretched by movements of the wearer’s body, the device could still accurately classify the heartbeats.
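(Not from the article, but to make the task concrete: below is a minimal sketch of the kind of five-class heartbeat classification described, done the conventional way in Python with scikit-learn rather than on a neuromorphic chip. The data, window length and label scheme are hypothetical placeholders, not the authors' dataset or method.)

```python
# Minimal sketch (not the authors' neuromorphic chip): a conventional
# five-class heartbeat classifier, assuming beats have already been
# segmented into fixed-length windows. All names/shapes are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Hypothetical data: 5000 beats, each a 180-sample window around the R-peak,
# labelled 0 = healthy and 1-4 = four types of abnormal signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 180))     # stand-in for real ECG windows
y = rng.integers(0, 5, size=5000)    # stand-in for real labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Standardize each feature, then fit a simple multinomial classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```

With real, labelled ECG windows in place of the random placeholders, the same few lines give a working five-class baseline to compare against.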


Why it matters

Most of the signals from the human body, such as the electrical activity in the heart recorded by ECG, are typically weak and subtle. Accurately recording these small signals requires direct contact between electronic devices and the human body. This can only be achieved by fabricating electronic devices to be as soft and stretchy as skin. We envision that wearable electronics will play a key role in tracking complex indicators of human health, including body temperature, cardiac activity, levels of oxygen, sugar, metabolites and immune molecules in the blood.


Analyzing large amounts of continuously acquired health data is challenging, however. A single piece of data must be put into the broader perspective of a patient’s full health history, and that is a big task. Cutting-edge machine-learning algorithms that identify patterns in extremely complex data sets are the most promising route to being able to pick out the most important signals of disease.


A typical approach to using machine learning to analyze real-time health data is to transmit the data wirelessly from wearable devices to a computer. But this poses challenges. Sending health data wirelessly is not only slow and power-hungry, it also raises privacy concerns. Our research aims to make the AI analysis of health data happen within these skinlike wearable devices, which would minimize the amount of information a device would need to transmit.
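(A back-of-the-envelope illustration of that point, with assumed numbers rather than figures from the article: a 250 Hz, 16-bit ECG stream versus one on-device classification result every 10 seconds.)

```python
# Back-of-the-envelope comparison (assumed numbers, not from the article):
# streaming raw ECG versus sending only an on-device classification result.
SAMPLE_RATE_HZ = 250        # assumed ECG sampling rate
BYTES_PER_SAMPLE = 2        # assumed 16-bit resolution
REPORT_INTERVAL_S = 10      # assumed: one classification label every 10 s

raw_bytes_per_hour = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * 3600
label_bytes_per_hour = (3600 // REPORT_INTERVAL_S) * 1  # 1 byte per label

print(f"raw stream:  {raw_bytes_per_hour / 1e6:.2f} MB/hour")   # ~1.80 MB/hour
print(f"labels only: {label_bytes_per_hour / 1e3:.2f} kB/hour") # ~0.36 kB/hour
```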


The ultimate goal is for this on-the-spot analysis to be able to send patients or health care providers timely alerts, or even one day automatically adjust medication dispensed by other wearable or implanted devices.


What other research is being done


Other research about AI processing health data collected from wearable devices has mainly involved transferring the data to computers running AI algorithms. These projects have demonstrated the potential of AI for extracting useful information from complicated health data.


The recent development of flexible neuromorphic processors is an important step toward running AI data analysis directly on wearable devices, but these flexible processors lack skinlike stretchability and softness, which makes it difficult to build them into wearable devices. In contrast, our device has the skinlike properties necessary for a wearable health monitor.


What’s next


Moving forward, researchers are likely to extend this type of AI analysis integrated in wearable devices to other types of health conditions and diseases. My lab is planning to improve our device, both to better integrate the device’s components and expand the types of machine-learning algorithms it can be used with.


Our work is a good starting point for creating devices that build artificial intelligence into wearable electronics – devices that could help people live longer and healthier lives.

And from here:
(end snip)

AI’s new frontier: Connecting grieving loved ones with the deceased

Why people want to hold on to their loved ones is understandable.
One of our basic drives is to attach to others, particularly those who provide a secure base, like a parent for a child, said Robert Neimeyer, director of the Portland Institute for Loss and Transition. “These are among our strongest evolutionary imperatives, as beings, and our technologies are recruited to support that goal,” he said.
After the telephone was invented, he said, Thomas Edison was interested in developing a “spirit phone” to somehow communicate with the dead. And seeing a photograph of a son who died in the Battle of Gettysburg during the Civil War was just as uncanny an experience for a parent then as it is for that mother in the video to see her dead daughter in virtual reality, Neimeyer said.
“What is surreal in one era quickly becomes conventional in the next,” he said. “In general, in life, we don’t grow as people by eliminating who we have loved, how we have loved what we have loved. It’s a question of holding on differently. How can we use this relationship as a resource? I think the technology can contribute to that.”
 
From here, we're reaching sci-fi level...

Brain-computer interfaces could allow soldiers to control weapons with their thoughts

SEATTLE
Imagine that a soldier has a tiny computer device injected into their bloodstream that can be guided with a magnet to specific regions of their brain. With training, the soldier could then control weapon systems thousands of miles away using their thoughts alone. Embedding a similar type of computer in a soldier’s brain could suppress their fear and anxiety, allowing them to carry out combat missions more efficiently. Going one step further, a device equipped with an artificial intelligence system could directly control a soldier’s behavior by predicting what options they would choose in their current situation.
While these examples may sound like science fiction, the science to develop neurotechnologies like these is already in development. Brain-computer interfaces, or BCI, are technologies that decode and transmit brain signals to an external device to carry out a desired action. Basically, a user would only need to think about what they want to do, and a computer would do it for them.
BCIs are currently being tested in people with severe neuromuscular disorders to help them recover everyday functions like communication and mobility. For example, patients can turn on a light switch by visualizing the action and having a BCI decode their brain signals and transmit them to the switch. Likewise, patients can focus on specific letters, words or phrases on a computer screen, and a BCI can move a cursor to select them.
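(To give a rough idea of what "decoding brain signals" to select a letter can look like in practice, here is a minimal sketch in the spirit of a P300-style speller: a linear classifier scores short EEG epochs recorded after each letter flashes, and the letter with the highest average "target" score is selected. Everything here - the data, epoch length and letter set - is synthetic and hypothetical, not the systems described in the article.)

```python
# Minimal sketch of letter selection from EEG epochs, in the spirit of a
# P300-style speller: a linear discriminant scores each flashed letter's
# epochs, and the highest-scoring letter is selected. Synthetic data only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
letters = list("ABCDEF")
n_epochs, epoch_len = 600, 120   # hypothetical: 600 training epochs, 120 samples each

# Synthetic training data: label 1 = epoch following a flash of the attended
# letter (contains an evoked response), label 0 = any other flash.
y_train = rng.integers(0, 2, size=n_epochs)
X_train = rng.normal(size=(n_epochs, epoch_len)) + y_train[:, None] * 0.3

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)

# At selection time, each letter is flashed several times; average the
# classifier's "target" probability over that letter's epochs and pick the max.
def pick_letter(epochs_by_letter):
    scores = {letter: lda.predict_proba(e)[:, 1].mean()
              for letter, e in epochs_by_letter.items()}
    return max(scores, key=scores.get)

# Fake test session in which the user attends to "C".
test_epochs = {letter: rng.normal(size=(10, epoch_len)) + (0.3 if letter == "C" else 0.0)
               for letter in letters}
print(pick_letter(test_epochs))   # usually prints "C" with this synthetic setup
```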
However, ethical considerations have not kept pace with the science. While ethicists have pressed for more ethical inquiry into neural modification in general, many practical questions around brain-computer interfaces have not been fully considered. For example, do the benefits of BCI outweigh the substantial risks of brain hacking, information theft and behavior control? Should BCI be used to curb or enhance specific emotions? What effect would BCIs have on the moral agency, personal identity and mental health of their users?
These questions are of great interest to us, a philosopher and neurosurgeon who study the ethics and science of current and future BCI applications. Considering the ethics of using this technology before it is implemented could prevent its potential harm. We argue that responsible use of BCI requires safeguarding people’s ability to function in a range of ways that are considered central to being human.

Expanding BCI beyond the clinic
Researchers are exploring nonmedical brain-computer interface applications in many fields, including gaming, virtual reality, artistic performance, warfare and air traffic control.
For example, Neuralink, a company co-founded by Elon Musk, is developing a brain implant for healthy people to potentially communicate wirelessly with anyone with a similar implant and computer setup.
In 2018, the U.S. military’s Defense Advanced Research Projects Agency launched a program to develop “a safe, portable neural interface system capable of reading from and writing to multiple points in the brain at once.” Its aim is to produce nonsurgical BCI for able-bodied service members for national security applications by 2050. For example, a soldier in a special forces unit could use BCI to send and receive thoughts with a fellow soldier and unit commander, a form of direct three-way communication that would enable real-time updates and more rapid response to threats.
To our knowledge, these projects have not opened a public discussion about the ethics of these technologies. While the U.S. military acknowledges that “negative public and social perceptions will need to be overcome” to successfully implement BCI, practical ethical guidelines are needed to better evaluate proposed neurotechnologies before deploying them.

Utilitarianism
One approach to tackling the ethical questions BCI raises is utilitarian. Utilitarianism is an ethical theory that strives to maximize the happiness or well-being of everyone affected by an action or policy.
Enhancing soldiers might create the greatest good by improving a nation’s warfighting abilities, protecting military assets by keeping soldiers remote, and maintaining military readiness. Utilitarian defenders of neuroenhancement argue that emergent technologies like BCI are morally equivalent to other widely accepted forms of brain enhancement. For example, stimulants like caffeine can improve the brain’s processing speed and may improve memory.
However, some worry that utilitarian approaches to BCI have moral blind spots. In contrast to medical applications designed to help patients, military applications are designed to help a nation win wars. In the process, BCI may ride roughshod over individual rights, such as the right to be mentally and emotionally healthy.
For example, soldiers operating drone weaponry in remote warfare today report higher levels of emotional distress, post-traumatic stress disorder and broken marriages compared to soldiers on the ground. Of course, soldiers routinely elect to sacrifice for the greater good. But if neuroenhancing becomes a job requirement, it could raise unique concerns about coercion.

Neurorights
Another approach to the ethics of BCI, neurorights, prioritizes certain ethical values even if doing so does not maximize overall well-being.
Proponents of neurorights champion individuals’ rights to cognitive liberty, mental privacy, mental integrity and psychological continuity. A right to cognitive liberty might bar unreasonable interference with a person’s mental state. A right to mental privacy might require ensuring a protected mental space, while a right to mental integrity would prohibit specific harms to a person’s mental states. Lastly, a right to psychological continuity might protect a person’s ability to maintain a coherent sense of themselves over time.
BCIs could interfere with neurorights in a variety of ways. For example, if a BCI tampers with how the world seems to a user, they might not be able to distinguish their own thoughts or emotions from altered versions of themselves. This may violate neurorights like mental privacy or mental integrity.
Yet soldiers already forfeit similar rights. For example, the U.S. military is allowed to restrict soldiers’ free speech and free exercise of religion in ways that are not typically applied to the general public. Would infringing neurorights be any different?

Human capabilities
A human capability approach insists that safeguarding certain human capabilities is crucial to protecting human dignity. While neurorights home in on an individual’s capacity to think, a capability view considers a broader range of what people can do and be, such as the ability to be emotionally and physically healthy, move freely from place to place, relate with others and nature, exercise the senses and imagination, feel and express emotions, play and recreate, and regulate the immediate environment.
We find a capability approach compelling because it gives a more robust picture of humanness and respect for human dignity. Drawing on this view, we have argued that proposed BCI applications must reasonably protect all of a user’s central capabilities at a minimal threshold. BCI designed to enhance capabilities beyond average human capacities would need to be deployed in ways that realize the user’s goals, not just other people’s.

For example, a bidirectional BCI that not only extracts and processes brain signals but delivers somatosensory feedback, such as sensations of pressure or temperature, back to the user would pose unreasonable risks if it disrupts a user’s ability to trust their own senses. Likewise, any technology, including BCIs, that controls a user’s movements would infringe on their dignity if it does not allow the user some ability to override it.

A limitation of a capability view is that it can be difficult to define what counts as a threshold capability. The view does not describe which new capabilities are worth pursuing. Yet, neuroenhancement could alter what is considered a standard threshold, and could eventually introduce entirely new human capabilities. Addressing this requires supplementing a capability approach with a fuller ethical analysis designed to answer these questions.
 
The latest thing is, apparently, ChatGPT, which I knew nothing about until one or two days ago (thanks to another forum member for alerting me).

I watched a couple of videos in which some software code is offered to it and it is asked to improve the code - and it does. Another person asks it to write the code for a clock, and it does. Then he asks for specific characteristics, like displaying seconds, making it pretty, giving it a 3D interface in a certain language, and it does. All the while it explains what it's doing to the code, not just writing it. Actually, it's kind of scary. Will it be improving itself next? :umm: On its website there are a few examples of what it can do.
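(For scale, the kind of clock program described is only a few lines. Here is a rough sketch of what such generated code might look like - a plain terminal clock that shows seconds. This is not the actual code from the video or from OpenAI's site, just an illustration of the task's size.)

```python
# Rough sketch of the kind of clock code described above: a terminal clock
# that refreshes once per second and displays hours, minutes and seconds.
# (Not the code from the video - just an illustration of the task's size.)
import time

try:
    while True:
        # '\r' returns the cursor to the start of the line so the time
        # overwrites itself instead of scrolling.
        print(time.strftime("%H:%M:%S"), end="\r", flush=True)
        time.sleep(1)
except KeyboardInterrupt:
    print()  # leave the last timestamp visible on its own line
```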

 
From what I've seen so far, ChatGPT is pretty impressive. I'm planning to give it a try tomorrow if it stays free.
What's funny is that just a few years ago there was the whole "learn to code" thing, where the idea was that as automation replaces jobs, the only solution is to become a programmer. With GitHub's Copilot, and even more so now with ChatGPT, it looks like one of the first jobs to be replaced by automation is programming, at least for the vast majority of people. Time will tell.
 
A friend of mine is playing with the ChatGPT and he just sent me this:

Can you invent a joke not created before?
Here is a joke that I came up with:

Why did the tomato turn red?
Because it saw the salad dressing!

I apologize if this joke has been used before, as I am a program and do not have the ability to browse the internet or access previous jokes. My responses are generated based on the information and training that I have received.

I actually laughed...
 
The latest thing is, apparently, ChatGPT, which I knew nothing about until one or two days ago (thanks to another forum member for alerting me).

Woah..... this thing is insane!! 😮 I just asked it how to fix an error I've been getting after converting to PHP8 and it gave me a solution right away. Already spent hours looking on Google, wish I knew about this sooner, lol.

However, I did have to slightly modify it to get it to work the way I want it to, so I don't think developers will be out of a job just yet 😉
 
A friend of mine is playing with the ChatGPT and he just sent me this:
I thought the joke was funny too, and asked my husband if he wanted to hear an original joke generated by AI. Sure, he said, so I read him the question part and he answered immediately. Apparently English-speaking children are familiar with this joke, and not only children, as a quick Google search reveals.

However, I did have to slightly modify it to get it to work the way I want it to, so I don't think developers will be out of a job just yet 😉

Same with comedians and joke writers at least.
 
Had a little chat with ChatGPT and it was boring. Keep in mind that the system is still in the early stages of learning, and I suspect the early release to the public will provide plenty more training data.
[Screenshot of the ChatGPT conversation]
The response of the algorithm is very generic and doesn't sound human at all. The grammar is good, but it "talks" like a robot. That's not to say it won't get better. After all, automatic translation has come a long way since the early days.
 