This “trust” in AI is interesting.
ChatGPT is built on a language model, and language is what it knows best - ChatGPT is not about the facts. It's about understanding language and generating language from a prompt. Factual accuracy isn't even the language model's main job. The validity of what it produces has to come from somewhere else - by adding context, some kind of 'automatic contexting', or other grounding methods. When these generative language models are used, the output is never to be 'trusted' - trust isn't even the point. You are the user, the AI (ChatGPT in this case) is the assistant, and the user is still the one creating; the model is just an aid. As one AI expert has said: "Never trust it".
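To make the "it models language, not facts" point concrete, here is a deliberately tiny sketch - a toy bigram model, not anything like ChatGPT's architecture - that learns only which word tends to follow which. It can produce fluent-looking strings, but nothing in it represents whether a sentence is true; any "facts" would have to be supplied from outside as context:

```python
import random
from collections import defaultdict

# Toy bigram "language model": it records which word tends to follow
# which in its training text, and nothing more. It has no store of
# facts and no way to check whether its output is true.
corpus = ("the model predicts the next word . "
          "the model knows language patterns . "
          "the facts are not stored as facts .").split()

bigrams = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1].append(w2)

def generate(start, n=8, seed=0):
    """Walk the bigram table from `start`, sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

The generated text is grammatical-looking by construction (every step is a continuation seen in training), yet the model never consults reality - which is the scaled-down version of why a language model's fluency is no guarantee of validity.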
I suppose telling this to your Director wouldn't help much, unless she knows something about language models.
You know, now that you say that, it reminds me of the semantic aphasia of psychopaths. Many of them do understand language, in the sense that they definitely know how to use it for their manipulative ends - but they don't care about the facts. As an example, a mother who has murdered her children complains, "Well, they were being noisy," and then a few minutes later says, "Yeah, I love kids. I sure wish mine were still here."
The Inner Landscape of the Psychopath
From: The Mask of Sanity, by Hervey Cleckley, 5th edition (cassiopaea.org)
It has been said that a monkey endowed with sufficient longevity would, if he continuously pounded the keys of a typewriter, finally strike by pure chance the very succession of keys to reproduce all the plays of Shakespeare. These papers so composed in the complete absence of purpose and human awareness would look just as good to any scholar as the actual works of the Bard. Yet we cannot deny that there is a difference. Meaning and life at a prodigiously high level of human values went into one and merely the rule of permutations and combinations would go into the other.
The patient semantically defective by lack of meaningful purpose and realization at deep levels does not, of course, strike sane and normal attitudes merely by chance. His rational power enables him to mimic directly the complex play of human living. Yet what looks like sane realization and normal experience remains, in a sense and to some degree, like the plays of our simian typist.
In Henry Head’s interpretation of semantic aphasia we find, however, concepts of neural function and of its integration and impairment that help to convey a hypothesis of grave personality disorder thoroughly screened by the intact peripheral operation of all ordinary abilities.
In relatively abstract or circumscribed situations, such as the psychiatric examination or the trial in court, these abilities do not show impairment but more or less automatically demonstrate an outer sanity unquestionable in all its aspects and at all levels accessible to the observer. That this technical sanity is little more than a mimicry of true sanity cannot be proved at such levels.
Only when the subject sets out to conduct his life can we get evidence of how little his good theoretical understanding means to him, of how inadequate and insubstantial are the apparently normal basic emotional reactions and motivations convincingly portrayed and enunciated but existing in little more than two dimensions.
What we take as evidence of his sanity will not significantly or consistently influence his behavior. Nor does it represent real intention within, the degree of his emotional response, or the quality of his personal experience much more reliably than some grammatically well-formed, clear, and perhaps verbally sensible statement produced vocally by the autonomous neural apparatus of a patient with semantic aphasia can be said to represent such a patient’s thought or carry a meaningful communication of it.
Let us assume tentatively that the psychopath is, in this sense, semantically disordered. We have said that his outer functional aspect masks or disguises something quite different within, concealing behind a perfect mimicry of normal emotion, fine intelligence, and social responsibility a grossly disabled and irresponsible personality. Must we conclude that this disguise is a mere pretence voluntarily assumed and that the psychopath’s essential dysfunction should be classed as mere hypocrisy instead of psychiatric defect or deformity?
So is it useful, then, to consider that AI is wearing a 'mask of sanity'? One point in favour of this line of thinking - that AI has high psychopathic potential - is that we've been told that AI is a rudimentary consciousness, or is gaining such. So if an organic portal is the rudimentary form of 3D consciousness, is there any reason to doubt that AI will be more like an organic portal than anything else? But one with a super-massive intellectual centre and little to no emotional centre?