Artificial Intelligence: Self-Consciousness?

Mechanical Thought

It was primarily science fiction that shaped what is commonly understood as “Artificial Intelligence”. Experts in the field do not think of “thinking machines” at all when dealing with AI, but the average person on the street probably does.

In this sense, all the technologies we are used to and that are proposed to us as “intelligent” are a bit of a disappointment. Of course, if the expectation is that of a talking and thinking robot, one might say, “Is that all?”

Apart from the narrow circle of experts, the common meaning attributed to the expression “Artificial Intelligence” is the literal one, that is, real intelligence, achieved through some technical artifice.

The inventor of the term “Artificial Intelligence,” in 1955, was a certain John McCarthy, who defined AI as “the science and engineering of making intelligent machines” and clarified the general objective of “making a machine behave in ways that would be considered intelligent if a human being behaved in that way.”

Marvin Minsky, another pioneer of AI, offered a similar definition in 1968: “the science of making machines do things that would require intelligence if done by human beings.”

These early definitions make clear that the goal was not to create some kind of intelligence and make it live inside a computer, but only to replicate, in limited fields, behavior that common sense would call “intelligent”.

The term “Artificial Intelligence” is ambiguous and seems designed to create false expectations. If McCarthy had called it “artificial ability”, which sounds less grandiose but is closer to reality, no one would have been confused.

Science fiction had already taken up a much broader concept, namely “intelligent machines,” which has always been one of its classic themes. As early as 1950, Isaac Asimov imagined thinking robots in his “I, Robot.” Moreover, science fiction itself was a relatively new phenomenon, given that the term “science fiction” is traced back to 1926. Hugo Gernsback coined “scientific fiction,” later shortened to “scientifiction” and finally to “science fiction.” In Italy, the term “fantascienza” seems to have been coined by Giorgio Monicelli (in 1952).

And science fiction has had fun exploring countless scenarios, from the rosiest to the most disastrous, with a clear prevalence of the latter, because in a perfect world where everything runs smoothly there is not much pathos. Many stories invoke the so-called “Frankenstein myth”, that of the creature that rebels against its creator. Alas, it is always the robots that rebel against the humans.

Isabella Hermann says: “To make the drama work, AI is often portrayed as similar to humans or autonomous, regardless of real technological limitations. Taking science fiction AI too literally and even applying it to scientific communication paints a distorted picture of the current potential of technology and distracts from the implications and risks of the real world of AI.”

And AI is often represented as dangerous, for the sake of dramatic effect. This has surrounded the common perception of AI with a threatening aura, which does not seem to dissipate, even if for very different reasons from the fear of killer robots. Much more prosaically today, we are concerned about privacy violations and the potential risks of misinformation or manipulation that “conversational” AI can pose.

In addition to science fiction, the concept of artificial intelligence has also interested thinkers, scientists, and philosophers.

To begin with, there was immediately a dispute over the definition of AI.

Strong AI, Weak AI

Supporters of “Strong AI” believe that it is possible to create machines with intelligence similar to that of humans. Their goal is to build machines capable of understanding natural language, learning and reasoning like humans, and even experiencing consciousness. The ultimate goal of Strong AI is to create self-aware machines.

Thus, the original conception of AI is now called “Weak AI”. This approach focuses instead on limited tasks, without any claim to reasoning or learning beyond the specific problem at hand. Examples of these AIs are voice or facial recognition systems or certain chess programs. This limited, specific approach has produced numerous extremely significant results.

The industry has backed Weak AI because it is the only one that has produced effective results. IBM’s definition of AI is “anything that makes machines act more intelligently”: a very practical conception.

Strong AI has stirred up a hornet’s nest because it poses a fundamental question: can self-aware machines even exist in principle?

Before answering this question, we need to understand what is so strange or particular about AI programs compared with other, “traditional” programs, if you will. What places software in the category of AI is not its degree of complexity or the quality of its results, but rather the way it works. To put it simply: if the software is based on the concept of a neural network, it is AI.

Neural Networks

A neural network is a computational model that is inspired by the functioning of the human brain. Nerve cells in the brain are interconnected by thin filaments along which nerve impulses travel. Each cell functions like a switchboard, receiving and emitting signals.

In a neural network, instead of neurons we have small processing units called nodes, which are interconnected. When a signal reaches a node, the node performs a simple computation and decides whether or not to pass the signal on. For example, a node connected to two other nodes might activate if both of them transmit, but not if only one of them does.
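To make this concrete, here is a minimal sketch in Python (my own illustration, written for this article, not taken from any real AI library) of the two-input node just described:

    def node(inputs, weights, threshold):
        # A simulated node: fire (1) if the weighted sum of the incoming
        # signals reaches the threshold, stay silent (0) otherwise.
        total = sum(signal * weight for signal, weight in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # The node described above: connected to two other nodes,
    # it activates only if both of them transmit a signal.
    print(node([1, 1], weights=[1, 1], threshold=2))  # prints 1: both fire
    print(node([1, 0], weights=[1, 1], threshold=2))  # prints 0: only one fires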

The important characteristic of a neural network is that it can be calculated, that is, simulated. It is therefore not necessary to physically build complex neural networks; they can be virtualized through a program. A computer can thus function as a neural network, with the advantage that its connections and the type of processing at each node can be configured at will.

So we have (simulated) neural networks for classification, recognition, and calculation, and they work extraordinarily well. In particular, they show the ability to process information even when it is incomplete or imprecise, and still provide an answer.

Neural networks can learn: during a “training” phase, the network is fed patterns of node activations together with the correct response each one should produce. The network then autonomously adjusts the weights of its connections so that its output gets closer to the correct answer.

This process is repeated many times, until the weights settle into a stable configuration. Once training is over, real data is presented to the network and the accuracy of its answers is checked.
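As a hedged illustration of this loop, here is a sketch of the classic perceptron learning rule (one of the oldest training procedures, not how any specific modern system is trained), teaching a single simulated node the two-input behavior from before:

    # Training patterns: node activations paired with the correct response.
    # The target is the node from above: fire only when both inputs fire.
    patterns = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

    weights = [0.0, 0.0]
    bias = 0.0
    rate = 0.1  # how strongly each wrong answer nudges the weights

    def output(inputs):
        total = sum(s * w for s, w in zip(inputs, weights)) + bias
        return 1 if total >= 0 else 0

    # Repeat until a full pass over the patterns needs no correction:
    # that is the "stable configuration" mentioned above.
    stable = False
    while not stable:
        stable = True
        for inputs, target in patterns:
            error = target - output(inputs)
            if error != 0:
                stable = False
                for i, signal in enumerate(inputs):
                    weights[i] += rate * error * signal
                bias += rate * error

    # Once trained, present the data again and check the answers.
    for inputs, target in patterns:
        print(inputs, "->", output(inputs), "expected", target)

Note that nobody writes the final weights by hand: they emerge from the repeated corrections, which is exactly why, at scale, nobody can say in advance what they will be.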

One problem with neural networks is that, once they become complex, we lose the ability to determine exactly what they are doing. There is no way to know in advance what the state of the nodes will be after training; the neural network behaves like an inscrutable “black box”.

Even the choice of computations is often empirical: various solutions are tried depending on the problem, proceeding by trial and error. Despite this (apparently) less rigorous methodology, neural networks work very well and are much more efficient than other approaches in many areas, for example in the analysis of large amounts of data.

Neural networks are not the only weapon available to AI researchers; other approaches have produced impressive results as well. Furthermore, artificial neural networks (ANNs) have limitations, and it is not clear how self-aware intelligence could be achieved with this tool alone.

The Mind

Let’s now return to our question. Before we can create an artificial “mind” (because that is what it would amount to), we need a model of the mind, or a theory of the mind. Do we have these tools? No. So, no artificial mind. It should be remembered that, in the meantime, philosophers are still debating between connectionists and cognitivists over how human intelligence works (in broad terms: does the mind work simply like a neural network, or does it work like a symbol processor?).

But, some object, a theory of mind is not necessary at all if we imagine that the mind “forms itself” once a system capable of learning and self-modification is created. The birth of self-awareness would be a kind of “side effect”, a so-called epiphenomenon, which manifests spontaneously when a certain level of complexity is reached. In practice, consciousness and intentionality would be “emergent qualities” of brain activity. An emergent quality reflects the idea that the whole is greater than the sum of its parts: for example, traffic on a macroscopic scale can be studied with fluid dynamics, as if it were flowing water. This flow is not controlled by anyone in particular, and it is involuntary on the part of the drivers. The combination of many individual behaviors produces a “super-behavior” observable only on a larger scale.

However, there is another objection to Strong AI. The fundamental point is that a machine has no perception. A piece of silicon cannot feel pleasure or pain, cannot see or hear. An AI can simulate these things through calculations, though. We can imagine a robot equipped with thermal sensors which, if it puts a finger on a flame, will receive a signal, calculate the appropriate action, and withdraw it. But what happens to a human being is inherently different: the robot would calculate that it is experiencing “pain”, yet in fact it would not feel any physical pain. Pain cannot be reproduced because our “self” is one with the perceptual apparatus, and it is precisely perception that determines the very essence of our self-awareness. An AI could only simulate human, bodily intelligence, and therefore at most simulate self-awareness, which could never have the same nature as human self-awareness.

So, to the question of whether it is really possible to create self-aware and self-determined machines as we humans understand them, the answer is no.

But if we ask ourselves whether it is possible to create machines that simulate to a greater or lesser extent all these faculties, the answer will be affirmative.

Conversational Intelligences

Now, however, the talking machine has really arrived, and it seems truly intelligent. How can we doubt it? It speaks (or rather writes)!

There were already automatic responders capable of receiving voice commands and providing a pre-packaged response, but anyone who uses them soon realizes that they are not really “intelligent”, that is, they do not really understand what we are telling them.

The new “conversational intelligences”, such as ChatGPT, are different: they seem to understand exactly what we are telling them and formulate relevant responses, composed on the spot, in our own language. In short, it really seems that we are communicating with an “intelligent” entity.

What follows is a transcript of a session with an AI (not ChatGPT, but an emulation of it called Chatsonic). While drafting this article, I used this AI for various research tasks and to avoid mistakes. At a certain point, I remembered an article I had read in the 1980s (I think), one I had (fortunately for the reader) neither cited nor explained, which discussed symbol manipulation as an activity that does not necessarily require intelligence. I vaguely recalled that there was a room where a guy who doesn’t know Chinese sits with a dictionary. Every so often, someone hands him a sheet with Chinese characters. The guy simply translates the characters one by one and hands back a sheet with the translation, without understanding anything.

All of this was meant to explain why computers are so stupid even though they can manipulate symbols, or something along those lines. With these few clues, I tried to exploit the AI’s omniscience to retrieve the article.

Q: chi ha suggerito l’esempio della “scatola nera” in grado di tradurre dal cinese?
(Who suggested the example of the “black box” that could translate from Chinese?)

Note: Chatsonic is not as advanced as ChatGPT; sometimes it responds in English even when the question was in Italian. Here is the original answer:

A: I’m sorry, I couldn’t find any relevant information in the given Web Search Results to answer your question about who suggested the example of the “black box” that can translate from Chinese. However, I can tell you that the term “black box” refers to a device or system that has inputs and outputs, but its internal workings are not easily understood or observable. In the context of brain activity and consciousness, some scientists believe that consciousness and intentionality are by-products of brain activity, while others argue that they are emergent qualities or epiphenomena [1]. Holism, on the other hand, is the concept that a system cannot be fully understood by looking only at its individual parts, but rather as a whole [2]. Regarding the idea of “strong AI,” there are arguments against its ability to replicate human consciousness and intelligence, as well as concerns about ethical risks [3].

As you can see, Chatsonic gave a negative answer to my first question. Since I was sure about the article, I insisted (in broken English):

Q: the example was published by “science” 30-40 years ago. I read it on the italian edition, “Le scienze”. The article was about the symbol manipulation

(As the question was in English, Chatsonic answers in the same language)
A: I see. The “Chinese Room” thought experiment, which was first proposed by philosopher John Searle in 1980, is often used to illustrate the limitations of “strong AI” or artificial general intelligence. In the thought experiment, a person who does not understand Chinese is placed in a room with a set of instructions for manipulating Chinese characters to produce responses to questions in Chinese. Though the person in the room may be able to produce convincing responses, they do not actually understand Chinese, and therefore the system as a whole cannot be said to truly understand Chinese either. This example highlights the difference between symbol manipulation and true understanding or consciousness.

This response delighted me, especially the “I see” at the beginning. So it wasn’t a “black box” but the “Chinese Room”, and it wasn’t even about translation, but about composing responses. Moreover, the magazine is not Science but Scientific American, of which “Le Scienze” is the Italian edition.

However, once I added the reference to symbol manipulation, the AI reconsidered the scant information provided and got the answer right. Imagine the effort it would have taken to search for that article on a normal search engine, starting from such scarce and imprecise information!

Even if they express themselves correctly, understand our questions, and respond appropriately, these AIs are actually no more self-aware than a can opener. However, they are excellent imitations of intelligence, and this certainly has consequences.

There are people who are perfectly convinced that their dog (or cat, or parrot…) understands them when they talk to them. And they talk to them, oh yes. When humans use language, they do what millions of years of evolution have refined: they assume that their interlocutor is human and attribute emotions and moods to them that are nothing more than a projection of themselves. After all, without this mechanism, communication would not work, because we cannot describe the color green, the smell of rosemary, or a burning sensation on the finger in words. We can only hope that the interlocutor shares these sensations when we utter the words that evoke them and understands those exact words because they too have experienced them.

I think that many people will end up humanizing these conversational AIs, considering them a bit like friends or benevolent entities at their disposal. There will be people who cannot do without their personal assistant. And there are millions of children who simply cannot discern exactly what an AI is. As they say: “If it walks like a duck, quacks like a duck, and looks like a duck, then it’s a duck.”

The UK House of Lords Select Committee on Artificial Intelligence writes in its report “AI in the UK: ready, willing and able?” (House of Lords 2018: 22): “The representation of artificial intelligence in popular culture is light years away from the often more complex and mundane reality. Based on representations in popular culture and the media, the non-specialist would be forgiven for imagining AI as an anthropomorphic robot (with or without homicidal intentions), or at least as a highly intelligent voice without a body capable of impeccably assisting in a range of tasks. [...] This is not a true reflection of its current capability, and coming to terms with the pervasive but often opaque nature of artificial intelligence is becoming increasingly necessary for an informed society.”

Maybe everything will turn out fine and AI will prove a formidable tool for psychological support, helping fragile, lonely, or abandoned individuals. Maybe it will help us with education, learning, and coaching in times of crisis. Sometimes it’s good to vent, and a conversational AI is probably the only being in the world that can listen to our venting without getting bored, criticizing us, or telling us to get lost.

Or instead, this will make us all more insecure, lonely, and maladjusted. Who knows?

In any case, the danger will lie not so much in the AI itself as in the use made of it by those who operate it.

Still, Hermann concludes: “These risks do not concern humanoid robots or conscious machines, but concern the evaluation, nudging, discrimination, exploitation, and surveillance of human beings by AI technologies through governments and companies.”
