Is Artificial Intelligence Possible?

Jitha Sanal
Sep 1, 2022 11:55:52 PM

"Artificial Intelligence has been brain-dead since the 1970s." This rather ostentatious remark made by Marvin Minsky co-founder of the world-famous MIT Artificial Intelligence Laboratory, was referring to the fact that researchers have been primarily concerned on small facets of machine intelligence as opposed to looking at the problem as a whole. This article examines the contemporary issues of artificial intelligence (AI) looking at the current status of the AI field together with potent arguments provided by leading experts to illustrate whether AI is an impossible concept to obtain.

Because of its scope and ambition, artificial intelligence defies simple definition. Initially, AI was defined as "the science of making machines do things that would require intelligence if done by men". This somewhat meaningless definition shows how young a discipline AI still is, and early definitions like it have been reshaped by the technological and theoretical progress made in the subject. For the time being, a good general definition that illustrates the future challenges of the AI field comes from the American Association for Artificial Intelligence (AAAI), which clarifies that AI is the "scientific understanding of the mechanisms underlying thought and intelligent behaviour and their embodiment in machines".

The term "artificial intelligence" was first coined by John McCarthy at a conference at Dartmouth College, New Hampshire, in 1956, but the concept of machine intelligence is in fact much older. In ancient Greek mythology the smith-god Hephaestus is credited with making Talos, a "bull-headed" bronze man who guarded Crete for King Minos by patrolling the island and frightening off invaders. Similarly, in the 13th century mechanical talking heads were said to have been created to scare intruders, with Albert the Great and Roger Bacon reputedly among their owners. However, it is only in the last 50 years that AI has really begun to pervade popular culture. Our fascination with "thinking machines" is obvious, but it has been distorted by the science-fiction connotations seen in literature, film and television.

In reality the AI field is far from creating the sentient beings seen in the media, yet this does not imply that successful progress has not been made. AI has been a rich branch of research for 50 years and many famed theorists have contributed to the field, but one computing pioneer whose early assessment and arguments remain timely is the British mathematician Alan Turing. In 1950 Turing published a paper called Computing Machinery and Intelligence in which he proposed an empirical test that identifies intelligent behaviour "when there is no discernible difference between the conversation generated by the machine and that of an intelligent person." The Turing test measures the performance of an allegedly intelligent machine against that of a human being and is arguably one of the best evaluation experiments at the present time. The Turing test, also referred to as the "imitation game", is carried out by having a knowledgeable human interrogator engage in a natural-language conversation with two other participants, one a human and the other the "intelligent" machine, communicating entirely through textual messages. If the judge cannot reliably identify which is which, the machine is said to have passed and is therefore intelligent. Although the test has a number of justifiable criticisms, such as not being able to test perceptual skills or manual dexterity, it is a great accomplishment if a machine can converse like a human and cause a human to evaluate it as humanly intelligent by conversation alone.
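
As a structural sketch, the imitation game can be expressed as a simple text loop in which the judge questions two hidden participants and then guesses which is the machine. The code below is purely illustrative: the reply functions are placeholders, not a real conversational system.

```python
import random

def machine_reply(prompt: str) -> str:
    """Placeholder for the allegedly intelligent machine; a real entrant
    would generate a contextual, human-like answer here."""
    return "That is an interesting question."

def human_reply(prompt: str) -> str:
    """Placeholder for the hidden human participant."""
    return input(f"(hidden human, asked: {prompt!r}) > ")

def imitation_game(rounds: int = 5) -> bool:
    """One session: the judge converses by text alone, then guesses."""
    machine_label = random.choice(["A", "B"])          # hide who is who
    human_label = "B" if machine_label == "A" else "A"
    repliers = {machine_label: machine_reply, human_label: human_reply}
    for _ in range(rounds):
        prompt = input("judge > ")
        for label in ("A", "B"):
            print(f"{label}: {repliers[label](prompt)}")
    guess = input("Which participant is the machine, A or B? ").strip().upper()
    return guess != machine_label   # the machine "passes" if the judge is wrong

# Over many sessions and judges, the machine counts as passing only if
# judges cannot identify it reliably better than chance.
```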

Many theorists have disputed the Turing test as an acceptable means of proving artificial intelligence. One argument, posed by the neurosurgeon Professor Geoffrey Jefferson in his 1949 Lister Oration, states: "not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain". Turing replied that "we have no way of knowing that any individual other than ourselves experiences emotions and that therefore we should accept the test." However, Jefferson did have a valid point to make about developing an artificial consciousness. Intelligent machines already exist that are autonomous; they can learn, communicate and teach each other, but creating an artificial intuition, a consciousness, "is the holy grail of artificial intelligence." When modelling AI on the human mind, many illogical paradoxes surface, and you begin to see how the complexity of the brain has been underestimated and why simulating it has not been as straightforward as experts believed in the 1950s. The problem with human beings is that they are not algorithmic creatures; they prefer to use heuristic shortcuts and analogies to familiar situations. This, however, is a psychological implication: "it is not that people are smarter than explicit algorithms, but that they are sloppy and yet do well in most cases."

The phenomenon of consciousness has caught the attention of many philosophers and scientists throughout history, and innumerable papers and books have been published on the subject. However, no other biological singularity has remained so resistant to scientific evidence and so "persistently ensnarled in fundamental philosophical and semantic tangles." Under ordinary circumstances we have little difficulty in determining when other people lose or regain consciousness, and as long as we avoid describing it, the phenomenon remains intuitively clear. Most computer scientists believe that consciousness is an evolutionary "add-on" and can therefore be algorithmically modelled. Yet many recent claims oppose this theory. Sir Roger Penrose, an English mathematical physicist, argues that the rational processes of the human mind are not completely algorithmic and thus transcend computation, and Professor Stuart Hameroff proposes that consciousness emerges as a macroscopic quantum state from a critical level of coherence of quantum-level events in and around cytoskeletal microtubules within neurons. Although these theories have little or no empirical evidence behind them, it is still important to consider each of them, because it is vital that we understand the human mind before we can duplicate it.

Another key problem with duplicating the human mind is how to incorporate the various transitional states of consciousness, such as REM sleep, hypnosis, drug influence and some psychopathological states, within a new paradigm. If these states are removed from the design because of their complexity or their irrelevance to a computer, then it should be pointed out that perhaps consciousness cannot be artificially imitated at all, because these altered states have a biophysical significance for the functionality of the mind.

If consciousness is not algorithmic, then how is it created? Obviously we do not know. Scientists who are interested in subjective awareness study the objective facts of neurology and behaviour, and they have shed new light on how our nervous system processes and discriminates among stimuli. But although such sensory mechanisms are necessary for consciousness, they do not unlock the secrets of the cognitive mind, as we can perceive things and respond to them without being aware of them. A prime example of this is sleepwalking, which affects approximately 25 percent of children and 7 percent of adults. Many sleepwalkers carry out dangerous or nonsensical tasks, yet some individuals carry out complicated, distinctively human-like tasks, such as driving a car. One may dispute whether sleepwalkers are really unconscious or not, but if it is true that they have no awareness or recollection of what happened during their episode, then perhaps here is a key to the cognitive mind.

Sleepwalking suggests at least two general behavioural deficiencies associated with the absence of consciousness in humans. The first is a deficiency in social skills: sleepwalkers typically ignore the people they encounter, and the "rare interactions that occur are perfunctory and clumsy, or even violent." The other major deficit is linguistic: most sleepwalkers respond to verbal stimuli with only grunts or monosyllables, or make no response at all. These two apparent deficiencies may be significant. Sleepwalkers' use of protolanguage, that is, short, grammar-free utterances with referential meaning but no syntax, may illustrate that consciousness is a social adaptation, and that other animals lack not understanding or sensation but language skills, and therefore cannot reflect on their sensations and become self-aware.

In principle Francis Crick, co-discoverer of the double-helix structure of DNA, believed this hypothesis. After he and James Watson solved the mechanism of inheritance, Crick moved to neuroscience and spent the rest of his life trying to answer the biggest biological question: what is consciousness? Working closely with Christof Koch, he published his final paper in the Philosophical Transactions of the Royal Society of London, and in it he proposed that an obscure part of the brain, the claustrum, acts like the conductor of an orchestra and "binds" vision, olfaction and somatic sensation together with the amygdala and other neuronal processing for the unification of thought and emotion. The fact that all mammals have a claustrum means it is possible that other animals possess high intelligence.

So how different are the minds of animals from our own? Can their minds be algorithmically simulated? Many scientists are reluctant to discuss animal intelligence, since it is not a directly observable property, and there is consequently not much published research on the matter. But by avoiding comparisons between human mental states and those of other animals, we are impeding the use of a comparative method that may unravel the secrets of the cognitive mind. Primates and cetaceans have nevertheless been considered by some to be extremely intelligent creatures, second only to humans. Their exalted status in the animal kingdom has led to their involvement in almost all published experiments on animal intelligence. These experiments, coupled with analysis of primate and cetacean brain structure, have led to many theories about the development of higher intelligence as a trait. Although these theories seem plausible, there is some controversy over the degree to which non-human studies can be used to make inferences about the structure of human intelligence.

By many of the physical methods of comparing intelligence, such as the ratio of brain size to body size, cetaceans surpass non-human primates and even rival human beings. For example, "dolphins have a cerebral cortex which is about 40% larger than a human being's. Their cortex is also stratified in much the same way as humans'. The frontal lobe of dolphins is also developed to a level comparable to humans. In addition the parietal lobe of dolphins, which 'makes sense of the senses', is larger than the human parietal and frontal lobes combined. The similarities do not end there; most cetaceans have large and well-developed temporal lobes which contain sections equivalent to Broca's and Wernicke's areas in humans."

Dolphins exhibit complex behaviours. They have a social hierarchy and demonstrate the ability to learn complex tricks. When scavenging for food on the sea floor, some dolphins have been seen tearing off pieces of sponge and wrapping them around their "bottle nose" to prevent abrasions, illustrating yet another complex cognitive process once thought to be limited to the great apes. They apparently communicate by emitting two very distinct kinds of acoustic signals, whistles and clicks. Lastly, dolphins do not use sex purely for procreative purposes; some have been recorded having homosexual sex, which suggests a degree of consciousness. Dolphins also have a different brain structure from humans, one that could perhaps be algorithmically simulated. One example of this dissimilar structure is their sleep technique. While most mammals and birds show signs of REM (rapid eye movement) sleep, reptiles and cold-blooded animals do not. REM sleep stimulates the brain regions used in learning and is often associated with dreaming. The fact that cold-blooded animals do not have REM sleep could be taken to suggest that they are not conscious, and that their brains can therefore be emulated; conversely, warm-blooded creatures display signs of REM sleep, and thus dream, and therefore must have some environmental awareness. Dolphins, however, sleep unihemispherically. They are "conscious" breathers and could drown if they fell fully asleep, so evolution has solved the problem by letting one half of the brain sleep at a time. Since dolphins use this technique and so lack REM sleep, a high intelligence, perhaps even a consciousness, may be possible that does not incorporate the transitional states mentioned earlier.

The evidence for animal consciousness is indirect, but so is the evidence for the big bang, neutrinos, or human evolution. As with any unusual assertion, it must be subjected to rigorous scientific procedure before it can be accepted as even a vague possibility: intriguing, but more proof is required. However, merely because we do not understand something does not mean that it is false, or that it is true. Studying other animal minds is a useful comparative method and could even lead to the creation of an artificial intelligence, one that omits transitional states irrelevant to an artificial entity, based on a model less complex than our own. Still, the central point being illustrated is how limited our understanding of the human brain, or any other brain, really is, and how a seemingly concrete theory can change overnight thanks to enlightening findings.

Furthermore, an analogous incident that exemplifies this argument happened in 1848, when an American railroad foreman, Phineas Gage, shed new light on the field of neuroscience after a rock-blasting accident sent an iron rod through the frontal region of his brain. Miraculously enough, he survived the incident, but even more astonishing to the scientific community at the time were the marked changes in Gage's personality after the rod punctured his brain. Where before Gage had been characterized by his mild-mannered nature, he now became aggressive, rude and "indulging in the grossest profanity, which was not previously his custom, manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires", according to the physician John Martyn Harlow in 1868. However, Gage sustained no impairment of his intelligence or memory.

The serendipity of the Phineas Gage incident demonstrates how architecturally robust the structure of the brain is and, by comparison, how rigid a computer is. Most mechanical systems and algorithms would stop functioning correctly, or stop functioning altogether, if an iron rod punctured them, with the exception of artificial neural systems and their distributed, parallel structure. In the last decade AI has begun to resurge thanks to the promising approach of artificial neural systems.

Artificial neural systems, or simply neural networks, are modelled on the logical associations made by the human brain. They are based on mathematical models that accumulate data, or "knowledge", according to parameters set by administrators. Once the network is "trained" to recognise these parameters, it can make an evaluation, reach a conclusion and take action. In the 1980s neural networks became widely used with the backpropagation algorithm, first described by Paul John Werbos in 1974. The 1990s marked major achievements in many areas of AI and demonstrations of various applications. Most notably, in 1997 IBM's Deep Blue supercomputer defeated the world chess champion Garry Kasparov. After the match Kasparov was quoted as saying the computer played "like a god."
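
To make the train-then-evaluate idea above concrete, here is a minimal sketch of a small two-layer network trained by backpropagation on a toy task. The architecture, data and learning rate are illustrative assumptions, not details taken from any system described in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: output 1 when the two inputs sum to more than 1, else 0.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

# Parameters: 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output.
W1, b1 = rng.normal(0.0, 1.0, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0.0, 1.0, (4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)       # hidden activations, shape (200, 4)
    out = sigmoid(h @ W2 + b2)     # predictions, shape (200, 1)
    # Backward pass: gradients of squared error, propagated layer by layer.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * (h.T @ d_out) / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(f"training accuracy: {((out > 0.5) == y).mean():.2f}")
```

Nothing in the loop encodes the rule "sum greater than 1"; the network acquires it purely by repeatedly nudging its weights against its own errors, which is the sense in which such systems are "trained" rather than programmed.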

That chess match and all its implications raised profound questions about machine intelligence. Many saw it as evidence that true artificial intelligence had finally been achieved. After all, "a man was beaten by a computer in a game of wits." But it is one thing to program a computer to solve the kind of complex, well-defined problems found in chess; it is quite another for a computer to make logical deductions and decisions on its own.

Using neural networks to emulate brain function provides many positive properties, including parallel functioning, relatively quick realisation of complicated tasks, distributed representation of information, graceful degradation under network damage (recall Phineas Gage), and learning abilities, i.e. adaptation to changes in the environment and improvement based on experience. These beneficial properties have inspired many scientists to propose neural networks as a solution for most problems: with a sufficiently large network and adequate training, the network can accomplish many arbitrary tasks without a detailed mathematical algorithm for the problem ever being known. Currently, this remarkable ability is best demonstrated by Honda's Asimo humanoid robot, which can not only walk and dance but even ride a bicycle. Asimo, an acronym for Advanced Step in Innovative Mobility, has 16 flexible joints, requiring a four-processor computer to control its movement and balance. Its exceptional human-like mobility is only possible because the neural networks connected to the robot's motion and positional sensors, which control its 'muscle' actuators, are capable of being 'taught' to do a particular activity.

The significance of this sort of robot motion control is the virtual impossibility of a programmer being able to write a set of detailed instructions for walking or riding a bicycle that could then be built into a control program. The learning ability of the neural network removes the need to define these instructions precisely. However, despite the impressive performance of its neural networks, Asimo still cannot think for itself, and its behaviour remains firmly anchored at the lower end of the intelligence spectrum, such as reaction and regulation.

Neural networks are slowly finding their way into the commercial world. Recently, Siemens launched a new fire detector that uses a number of different sensors and a neural network to determine whether the combination of sensor readings comes from a fire or is just part of the normal room environment, such as dust. Over fifty percent of fire call-outs are false, and of these well over half are due to fire detectors being triggered by everyday activities as opposed to actual fires, so this is clearly a beneficial use of the paradigm.
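
The article gives no details of the internals of Siemens' detector, but the underlying idea, fusing several sensor channels through a small trained classifier so that dust alone does not trigger an alarm, is easy to sketch. Everything below (the three sensor channels, the simulated readings, the scikit-learn model) is a hypothetical stand-in, not the real product:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 1000

# Simulated readings per sample: [smoke_density, temperature_C, co_ppm].
nuisance = np.column_stack([
    rng.normal(0.25, 0.10, n),   # dust or steam raises smoke density a little
    rng.normal(22.0, 3.0, n),    # room temperature stays normal
    rng.normal(2.0, 1.0, n),     # carbon monoxide stays near background
])
fire = np.column_stack([
    rng.normal(0.70, 0.15, n),   # real fires raise all three channels together
    rng.normal(45.0, 10.0, n),
    rng.normal(30.0, 8.0, n),
])
X = np.vstack([nuisance, fire])
y = np.array([0] * n + [1] * n)  # 0 = nuisance, 1 = fire

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)

# A dusty room trips a naive smoke threshold, but the fused model also
# sees normal temperature and CO readings and should stay quiet.
print(clf.predict([[0.45, 22.0, 2.0]]))   # expected: [0] (nuisance)
print(clf.predict([[0.60, 50.0, 28.0]]))  # expected: [1] (fire)
```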

But are there limitations to the capabilities of neural networks, or will they be the solution to creating strong AI? Artificial neural networks are biologically inspired, but that does not mean they are necessarily biologically plausible. Many scientists have published their thoughts on the intrinsic limitations of neural networks; one book that received high exposure within the computer science community in 1969 was Perceptrons by Minsky and Papert. Perceptrons brought clarity to the limitations of neural networks. Although many scientists were already aware of the limited ability of a simple perceptron to classify patterns, Minsky and Papert's approach of asking "what are neural networks good for?" illustrated what was impeding the future development of neural networks. For its time, Perceptrons was exceptionally constructive, and it gave the impetus for later research that overcame some of the computational problems restricting the model. An example is the exclusive-or (XOR) problem. The exclusive-or problem contains four patterns of two inputs each; a pattern is a positive member of the set if either one of the input bits is on, but not both. Thus, changing the input pattern by one bit changes the classification of the pattern. This is the simplest example of a linearly inseparable problem. A perceptron using linear threshold functions requires a layer of internal units to solve this problem, and since the connections between the input and internal units could not be trained, a perceptron could not learn this classification. Eventually this restriction was overcome by incorporating extra "hidden" layers, as shown in the sketch below. Although advances in neural network research have solved many of the limitations identified by Minsky and Papert, numerous ones remain: networks using linear threshold units still violate the limited-order constraint when faced with linearly inseparable problems, and the scaling of weights as the size of the problem space increases remains an issue.
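
As a concrete illustration, here is a minimal sketch of the classic perceptron learning rule applied to the four XOR patterns. Because no straight line separates the positive patterns from the negative ones, the weight updates never settle, whereas a network with a trainable hidden layer (like the backpropagation sketch earlier) learns the task easily. The code is illustrative, not drawn from Minsky and Papert's text.

```python
# Single-layer perceptron with a linear threshold unit, trained on XOR.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]   # positive iff exactly one input bit is on

w1 = w2 = b = 0.0
for epoch in range(1, 1001):
    errors = 0
    for (x1, x2), target in zip(X, y):
        out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0   # linear threshold unit
        err = target - out
        if err != 0:
            errors += 1
            # Perceptron learning rule: nudge the separating line toward the error.
            w1 += err * x1
            w2 += err * x2
            b += err
    if errors == 0:
        print(f"converged after {epoch} epochs")
        break
else:
    # XOR is linearly inseparable, so this branch is always reached.
    print("never converged after 1000 epochs")
```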

It is clear that the dismissive views about neural networks disseminated by Minsky, Papert and many other computer scientists have some evidential support, but many researchers have nevertheless set those claims aside and refused to abandon this biologically inspired system.

There have been several recent advances in artificial neural networks that integrate other specialised theories into the multi-layered structure, in an attempt to improve the system's methodology and move one step closer to creating strong AI. One promising area is the integration of fuzzy logic, invented by Professor Lotfi Zadeh. Other notable algorithmic ideas include quantum-inspired neural networks (QUINNs) and the "network cavitations" proposed by S. L. Thaler.

The history of artificial intelligence is replete with theories and failed attempts. It is inevitable that the discipline will progress with technological and scientific discoveries, but will it ever clear the final hurdle?
