Artificial Intelligence

AI (Artificial Intelligence) has become quite popular in modern culture, from devices that you can talk with, to robot pets, to fodder for science fiction. In some circles there is fear that we could unwittingly unleash an AI holocaust upon us (à la "Terminator"). But just what is "AI"? You might answer "it is intelligence that is artificial". Granted, but what, exactly, is intelligence? Herein, I will attempt to demystify and clarify the terms and what we should, and should not, be worried about.

The concept of AI has been seriously bandied about in computer science circles since the early days of digital computing. Because ideas about what AI was were somewhat hazy, many technologies were considered to be in the realm of AI right up until they actually came to pass. Natural language parsers were considered AI by some until they became available in the early 1970s. The same thing happened with voice recognition, inference engines, facial recognition, and a host of other software tools. As each technology came into being, the bar was raised on what could be considered AI.

The reason for this was primarily a poor definition of "intelligence" in the minds of most people. In turn, this was because - even today - we don't have a good grasp of just how intelligence works. Note: in this case, I mean higher animal and human intelligence. Consider, for a moment, how you would define the term. Certainly, it would include such things as the ability to learn, recognize, communicate, reason, discern, draw conclusions, and to act on those conclusions. It also includes such aspects as creativity, the ability to philosophize, and some sort of emotional capacity. And that is probably just a start. Over the years, various models of intelligence have been proposed, but none has approached universal acceptance. Even if we integrated every technology that has been considered AI in the past, we would still fall far short of human intellect (ignoring issues of computing speed).

The alert reader may have noticed that I jumped in the previous paragraph from the general term "intelligence" to "human intelligence". Let's face it, the average man on the street thinks in these terms when "AI" is spoken of. So, we'll run with that. We still don't fully understand the "problem domain" - that is, we don't have a holistic understanding of human intelligence. Until we do, there is no way we could create an artificial version of it. There are essentially two viewpoints on human intelligence. 1) The mechanistic view that intellect is contained entirely within the brain, and all that is needed for AI is to adequately model the brain (or, at least, the part in which conscious thought exists). 2) The view that consciousness exists outside of the corporeal body (as a soul, spirit, etc.) and that the brain is merely a biological interface to the incorporeal being. The first case is promoted by people like Frank J. Tipler (a mathematical physicist and author), who postulates that the brain is a "wet computer" and intellect is software running on it. Thus, a sufficiently advanced program could run on a computer and BE intelligent. Under the second viewpoint, however, it may be impossible for humankind to create an artificial intelligence.

In either case, we don't understand intelligence sufficiently to create an artificial version of it. What if we created an alternative type of intelligence? Again, this is a matter of definition. Human intelligence (and possibly the intelligence of higher animals, such as primates) is the only kind of intelligence we have experience with. Even if you believe in a higher Being, that Being's intellect can only be comprehended through our own limited intellect. So, if we created something different, could we classify it as intelligence at all?

Another question is: is there a difference between something that has the appearance of intelligence and something which IS intelligent? Alan Turing speculated about a test of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human (the "Turing Test"). But would the Turing test be passed if a human was only fooled for an hour, or a day, or a year? Could we simulate an intelligence that might be able to fool everyone into thinking it was intelligent forever? If it could fool us, would it actually BE intelligent? Does it even matter? Again, we'd have to have a definition of the word and then see if it applied to the AI. Frankly, just because something can fool people doesn't make it what it appears. People have been known to fool others into thinking they are someone other than who they are - especially online. But that doesn't mean the person on the other end of the internet is who, or what, they say they are. The Eliza program from the mid-1960s is a conversational bot (program) that might convince someone that it is a real person for a short while. Voice-activated devices, such as Amazon's Alexa, are more modern (and useful) analogs, but no one would mistake them for real people for very long.
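To make this concrete, below is a minimal sketch, in Python, of the keyword-and-template pattern matching that programs in the Eliza tradition rely on. The rules and canned replies here are invented for illustration; the original Eliza used a much larger script of ranked decomposition and reassembly rules.

```python
import re
import random

# A few illustrative pattern/response rules in the spirit of Eliza.
# These patterns and replies are made up for this sketch; the real
# program's script was far larger and more nuanced.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)", ["Please tell me more.", "I see. Go on."]),  # catch-all rule
]

def respond(sentence: str) -> str:
    """Match the input against each rule in order and fill in the reply."""
    text = sentence.lower().strip(".!?")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(*match.groups())
    return "Please go on."

print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"
print(respond("I am tired"))         # e.g. "Why do you say you are tired?"
```

There is no understanding anywhere in this code - only string matching - yet in a casual exchange it can pass, briefly, for a rather distracted conversation partner.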

In the last decade or so, AI has come to mean the ability to recognize patterns that are not readily apparent to humans. Much of it is statistical analysis. As such, in academia, the term has little relation to the popular conception. Data is fed into the software to "train" it to recognize the desired patterns. This is all done by humans, who have to hand-craft the software and training data to accomplish their goals. Hardly the autonomous thinking machines of science fiction. Despite decades of work, all we've really accomplished is more refined versions of the pattern recognition and natural language parsing that were created 50 years ago. Is it possible that some secret government lab somewhere has succeeded in creating a real human-like AI? I highly doubt it. That would require advances in several fields that we would have seen in university settings, but haven't. And I haven't even addressed the amount of computing power that would be needed to simulate a human intellect at human speed.
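As a toy illustration of what "training" amounts to, the following sketch (plain Python, with made-up feature values and labels) builds a nearest-centroid classifier from a handful of hand-labeled points. The "intelligence" is nothing but arithmetic over human-curated data.

```python
# Toy "training": compute one centroid per label from hand-labeled
# 2-D points, then classify new points by the nearest centroid.
# All data here is invented for illustration.

def centroid(points):
    """Average of a list of (x, y) points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(labeled_data):
    """Group the hand-crafted training points by label and average them."""
    by_label = {}
    for point, label in labeled_data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def classify(model, point):
    """Predict the label whose centroid is nearest (squared distance)."""
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

training_data = [((1.0, 1.2), "A"), ((0.8, 1.0), "A"),
                 ((3.9, 4.1), "B"), ((4.2, 3.8), "B")]
model = train(training_data)
print(classify(model, (1.1, 0.9)))  # -> A
print(classify(model, (4.0, 4.0)))  # -> B
```

Modern systems use vastly larger models and datasets, but the shape of the process is the same: humans pick the data, the features, and the goal.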

For the sake of argument, though, let's say that an AI existed that had the intellectual capabilities of an average adult human. Would that be of concern? In itself, no. There is still the matter of free will, which is an additional aspect of human intellect. That is, the ability of the AI to choose its own goals rather than have them programmed into it. This is yet another poorly understood area of human intellect. Why do twins who grow up in the same environment, with the same DNA, develop different tastes in art, subject matter, and vocation? In most cases, the twins live lives that reflect very definite differences in personal preference. Where does this come from? You could simulate that in an AI by having it randomly choose its preferences (see the sketch below). But is that real or simulated? Tied into this is emotional state. Could an AI have actual emotions, or would its emotions simply be simulated? And what would be the point in doing this? Such AIs would have minimal use to those who spend the money on developing them. Having AIs that decide to go off and do their own thing is antithetical to capital investment. Thus, even if we had the technology (which we don't), it is hard to fathom why anyone would create such an AI. And without self-will, and all that goes with it, the fear of a murderous AI is unfounded.
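For what it's worth, simulating such "preferences" is trivial; the question is whether it means anything. The sketch below (with invented categories and options) gives two agents different random seeds, and each "chooses" its own tastes - indistinguishable, from the outside, from genuine preference.

```python
import random

# Fixed options; a random seed stands in for whatever it is that
# makes one twin differ from the other. Categories and options are
# invented for illustration.
PREFERENCES = {
    "art": ["impressionism", "cubism", "photorealism"],
    "subject": ["history", "mathematics", "biology"],
    "vocation": ["teacher", "engineer", "musician"],
}

def choose_preferences(seed: int) -> dict:
    """Deterministically 'choose' a taste in each category from a seed."""
    rng = random.Random(seed)
    return {category: rng.choice(options)
            for category, options in PREFERENCES.items()}

print(choose_preferences(1))  # one agent's "tastes"
print(choose_preferences(2))  # a different set, from identical code
```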

One might argue about the danger of AI becoming self-aware. But being self-aware doesn't mean self-determination, self-protection, or any other form of selfness. A computer system that defends itself can do so without being self-aware, or not defend itself if it is. That is a matter of programming. In other words, there are several separate issues that many people conflate simply because all of these issues exist together in the only kind of intelligence we have experience with. Does AI require all of these human aspects of intelligence to be considered intelligent? And, once again, we come back to the definition of "intelligence".

Thus, before we can declare something "intelligent", we have to have a clear definition of the term - a standard by which to compare the prospective AI. When we say "human intelligence", are we talking raw computational power or something more? What aspects of human personality, flaws, and drives are required? If we are talking about an artificial human intellect that is indistinguishable from a natural human, we are still looking at software advances that are unlikely to happen in the near future, if ever. And we'd need hardware resources and power that might not exist even if we combined every computer that exists today or has ever existed.

So, we have nothing to worry about then, right? No megalomaniac computer program is going to subjugate or destroy humankind in the foreseeable future. True enough. But that doesn't mean that there isn't something of concern. The AI of the modern world is nothing more than a tool that can be used by humans for good or evil. It isn't going to go off on its own, but those using it might. It will allow the identification of groups and individuals for whatever purposes its masters decide. It will allow good things such as development of new medicines. And it will allow bad things such as manipulation of markets for political or monetary gain. You see, the humans running the AI have the motivations and emotions that are lacking in the actual AI. You don't need a self-willed AI to subjugate the human race - you only need selfish people with power and the control of advanced AI. THAT is where the danger exists.

As with every other tool ever invented by mankind, AI can be used for good or evil. It isn't the single tool that will create or destroy our future, but it is part of our future, for good or ill. It is important to understand where the real danger lies.