The truth is the hyperbole about AI is largely based on a lie.
But that has always been true!
A bit of history
A long, long time ago, people imagined that they could build machines that showed human intellect. Most of them came to the conclusion that, along the way, we could make life better and everything artificial in our lives better, faster, and cheaper through automation. So we automated, and the industrial age built a form of automated intelligence capable of performing tasks people previously performed manually, ultimately outperforming us physically in almost every field and every aspect of them.
But with the advent of the electronic computer, people started coming to the notion that we could also automate the thinking part of what people do. Most of them, along the way, came to the conclusion that we could do this for special-purpose repeatable tasks, and that this would make life better and everything easier through automation. So we automated a lot of the well-understood decision-making, building more and more sophisticated control systems to handle the “cybernetic” aspect of automation. Way back in 1948, Norbert Wiener wrote a book called “Cybernetics”, using electrical circuitry and equations to characterize the control systems of animals and machines.
Up to this point in time, things were, shall I say, less visionary and more grounded. But then the big brains of AI got together and started to theorize and make claims. Things seemed so obvious then because, logically, we could solve logical problems with logical mechanisms. So the only real questions were which techniques could be implemented in computers and what the time and storage implications of those implementations were.
There was a concept called a production system.
- A sequence of (condition, action) pairs
- Starting at the first pair:
  - If the condition is met (true), perform the action, then start over at the first pair.
  - Otherwise, go on to the next pair.
- If you run out of conditions to test, stop (or take more input and try again).
Since a production system can implement anything a Turing machine can implement, it is general purpose. So it can do anything a computer can do. Production systems implemented “Artificial Intelligence”, and were touted as such.
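The rule-scanning loop described above can be sketched in a few lines of Python. The rule format and the toy "counting" rules here are illustrative assumptions, not taken from any historical system:

```python
# A minimal production system: each rule is a (condition, action) pair.
# Conditions read the state; actions rewrite it. Scan from the top,
# fire the first rule whose condition holds, then restart the scan;
# halt when no rule applies.

def run_production_system(rules, state):
    while True:
        for condition, action in rules:
            if condition(state):
                action(state)
                break          # go back to the first rule
        else:
            return state       # no condition matched: halt

# Toy rules: count a value down to zero, then record that we finished.
rules = [
    (lambda s: s["n"] > 0,
     lambda s: s.update(n=s["n"] - 1)),
    (lambda s: s["n"] == 0 and not s.get("done"),
     lambda s: s.update(done=True)),
]

print(run_production_system(rules, {"n": 3}))   # {'n': 0, 'done': True}
```

Because the loop always restarts at the first rule after firing, earlier rules take priority over later ones, which is what makes ordering the pairs meaningful.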
Over the intervening years, more and more techniques were identified and implemented to optimize these analytical processes.
Recent breakthroughs
We had so-called expert systems, systems that do recursive descent to examine a complex space, simulation-based methods that model futures and select alternatives based on outcomes, statistical methods, recognition mechanisms that ‘learn’ to adjust parameters using feedback, systems that automatically generate parameters out of a multidimensional space to ‘learn’, pattern recognition systems of all sorts, contextual window limitations, and on and on. And all of this moved the ball forward in the seeming intelligence of the AI. AI got better and better at speech recognition, translation, visual analysis, artwork, and so on across many fields of endeavor.
The ‘big’ recent breakthrough (about 10 years ago) was the ‘transformer’, which takes sequences and selects replacements for elements of those sequences based on statistics derived from large quantities of data. Large, in this case, means much of the accessible information on the Internet, as well as any content collected by the parties building the technology or licensed (or taken without license) from third parties. Sequences, in this case, largely means written language, but the technique is being applied to audio and visual inputs as well. But it is important to remember…
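The heart of that "select replacements based on statistics" step is the attention operation: every element of a sequence is replaced by a weighted mix of all the elements, with the weights computed from pairwise similarity. A bare numpy sketch follows; real transformers add learned projections, multiple heads, and many stacked layers, none of which appear here:

```python
import numpy as np

def attention(q, k, v):
    """q, k, v: (seq_len, d) arrays. Returns the re-mixed sequence."""
    scores = q @ k.T / np.sqrt(q.shape[-1])         # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ v                              # weighted replacement

x = np.random.default_rng(0).normal(size=(4, 8))    # a 4-element "sequence"
out = attention(x, x, x)                            # self-attention
print(out.shape)                                    # (4, 8)
```

Note there is nothing here but arithmetic over statistics of the input; the "intelligence" is entirely in what the learned weights encode.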
A difference in amount is a difference in kind – sort of
The real breakthrough has come more as a result of increased computing and storage than from brilliant new ways of thinking about thinking. More speed and more computers mean we can explore larger chunks of the search space in shorter time frames. More storage and more digital content to draw on mean we can run statistics or similar approaches over larger and larger volumes of data, getting closer and closer in our approximations. But the thing that is missing, one of many such things, is the ability to make judgments about results. Syntactic judgments about sentence forms and language usage computers can make, and have been able to make since their inception. Spell checkers and correctors, sentence structure analysis, anticipating the next word or character or what have you, have all been here since at least the 1970s. But evaluating sensibility in light of a model of reality is something these modern systems seemingly cannot do – yet…
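Anticipating the next word from corpus statistics really is that old an idea. The simplest version is a bigram model: count which word follows which, then predict the most frequent successor. The tiny corpus below is an illustrative assumption:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent successor of `word`, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))   # 'cat' (seen twice after 'the', vs 'mat' once)
```

The transformer is, in this sense, a vastly more elaborate version of the same bet: the most statistically plausible continuation, with no model of reality behind it.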
It comes down to the same things
At the end of the day, we have finite state machines modeling or being used to create so-called ‘intelligence’, a term we do not really define well enough ourselves.
An alien spacecraft comes to Earth in search of intelligent life, looks at people, finds none, and leaves.
And finite state machines are the same things with the same limitations they had in the 1940s.
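A finite state machine is exactly what it was in the 1940s: a fixed set of states, a fixed transition table, and no memory beyond the current state. A minimal sketch, with an illustrative machine that accepts binary strings containing an even number of 1s:

```python
def run_fsm(transitions, start, accepting, inputs):
    """Walk the transition table one input symbol at a time;
    accept if we end in an accepting state."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state in accepting

# Two states, tracking the parity of 1s seen so far.
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

print(run_fsm(transitions, "even", {"even"}, "1001"))  # True  (two 1s)
print(run_fsm(transitions, "even", {"even"}, "1011"))  # False (three 1s)
```

However many states we add, the structural limitation stays the same: the machine can only be in one of a finite number of conditions, no matter how large the number.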
Conclusions
The Terminator movie (and now television) series has kept moving back the date of Judgment Day over the years, because we keep not reaching the age where the artificial is intelligent. They claim it is because of time travel, but that is another scientific concept that fiction portrays and reality fails to deliver (yet – though if we ever do get time travel, that could change…).
Human beings have natural intelligence, if we flatter ourselves with that label. But we have not yet been able to create artificial intelligence via computer. We do have the capacity to build biological brains, and with their integration into networks, Judgment Day may still be realized.