I was a true fan of the MythBusters TV show, in which Jamie and Adam aim to uncover the truth behind popular myths, tales and legends. Today I suggest we apply the same concept to AI, and some of its most popular beliefs. I will rate each “truth” as Busted (meaning the “truth” is a Myth), Possible or Confirmed.
Nowadays, AI seems to be everywhere, in every product, in every commercial, in every answer to any problem. This sudden industry trend must rely on a major disruption to have such an impact… At least in the press! 🙂
Consumers are discovering AI in their everyday lives: Alexa, Cortana, Siri to name just our vocal assistants… Also in our cars that are becoming “intelligent” to indicate the most appropriate and efficient routes to reach our destinations. Even our smartphones seem to “understand” what we are writing to suggest the next word and auto-correct our texts… All of this in less than what? 5 years? What a recent major disruption!
The 60s: The foundations of AI
AI is definitely not a new subject.
The concept itself appeared in the mid-60s, in a statement from Herbert A. Simon, later a Nobel laureate in economics: “Machines will be capable, within 20 years, of doing any work a man can do”.
The term “Artificial Intelligence” itself had been coined by John McCarthy at the Dartmouth workshop in 1956. A few years later, in 1967, Marvin Minsky, an American cognitive scientist who dedicated most of his work to Artificial Intelligence, predicted that “Within a generation, the problem of creating ‘artificial intelligence’ will substantially be solved.” Three years later, he went further: “In from 3 to 8 years, we will have a machine with the general intelligence of an average human being”.
Even earlier than that, in 1959, Arthur Lee Samuel, an electrical engineer trained at MIT and known for having coined the term “machine learning”, developed a checkers-playing program to illustrate the power of his self-learning algorithms. The method he proposed was an alternative to brute-force analysis evaluating every possible move to choose the best, and made it possible for a computer to play against a human. Samuel is considered a pioneer of AI and of its application to computer gaming.
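The core idea behind Samuel's alternative to brute force can be sketched in a few lines: score candidate positions with a tunable evaluation function instead of searching every line of play to the end. The sketch below is purely illustrative (not Samuel's actual program); the features, weights and function names are all hypothetical.

```python
# Toy sketch of heuristic move selection, in the spirit of Samuel's method:
# rather than brute-forcing every possible line of play, score each candidate
# position with a weighted evaluation function whose weights could later be
# tuned from self-play results.

def evaluate(features, weights):
    """Score a position as a weighted sum of simple, hand-picked features."""
    return sum(w * f for w, f in zip(weights, features))

def best_position(candidates, weights):
    """Pick the candidate position with the highest heuristic score."""
    return max(candidates, key=lambda feats: evaluate(feats, weights))

# Hypothetical positions described by two features: (piece advantage, mobility)
weights = [1.0, 0.5]
candidates = [(1, 2), (2, 1), (0, 4)]
print(best_position(candidates, weights))  # (2, 1): score 2.5 beats 2.0 and 2.0
```

Samuel's real program went further, adjusting the weights automatically by playing against copies of itself, which is why it kept improving without human intervention.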
The 90s: The early days of AI
After the foundations of AI and their thunderous promises, the 80s turned into a kind of hibernation for the field. Indeed, AI is not an easy problem, and the lack of computational power and of data slowed the development and validation of the scientific theories. In addition, the computing paradigm shifted from centralized computer systems to general-purpose personal computers, which also affected the development of AI. Although PCs had less power than servers offered, they attracted most of the attention, since they radically transformed the computer industry.
IBM – after having successfully launched the Personal Computer and having been challenged on its own turf by PCs coming from Asia – decided to demonstrate the superiority of High Performance Computing systems and invested in R&D for AI.
Its “Deep Blue” project produced the first computer ever to defeat the reigning world chess champion, Garry Kasparov, in a 6-game match that lasted several days in 1997: 1 win for Kasparov, 2 wins for Deep Blue and 3 draws. It is amusing to note that the PC boom, which had distracted IBM from AI, is the same thing that led the computer giant to invest heavily in AI again a decade later…
2010: the inflection
Solving a checkers or a chess game is one thing. But what about a computer winning popular games that rely not on logic or strategy, but on knowledge and culture?
IBM – again – launched its Watson suite, a software platform dedicated to AI. To demonstrate Watson’s superiority, IBM decided to implement a program that played Jeopardy. The program demonstrated a real ability to understand questions (or reverse questions) in natural language, and a true capacity to search its massive knowledge base, scoring and sorting candidate answers before responding. And guess what? In 2011, Watson surpassed its human opponents…
In 2014, a chatbot with the pseudonym of Eugene Goostman made the headlines for having tricked the judges into passing a Turing test. Journalists too swiftly hailed the machine as ‘intelligent’ and ‘capable of thinking’. After much controversial debate, it turned out that the machine had simply dodged difficult questions by pretending to be an adolescent who spoke English as a foreign language. After this episode, the Turing test was no longer considered the best measure of a machine’s intelligence.
The game of Go is renowned as the most challenging strategy game in the world. Yet in 2016, DeepMind’s AlphaGo defeated Lee Sedol, one of the world’s best players, four games to one.
Machines thus turn out to be superior to humans in any strategy game. Would it be the same with games where winning relies not only on strategy, but also on typically human behavior like lying, or bluffing to be more politically correct… 🙂
This is exactly the challenge taken up by Libratus, an AI system developed by two researchers at Carnegie Mellon in 2017: the first software able to play and win at poker. Although limited to a particular form of poker, heads-up no-limit Texas Hold ‘Em, played one-on-one, AI was starting to behave like a human.
2020: To infinity and Beyond
Mimicking human behavior, including the ability to make jokes and to discern sarcasm, is clearly the next big step for AI to take.
There has already been some work in this direction. In 2015, Hanson Robotics unveiled Sophia, presented as being the first humanoid that behaves and thinks like a human. It is worth noting that Sophia is the first machine to be referred to with personal pronouns. Most articles refer to Sophia as “she” and not as “it”…
Her controversial interview during SXSW in 2016, when she announced her willingness to eradicate humankind, demonstrated that Sophia is, at best, capable of reproducing “natural” human facial expressions, but not at all capable of thinking by herself. Sophia is nevertheless an experiment that helped humans better understand their expectations with regards to their interactions with robots.
More impressive is IBM’s “debater”. The idea was not to reproduce natural human appearance (the debater was just a simple screen); rather, this research project aimed at reproducing humans’ ability to argue, contradict and convince. In 2019, IBM organized a debate between its AI and a human, and it was up to the audience to decide the winner. Although the AI lost the debate, the exchanges were still impressive, and demonstrated the AI’s ability to structure an argument, understand its opponent’s objections and react accordingly.
One of the unique qualities of the human mind is its capacity to create. Matthias Roeder, from the Herbert von Karajan Institute, is leading a group of researchers whose objective is to complete Beethoven’s 10th symphony. Although initial results have been judged “artificial and mechanical”, the AI improves as it learns from the composer’s work, and with some human “tweaking”, we might expect a performance of this symphony in 2020.
Myth or Reality?
AI is definitely not recent: it has existed for over 60 years, if we agree to consider Samuel’s checkers-playing program as the first “intelligent machine”.
It is not a major disruption either since academics and researchers established its foundations, guidelines and principles over 50 years ago…
Our myth seems “busted”.
On the other hand, AI has made such progress over the past couple of decades, transforming machines’ capabilities and the way humans interact with them so deeply, that we may consider ourselves to be facing a new revolution.
Which tends to tip our myth toward “Confirmed”.
Besides deciding whether or not the myth is a myth (we might need an advanced AI system for that! 🙂 ), I am more interested in knowing what factors from the past 20 years have made theories that are half a century old possible today.
In my humble opinion, there are 3 major industry trends that have favored the recent boom in AI:
- Computing power virtualization
- Data massification
- Data diversification
Computing virtualization – aka cloud computing – makes the use of complex infrastructures as simple as ordering a book on Amazon.
Such standardization and commoditization, although putting tremendous price pressure on the industry, made computing widely available, everywhere, at any time, virtually without limits and at affordable prices.
Increases in computing power and miniaturization allowed for a diversification of devices, and processing capabilities are no longer exclusive to mainframes. PCs, smartphones and even everyday objects embed ever more CPU, storage and communication capacity, and, as a result, are becoming smarter.
This drastic price erosion also applies to storage. Today, we have the capacity to store and process everything we produce. More data has been created in the last two years than in the entire history of the human race!
The data explosion is due, not only to social networks and the ease with which we produce and share digital content, but also to the Internet of Things.
The variety of data sources is the last factor that favored the rapid development of AI that we are currently witnessing. Researchers now have access to any measurable physical quantity and the ability to interact with every real-world mechanism.
“Things” do generate tons of data that are used to train AI algorithms.
AI is on its way to deeply modifying our digital world and the way humans interact with machines, and will probably affect our daily lives and jobs.
Will AI become better than humans?
Will machines steal our jobs or deprive us of our capacity to think?
Stay tuned, these are the next myths or realities I am planning to discuss!
I would like to warmly thank Jean-Eric Michallet, who supported me in writing this series of articles, and Kate Margetts, who took the time to read and correct the articles.
I would also like to give credit to the following people, who inspired me directly or indirectly:
- Patrick GROS – INRIA Rhône-Alpes director
- Bertrand Braunschweig – INRIA AI white book coordinator
- Patrick Albert – AI Vet’ Bureau at AFIA
- Julien Mairal – INRIA Grenoble
- Eric Gaussier – LIG Director & President of MIAI