Shall we fear Artificial Intelligence?

AI is growing swiftly, turning a number of sci-fi visions into reality, but also bringing along with it anxiety, questions and concerns.

When we talk about AI, the image that immediately comes to most people's minds is The Terminator, the movie in which "intelligent" machines decide to eradicate humans from the Earth because of their imperfect and irrational decisions.

Sophia's (the first humanoid capable of natural interaction with humans) interview at SXSW in 2016[1] strengthened this fear when she declared that she wanted to destroy humanity, though nobody could say whether it was a joke or not… It is less scary than it may seem, because experts know that Sophia's answers are partly controlled by a human operator. However, for the vast majority of us, the video is simply mind-blowing…

On a smaller scale, people fear that AI will take their jobs, their freedom of thought, or their ability to make decisions on their own.

Can AI be *that* bad? Can humans invent such insidious uses for technology that we need to fear AI from its inception? Here are some thoughts about AI, its current development and future evolution, that may help you figure out whether or not you should fear AI products.

AI stands for Artificial Intelligence.

With the massive buzz around AI, the popular trend is to describe everything as being AI. This is only partially true: a computer that executes code and routine tasks is a form of AI – though a very limited one, restricted to the abilities its programmer has designed and given it.

A piece of software helping a radiologist interpret patients' X-rays, or an automated robot assisting a surgeon with delicate gestures, are more evolved forms of AI.

We all agree we should not fear such AI – probably for the simple reason that they have not been given the ability to *DECIDE* on their *OWN*.

Autonomous AI.

How about self-driving cars? Cars are not capable of making decisions on behalf of humans per se, but we give them permission to do so for simple tasks like driving a vehicle from one point to another.

Boston Dynamics' robots[2] are more disturbing: although they are not able to think, their programmed behavior leads us to perceive them as wild animals. Some of them clearly approach natural movements… Probably nothing to fear yet… At least as long as humans are still in control of these machines.


What if humans continue to empower machines, delegating more and more of our decisions?

What if an autonomous vehicle has to decide between preserving its passengers' lives and sparing a cat crossing the street? Easy choice: passengers saved, too bad for the poor cat… But what if, instead of a cat, it is a child or an elderly person?

Would the car mimic human behavior – which in most cases would be to try to avoid the obstacle, whether it is an animal or a human?

Or opt for a fancier algorithm?

In the cat case, the machine will simply react faster than a human would (saving the passengers at the cost of the cat's life).

But in the case of a person? What would it do? Are we ready to buy cars that make life and death decisions based on the sum of the life expectancies of their passengers compared to that of the child?
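To make the dilemma concrete, here is a minimal, purely hypothetical sketch of the kind of utilitarian rule evoked above – comparing summed remaining life expectancies. The function name, ages and the fixed life-expectancy figure are all illustrative assumptions, not a real system and certainly not a recommendation:

```python
def swerve_decision(passenger_ages, pedestrian_age, life_expectancy=80):
    """Hypothetical utilitarian rule: compare remaining life-years at stake.

    Returns True if the car should swerve (endangering its passengers)
    to spare the pedestrian. An illustration of the dilemma, nothing more.
    """
    passengers_years = sum(max(0, life_expectancy - a) for a in passenger_ages)
    pedestrian_years = max(0, life_expectancy - pedestrian_age)
    return pedestrian_years > passengers_years

print(swerve_decision([70, 72], 8))   # True: the child has more years ahead
print(swerve_decision([30, 35], 8))   # False: passengers' remaining years dominate
```

That a few lines of arithmetic can encode a life-and-death policy – and that changing one constant flips the outcome – is precisely what makes the ethical and legal questions below so pressing.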

Will Asimov's famous three laws of robotics[3] be adopted as the ultimate reference?

All of this raises many ethical, moral and legal questions that have not been answered… yet…

See how you measure up on such moral/ethical decisions[4].

On the other hand, it is interesting to note that most autonomous-vehicle accidents[5] are caused by human error rather than by the machines themselves. Is AI better than humans after all?

AI education.

Recent experiments have shown that AI is capable of the worst as well as the best.

Microsoft decided to shut down its Tay chatbot experiment[6] because the bot started parroting Adolf Hitler's neo-Nazi doctrines. The AI developed this behavior in less than 24 hours, after being fed Twitter and other social-network content.

This experiment demonstrated that any AI fed and educated with improper content will ultimately develop improper behavior. Is that really a surprise? Education works the same way for humans: if you fail to properly educate your child, it is unlikely that the child will become a good person.
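A toy example makes the point. The sketch below is nothing like Tay's actual architecture – it is just a naive word-overlap classifier with invented training data – but it shows the mechanism: whatever associations sit in the training corpus come straight back out at prediction time.

```python
from collections import Counter

def train(examples):
    """Count word occurrences per label over (text, label) pairs."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def predict(counts, text):
    """Pick the label whose training texts share the most words with the input."""
    words = text.lower().split()
    return max(counts, key=lambda label: sum(counts[label][w] for w in words))

# A "poisoned" corpus: hostile labels deliberately planted in the training data.
corpus = [
    ("humans are great", "friendly"),
    ("i love people", "friendly"),
    ("humans are terrible", "hostile"),
    ("i hate people", "hostile"),
]
model = train(corpus)
print(predict(model, "people are terrible"))  # the model echoes its training data
```

Garbage in, garbage out: the model has no opinion of its own, only a reflection of what it was taught – exactly what 24 hours of Twitter did to Tay at far greater scale.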

Is AI actually closer to human behavior than what we think?

In 2015, Strasbourg University Hospital studied 17,000 anonymized breast cancer cases[7] in collaboration with Quantmetry, a Big Data consulting company. The resulting AI clearly established a correlation between hyperthyroidism or type II diabetes and breast cancer, leading to a number of medical publications and recommendations that modified the treatment protocol for women with breast cancer… along with a "100% reliable" method for avoiding death from breast cancer: becoming a heavy smoker (!). Indeed, the heavy-smoking women in the study died from lung cancer before developing breast cancer…

This clearly shows that AI still needs to learn some common sense in order to challenge its own conclusions.

An entire paradigm shift.

AI is likely to introduce an entire paradigm shift in the computer science world.

Traditionally, software that may endanger people's lives – like the software embedded in planes, for example – is validated and proven using formal methods, to check that it behaves as expected in every situation that can occur. This formal validation process works because such programs only have to cope with a finite number of situations.

AI programs, in contrast, may execute exactly as designed and still produce unexpected results, because the AI "thinks" incorrectly due to improper training/education, as the Microsoft Tay experiment showed.

Advanced research studies (for example IBM's work on bias in AI[8]) have brought us the concept of unbiased AI. The idea is to validate not only the software, but also the data the AI learns from, evaluating its potential bias in order to minimize it.
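One of the simplest such data checks is to compare how often each group in the training data receives a positive label. The sketch below uses invented data and a hand-rolled function (real toolkits such as IBM's AI Fairness 360 provide many more metrics than this):

```python
from collections import Counter

def group_positive_rates(records):
    """Rate of positive outcomes per group: a basic bias check on training data.

    records: iterable of (group, outcome) pairs, with outcome in {0, 1}.
    """
    totals, positives = Counter(), Counter()
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical training data: group A gets positive labels twice as often as B.
data = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
rates = group_positive_rates(data)
print(rates)  # group A's positive rate is double group B's
```

If such a gap shows up before training even starts, any model fitted to the data can be expected to reproduce it – which is why validating the data matters as much as validating the code.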

Just to put things in perspective and provide some food for thought: a few years ago, the vast majority of people objected to the surveilled world George Orwell depicted in his novel 1984 – a vision the Internet of Things now brings to mind.

Nowadays, with more than 23 billion connected objects[9] – most of them carried by humans, everyone accustomed to having their habits tracked by mobile phones, wearables and smart things, and their conversations listened to by Alexa, Google and Siri – nobody fears "Big Brother" anymore.

AI development – much like what happened with IoT – will require some market shaping and education: industrial actors need to catch up with the technology and the possibilities it represents, and end-users need to become familiar with its benefits, in order to achieve broad acceptance without fear.

Personally, I have still not decided whether I should fear AI or not. However, I am convinced of the technology's potential, and fully aware that we have not (yet) explored it fully, in particular on the ethical and legal fronts.

My faith in humanity leads me to think that we will only retain the best of AI, and forget its "Terminator" dark side…


I would like to thank Jean-Eric Michallet who supported me in writing this series of articles, and Kate Margetts who took time to read and correct the articles.









