Artificial Intelligence: The end of the human race?


Patrice Caine:
During the height of summer, an open letter to the United Nations warning of the dangers of Artificial Intelligence, signed by Tesla CEO Elon Musk and more than 100 other entrepreneurs, caused quite a stir in the media and the scientific community. The letter gave rise to a heated online debate between the Tesla CEO and Facebook founder Mark Zuckerberg. Its signatories were alerting the international community to the possible defence applications of AI, and in particular to the threat of lethal autonomous weapons, or “killer robots”. It is within this context that I wish to address the theme of Artificial Intelligence here at the Women’s Forum for the Economy & Society Global Meeting in Paris (#WFGM17). The scope of the subject is constantly evolving, and we at Thales owe the most recent advances in this domain to a woman at the CNRS/Thales research centre in Palaiseau.
To claim that AI could “spell the end of the human race” may grab headlines, but the underlying debate over moral and political limitations on the use of intelligent weapons is legitimate, provided the discussions are clear-headed and dispassionate. Artificial Intelligence will clearly offer new possibilities in reconnaissance, identification, raising the tempo of operations and potentially in decision-making too. But the decision to engage in combat will always be a political decision, and as such it will always depend on whether or not humans actually want to delegate that responsibility to a machine.
A moral and political response is therefore needed to address the legal vacuum that exists today.
The debate about establishing an international legal framework for such weapons began at the United Nations in 2016 and must continue, just as the discussions on nuclear and biological weapons led to the restrictions in place today.
Let’s try to remain modest and lucid about the level of maturity of AI. Even the most powerful artificial neural networks are still a very long way from matching the capabilities of the human brain, which is much more sophisticated … and most definitely one of the most complex objects in the universe.
In fact, there is a difference between “weak AI” and “strong AI”. Strong AI would be endowed with consciousness, like human beings; however, all the AI platforms we know today, even the most advanced, are examples of weak AI, and this is likely to remain the case for a long time to come, for the simple reason that there is no evidence that consciousness would spontaneously emerge just because AI platforms become bigger and more complex.
Consciousness is what makes us human, what distinguishes us from robots, and it is also the crucial factor in making a decision. Hollywood fantasies about machines breaking free from their creators and AI platforms destroying the human race remain in the realm of science fiction.

“Embracing our humanity” is, moreover, one of the themes of the Women’s Forum. It implies that some of the answers to the disruptions in the world are to be found within ourselves. In a sense, it is up to us as humans to find the answers to the questions posed by AI and its use.

For in real life, an AI platform is still a machine and, like any other machine, it needs to be kept in check if it is to be trusted. That will mean controlling the quality and integrity of the data it uses, and ensuring that it learns in appropriate ways. It will also mean taking steps to avoid the kind of dysfunctional situations reported recently, where robots began to exhibit racist behaviour simply because some of the data they were trained on was racially prejudiced.
(Patrice Caine is Chairman and CEO of Thales).
