We Need To Control AI Before It Controls Us

Shalini Verma


Stephen Hawking famously warned that full Artificial Intelligence (AI) could spell the end of the human race. Such a dystopian end is hard to believe. Perhaps we will never get that far.
But AI is getting fairly good at impersonating humans. While AI is nowhere close to passing the Turing test, there are bots that write like us, sound like us, look like us, and even make music like us. This exposes the world to widespread socio-political problems. It's time to build a robust framework for regulating AI, and there are good reasons for it.
AI is at the heart of global power struggles. The US and China have been locked in a fight for AI supremacy, while small yet highly digital countries like Singapore and the UAE have bet on AI for competitive advantage. There is precedent for science getting dragged into geopolitics: the race to acquire nuclear weapons knowhow defined the Cold War, as the world balanced perilously on the doctrine of MAD (Mutually Assured Destruction). Countries have attached a similar value to AI for long-term leverage.
Yet the industry fears that sweeping regulatory laws will have a chilling effect on innovation. Having grown up in a regulated India, my peers have viewed regulation with strong suspicion, if not contempt. And indeed, it was the absence of regulation that gave birth to the Internet.
In the US, Section 230 of the 1996 Communications Decency Act is believed to have cleared the decks for the rise of the Internet by ensuring that websites were not liable for content posted by their visitors. While the law did protect the fledgling Internet, we all know how that played out 20 years later. A 16-month US congressional investigation of Amazon, Apple, Google, and Facebook found that they hold monopoly power. The 451-page report says they expanded their dominance through self-preferencing, predatory pricing, and exclusionary conduct.
Big Tech is the epicentre of AI innovation, which it licenses to other businesses. It enjoys a virtuous cycle: customer data from these licensed services feeds back to further improve its machine learning models. It has snagged top AI researchers from universities, sometimes hiring entire departments. AI will allow a handful of companies and countries to hold a dominant position. It will also put powerful tools in the hands of dark web criminals. Machine learning in particular needs to be constantly audited and monitored.
Yet not everyone is convinced. EU regulators are already being criticised for focusing too much on regulation at the cost of innovation. A 2017 paper titled Artificial Intelligence and Public Policy, published by the Mercatus Center at George Mason University, made a case against precautionary regulation of nascent AI. Three years later, OpenAI's latest model gives us a whiff of how well AI can produce original text, and progressive Generative Adversarial Networks can be trained to produce an entirely new face that looks eerily human.
If AI regulation is delayed, we run the risk of 'regulatory capture', in which regulators serve the interests of the entities they are meant to regulate. Mark Zuckerberg has conveniently called for regulation of the Internet because it would help a well-entrenched Facebook keep out competition. With no clear demarcation between producers and consumers, and with data moving across borders, regulation is now tough without international cooperation.
Governments and the research fraternity agree that AI is showing bias. The recent AI summit hosted by the Indian government was essentially a public discourse on responsible AI. Amazon had to terminate an AI recruitment tool that had trained itself to be biased against women. Bias in AI will place a new prejudiced filter on our digital lives.
We can take pointers from the US antitrust report on Big Tech, which recommended more resources for antitrust agencies and new laws. The report acknowledged that antitrust agencies failed to take corrective action on crucial occasions when monopolistic digital platforms killed off their competition. The law should have been updated once the socio-political ramifications of radical social media content were in plain sight.
Laws need to be nuanced enough to walk the fine line between innovation and misuse. Malpractices like wilful impersonation by bots, and risks in autonomous AI and brain-computer interfaces, should be clearly defined, and policymakers need to specify actions against such risks. Smart Dubai's AI Ethics Advisory Board has discussed diagnostic tools for ethical AI, and the EU has proposed an 'ecosystem of trust' to monitor risks. We need to start with high-risk sectors like finance, automotive and healthcare. Setting mere norms is no longer enough.
Removing bias and lies, and instituting accountability and human oversight, are core to regulating AI. This can only be done when countries cooperate with one another and build a common framework for regulation.

(Ms. Verma is CEO of PIVOT Technologies)
