AI should not be attributed legal personhood

Daniel Schlaepfer and Hugo Kruyne:
By the end of April, the European Commission will announce “an initiative on Artificial Intelligence and robotics”. We have argued that the EU’s MiFID II regulation provides a model of financial regulation for the rest of the world to follow. Last December, Guy Verhofstadt wrote of Artificial Intelligence that “…the European Union should start establishing rules to protect all Europeans – and give the rest of the world a model to follow.” From the perspective of financial services, then, now is the time for North America to be cognisant of Europe’s internal debate on AI regulation.
Financial services is likely to be the industry most severely disrupted by the arrival of AI, and this could fundamentally undermine the twin pillars of trust and responsibility on which the financial system relies. When AI controls financial services and humans no longer take an active role in decision making, serious questions arise about who is responsible for mistakes. Accordingly, the most prominent debate on robotics and AI currently under way at EU level concerns whether or not robots should be attributed legal personhood – as corporations can be – in order to ascribe liabilities to them.
Endowing robots with personhood, however, is clearly the wrong approach. Paradoxically, the philosophy that informs our thinking on AI regulation should be one that prioritises human agents over computerised actors.
The reason is simple. Whereas present algorithmic trading strategies are intelligible and explainable, the ambition of contemporary AI engineers is to develop machine learning: essentially ‘decision-making’ through pattern recognition. This can produce code whose output even its creators can no longer clearly explain.
In finance, then, the highest ambition is to teach machines to ‘learn’ how to make profit: to program predictive capabilities into systems so that they anticipate price changes before they happen in the market. Yet while clients can be won over by profits in their portfolios, regulators will not be. So who will be held accountable if an AI engages in activity that violates regulations?
One answer has been suggested in a European Parliament recommendation to the Commission on Civil Law Rules on Robotics. In Section 59(f), Members of the European Parliament invite the Commission to consider granting legal personhood to the more sophisticated algorithms and computers that carry out these trades.
This should be worrying. It is in the interests of both Europe and North America to ensure that the “European approach to Artificial Intelligence” is in no way animated by the spirit of Section 59(f).
Unleashing Artificial Intelligence into the financial industry – let alone granting it personhood – to act as a trading decision-maker could pose great risks. Unchecked, its capabilities might gradually erode the very fabric of financial services, reliant as they are on human incentives and liabilities.
There is a strong argument that legal personhood should be conferred on an entity only if doing so is consistent with the overarching purposes of the legal system. Ascribing legal personhood to robots fails that test for the same reason that it is incompatible with the underlying pillars of finance: both the legal system and finance require moral agents to be responsible for their actions in order for the concepts of right and wrongdoing to be intelligible.
Responsibility must rest with people. But with whom? Some argue that more responsibility should lie with the software engineers: in the same way that one must be a licensed engineer to build a bridge, so too should AI’s pioneers be certified in some manner. If the technology is fundamentally unpredictable, however, how would certification make its application to markets any safer? Moreover, why should engineers bear responsibility for the ways in which other people use their technology? Quite simply, the buck must stop with the firms that apply the AI. The counter-intuitive consequences of ascribing legal personhood to AI can be demonstrated by reductio ad absurdum.
The EU suggestion is to ascribe differing degrees of rights and responsibilities to an AI depending on its ‘capacity’: the greater the perceived level of autonomy, the more blame would fall on the machine itself rather than on any third party, such as the ‘employer’, manufacturer, or software engineer. The problem is that the rights and responsibilities of legal agents do not exist as a matter of degree. Of course, there are different degrees of culpability, but the agent to which a specified degree of culpability is ultimately ascribed is wholly responsible for that portion of guilt, not partially or equivocally. A legal person must be a person, tout court.
Could one distribute blame among a robot, its designer, and its employer? If so, some recipients of the punishment would be sentient while others would not be. What, then, would punishment, or crime, mean?
These are not highfalutin philosophical questions. Artificial Intelligence is increasingly being applied across the financial industry, and we need only look to the recent past to find some important counterfactual, or indeed hypothetical, questions.
One of the biggest funds at Man Group Plc is said to make roughly half of its profits through AI, despite the technology controlling only a small proportion of its assets. Firms such as Point72 are looking to AI to automate decisions typically taken by money managers. Even UBS, the world’s biggest wealth manager, is said to be building ‘virtual agents’ to conduct investment research.
These examples should not automatically be viewed as progress. What if, ten years ago, an AI had worked out that great returns could be accumulated by approving copious subprime mortgages? What if the government had bailed out the machines? Just this week, analysts at JPMorgan accused 12 artificial intelligence hedge funds of playing a major part in the equities sell-off that hit global markets in early February.
(Daniel Schlaepfer is President and CEO, and Hugo Kruyne is Vice-President and COO, of Select Vantage Inc, a leading international day trading firm that, unusually for the sector, relies on human decision making over complex algorithms.)
