We humanize artificial intelligence at our peril
Banks and financial institutions are replacing human traders with artificial-intelligence systems. These systems promise to shrink workforce requirements and improve trading efficiency through better trend prediction. It is therefore evident why firms are investing heavily in AI. Governments are backing it too: the EU has called for an extra $24bn in AI funding, and the US has already committed over $15bn to AI.
But what are the consequences of this rapid expansion of AI in the financial space? No legal framework and no moral guidance has yet been presented that is adapted to the mutable and increasingly independent functions being delegated to artificial intelligence. Late last month, the EU released a document “to provide a first mapping of liability challenges that occur in the context of emerging digital technologies.” The document outlined intentions to study liability further but focused mainly on AI’s use in hardware, devoting only one line to its use in trading.
This lack of regulatory and legislative attention should be worrying. Cases of individual damage caused by rogue autonomous hardware are a problem, but they are nothing compared to the potentially ruinous consequences of basing an entire economic system on amoral, autonomous high-frequency trading.
These wildly powerful processors, executing trades on their own, are undeterred by legal challenges or threats from regulatory bodies. Deep learning and self-improvement might appear to be qualities that endow AI with independence, but legally it cannot be defined as independent.
Part of the problem is that AI is perceived as independent and intelligent. The jump from supposed intelligence to reasoning is small, so we are wont to believe the language often associated with artificial intelligence in the media. “Artificial intelligence”, “neural networks”, “self-improving algorithms” and “deep learning” all deepen the fantasy that we are dealing with human-like entities, capable of reasoning and independence. Humanizing AI is dangerous: while AI might be able to adapt and learn autonomously (that is, according to its own rules), it does not act independently (that is, without stimulus).
The answer to the question of legal liability for AI should not be shaped by our humanization of its functions. Instead, we must remain alert to liabilities throughout AI’s expansion onto the trading floor. The established relationship between the faulty product, the producer and the consumer is being muddied, and it is important that an arm’s-length approach to trading bots not take hold. If the fear of regulators cannot deter AI, it must at least deter its creators.
Hugo Kruyne, May 4, 2018.