Technology has reduced the role that client networks and personal insight play in the advice that financial institutions provide to their customers, or so argued Bloomberg’s Matt Levine last week.
His analysis comes amid the revelation that Morgan Stanley will now use artificial intelligence (AI) to send its clients customised e-mails when markets are in turmoil. These e-mails would include personal touches, drawn from scans of social media and other information, to calm jittery nerves and keep clients invested.
The move has drawn criticism for its perceived emotional manipulation. AI that pretends to be human and, worse still, uses emotion to persuade us to part with our money resides firmly on the wrong side of the uncanny valley.
Levine characterises the idea of robots weaponising emotion to convince people to stay invested as a dystopian feature of “late capitalism”. But to our minds the larger problem here is that blindly advising investors during a downturn constitutes irresponsible financial practice.
As we recently wrote in EurActiv, when AI controls financial services, and humans no longer take an active role in decision making, serious questions arise about who is responsible for mistakes.
AI decision making poses great risks to financial services if it erodes human incentives and liability. Moreover, when AI gives advice, it is not bound by the moral and ethical constraints under which financial advisers should operate.
The financial services industry will face disruption as AI technology evolves. But the inappropriate use of this technology could fundamentally undermine the twin pillars of trust and responsibility on which the financial system relies.
The European Union’s new Markets in Financial Instruments Directive (MiFID II) goes some way towards re-elevating traders above computers, and this will be a healthy thing for finance in the long run.