Traditional financial and investment management research and workflows will be heavily disrupted by AI over the next 10 years, a symposium of leading researchers heard.
At the first Gillmore Centre for Financial Technology Symposium, which was held online because of the coronavirus pandemic, fintech researchers and fund managers came together to highlight the need for the next generation of AI tools to be explainable to those using them.
The Gillmore Centre for Financial Technology is newly established at Warwick Business School to research emerging technologies in the financial sector such as AI, blockchain, mobile payments, cryptocurrencies and crowdfunding platforms.
It was launched in October 2019 thanks to a £3 million donation from Clive Gillmore, founding member and CEO of Mondrian Investment Partners and an alumnus of the University of Warwick.
Ram Gopal, Professor of Information Systems & Management and Director of the Gillmore Centre for Financial Technology, said: “Despite moving the symposium online it proved invaluable in detailing the emerging research and practices of AI in the finance industry. One of the themes of the talks was the need for explainable AI.
“This is a fast-emerging research area, with many interesting routes being taken. There is a need to build more reliable and accurate AI systems that not only help investment decision-making, but also reveal why these complex models have arrived at their decision. It is vital for both users and for improving the systems.
“Technology is fast changing the face of finance and investment. AI and machine learning tools are becoming more prominent in financial decision-making, which is creating opportunities as intermediaries are pushed aside, but also challenges that the Gillmore Centre for Financial Technology is now looking to investigate with its partners and collaborators.”
The symposium saw talks from academics at City, University of London, UCL, Warwick Business School, and University College Cork, alongside fintech firms RavenPack, a data analytics platform for finance professionals, DataRobot, a machine learning platform, and AI-powered investment manager Rothko Investment Strategies.
Daniel Philps, moderating the symposium, revealed that “financial technology has seen two arms races begin to emerge; the first concentrating on the design of algorithms to make decisions, the second on scaling machine learning and expanding data resources.” The symposium heard that there are trade-offs to weigh in both of these areas.
While deep learning, powered by neural networks, has delivered AI’s greatest successes, these systems tend to be black boxes, where it is often unclear how outcomes are reached.
Artur d’Avila Garcez, of City, University of London, argued that neural-symbolic systems, which bring together neural networks with rules-based AI, are a vital step forward in making these black-box decisions explainable.
Do AI decisions need to be explainable?
Professor Garcez said: “Deep learning is the most accurate but least understandable form of AI. However, with knowledge extraction after the black box decision has been made, we can improve this and rebuild trust in AI.
“By combining rules-based AI with deep learning we can add knowledge, such as in the form of a decision tree, to show how a decision was arrived at. This is not only good from a regulatory perspective, but can also be used to improve system performance, so one can inspect the decision and learn from its mistakes. This will improve consumer confidence and also build more energy efficient AI systems as we rely less on vast amounts of data and more on knowledge.”
“As a practical example, Playtech, an online gambling software company, has to follow a regulatory framework to protect customers, so it uses AI to predict if a player might be at risk of harm and should be recommended to take a break from the game. Neural networks and random forests give the best results at predicting when somebody should self-exclude based on data such as frequency of play and betting intensity. By extracting knowledge from the black box using a decision tree, we can explain how a player is notified, we can check the system for unfair bias, and we can fix and improve the system.”
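The knowledge-extraction idea described above can be sketched in code: train a black-box model, then fit a shallow decision tree to the black box's own predictions so its learned rules can be read and audited. This is a minimal illustration only; the feature names, thresholds, and synthetic data are assumptions for demonstration, not Playtech's actual system.

```python
# Sketch: explain a black-box "at-risk player" model with a surrogate decision tree.
# Features and the risk rule are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
# Synthetic features: sessions per week, average stake per bet
X = np.column_stack([rng.poisson(5, n), rng.gamma(2.0, 10.0, n)])
# Hypothetical ground truth: frequent, high-stakes play flags a player as at risk
y = ((X[:, 0] > 8) & (X[:, 1] > 25)).astype(int)

# 1. Train the black-box model on the labelled data
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Fit a shallow decision tree to the black box's *predictions*, not the
#    labels: the tree approximates the model, so its readable rules explain
#    (and let us audit) what the black box has actually learned
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Print the extracted rules as human-readable if/else conditions
print(export_text(surrogate, feature_names=["sessions_per_week", "avg_stake"]))

# Fidelity: how often the surrogate agrees with the black box on this data
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

The surrogate's printed rules can then be checked for unfair bias or errors, which is the "fix and improve the system" step the talk describes.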
Tillman Weyde and Adam White, of City, University of London, each introduced competing approaches that allow neural network outcomes to be explained.
Mr Philps and Raj Shah, of Rothko, argued that traditional quantitative fund managers are adopting similar factor-based strategies to build portfolios of stocks, which leads to crowding and shrinking returns.
“Rothko’s AI approach aims to combine human-like stock-selection with the scale of a quantitative approach,” said Mr Philps. “AI can remove behavioural biases and retain the memories of past mistakes and opportunities, while not losing information in the way traditional quant-based approaches tend to.”
Mr Shah added: “Rothko uses an ensemble approach, where more information and perspectives can drive better outcomes and, we believe, the generation of better returns and alpha.”
Peter Simon demonstrated DataRobot’s automated machine learning system and noted that, despite the power of commercially available machine learning, “there is no market-predicting magic box”.
Participants also heard that scaling machine learning and AI using cloud computing can allow many complex tasks to be batched. Christos Filelis-Papadopoulos, of the Democritus University of Thrace in Greece, espoused the benefits of distributed computing but cautioned that “of the bottlenecks that exist for deploying machine learning on the cloud, network speed is currently the biggest issue”.
Natural language processing applied to text data from news was introduced by Peter Hafez, of RavenPack. Mr Hafez showed how the sequence of COVID-19 related news from around the world had rippled through the market sell-off and added: “This approach to sentiment scoring aims to allow traders to tap the momentum in stocks before it happens.”
The Gillmore Centre for Financial Technology is adopting a multi-disciplinary approach, inviting academics from Finance, Information Systems & Management, Behavioural Science, Operations, and other groups, with external researchers, industry partners, and governing agencies also invited to take part in its cutting-edge research.
Professor Gopal added: “Explainable AI is an important new area of research, and one of the objectives of this symposium was to support and foster collaborative research, so I am pleased to see this happening. We had a diverse audience of industry professionals, researchers from Warwick and outside the University, plus PhD students, and the talks have inspired many discussions online.
“We will now be looking to organise more workshops and symposiums on cryptocurrencies, AI and blockchain.”
This article was written by Ram Gopal and originally published on the Warwick Business School website.