Towards Inverse Reinforcement Learning for Limit Order Book Dynamics

Roa Vicens, Jacobo and Chtourou, Cyrine and Filos, Angelos and Rullan, Francisco and Gal, Yarin and Silva, Ricardo (2019). Towards Inverse Reinforcement Learning for Limit Order Book Dynamics. In: "ICML 2019 Workshop: AI in Finance: Applications and Infrastructure for Multi-Agent Learning at the 36th International Conference on Machine Learning", 10 Jun 2019 – 15 Jun 2019, Long Beach, California.

Description

Title: Towards Inverse Reinforcement Learning for Limit Order Book Dynamics
Author/s:
  • Roa Vicens, Jacobo
  • Chtourou, Cyrine
  • Filos, Angelos
  • Rullan, Francisco
  • Gal, Yarin
  • Silva, Ricardo
Item Type: Presentation at Congress or Conference (Lecture)
Event Title: ICML 2019 Workshop: AI in Finance: Applications and Infrastructure for Multi-Agent Learning at the 36th International Conference on Machine Learning
Event Dates: 10 Jun 2019 – 15 Jun 2019
Event Location: Long Beach, California
Title of Book: ICML 2019 Workshop: AI in Finance: Applications and Infrastructure for Multi-Agent Learning at the 36th International Conference on Machine Learning
Date: 11 June 2019
Subjects:
Freetext Keywords: Inverse Reinforcement Learning
Faculty: E.T.S.I. Telecomunicación (UPM)
Department: Señales, Sistemas y Radiocomunicaciones
Creative Commons Licenses: Attribution - ShareAlike

Full text

PDF (434 kB) - available for download and preview

Abstract

Multi-agent learning is a promising method to simulate aggregate competitive behaviour in finance. Learning expert agents’ reward functions from their external demonstrations is hence particularly relevant for the subsequent design of realistic agent-based simulations. Inverse Reinforcement Learning (IRL) aims to acquire such reward functions through inference, allowing the resulting policy to generalize to states not observed in the past. This paper investigates whether IRL can infer such rewards from agents acting within a real financial stochastic environment: the limit order book (LOB). We introduce a simple one-level LOB, in which the interactions of a number of stochastic agents and an expert trading agent are modelled as a Markov decision process. We consider two cases for the expert’s reward: either a simple linear function of state features, or a complex, more realistic non-linear function. Given the expert agent’s demonstrations, we attempt to discover its strategy by modelling its latent reward function using linear and Gaussian process (GP) regressors from previous literature, as well as our own approach based on Bayesian neural networks (BNN). While all three methods can learn the linear case, only the GP-based and our proposed BNN methods are able to recover the non-linear reward. Our BNN IRL algorithm outperforms the other two approaches as the number of samples increases. These results illustrate that complex behaviours, induced by non-linear reward functions amid agent-based stochastic scenarios, can be deduced through inference, encouraging the use of inverse reinforcement learning for opponent modelling in multi-agent systems.
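To make the reward-modelling step concrete, the sketch below shows one common way to realise a Bayesian neural network reward regressor: Monte Carlo dropout in PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation; the class name, architecture and hyperparameters (BNNReward, two hidden layers of 64 units, dropout rate 0.1, 50 posterior samples) are hypothetical, and only the idea of inferring a latent reward r(s) over LOB state features with predictive uncertainty comes from the abstract.

    # Illustrative sketch (not the paper's code): an approximate-Bayesian
    # reward model r(s) using Monte Carlo dropout, mapping LOB state
    # features to a scalar reward estimate with uncertainty.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BNNReward(nn.Module):  # hypothetical name and architecture
        def __init__(self, state_dim: int, hidden: int = 64, p_drop: float = 0.1):
            super().__init__()
            self.fc1 = nn.Linear(state_dim, hidden)
            self.fc2 = nn.Linear(hidden, hidden)
            self.out = nn.Linear(hidden, 1)
            self.p_drop = p_drop

        def forward(self, s: torch.Tensor) -> torch.Tensor:
            # Dropout is kept active at inference time (training=True) so
            # that repeated stochastic forward passes approximate draws
            # from a posterior over reward functions.
            h = F.dropout(torch.relu(self.fc1(s)), self.p_drop, training=True)
            h = F.dropout(torch.relu(self.fc2(h)), self.p_drop, training=True)
            return self.out(h)

        @torch.no_grad()
        def predict(self, s: torch.Tensor, n_samples: int = 50):
            # Posterior mean and standard deviation of the inferred reward.
            draws = torch.stack([self(s) for _ in range(n_samples)])
            return draws.mean(0), draws.std(0)

Repeated stochastic forward passes yield a posterior mean and standard deviation over the reward at each LOB state; an IRL outer loop could then fit these reward estimates so that the induced policy matches the expert's demonstrations.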

More information

Item ID: 67303
DC Identifier: https://oa.upm.es/67303/
OAI Identifier: oai:oa.upm.es:67303
Official URL: https://arxiv.org/pdf/1906.04813.pdf
Deposited by: Jacobo Roa Vicens
Deposited on: 02 Jun 2021 10:01
Last Modified: 02 Jun 2021 10:01