RT Conference Proceedings
SR 00
A1 Sánchez Fernández, Matilde
A1 Valcarcel Macua, Sergio
A1 Zazo Bello, Santiago
T1 A unified framework for linear function approximation of value functions in stochastic control
YR 2013
FD 09/09/2013 - 13/09/2013
SP 1
OP 5
K1 Approximate dynamic programming, Linear value function approximation, Mean squared Bellman Error, Mean squared projected Bellman Error, Reinforcement Learning
AB This paper contributes a unified formulation that merges previous analyses of the prediction of the performance (value function) of a certain sequence of actions (policy) when an agent operates a Markov decision process with a large state space. When the states are represented by features and the value function is linearly approximated, our analysis reveals a new relationship between two common cost functions used to obtain the optimal approximation. In addition, this analysis allows us to propose an efficient adaptive algorithm that provides an unbiased linear estimate. The performance of the proposed algorithm is illustrated by simulation, showing competitive results when compared with state-of-the-art solutions.
T2 21st European Signal Processing Conference (EUSIPCO)
ED Marrakech, Morocco
AV Published
LK http://oa.upm.es/28942/
UL http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6811729&tag=1
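Note: the two cost functions named under K1 are standard objectives in linear value function approximation. As a reference sketch only (these are the conventional definitions from the temporal-difference learning literature; the paper's own notation may differ), with feature map \(\phi\), weight vector \(\theta\), Bellman operator \(T^\pi\) for policy \(\pi\), feature matrix \(\Phi\), and \(D\) the diagonal matrix of the stationary state distribution:

\[
  V_\theta(s) = \theta^\top \phi(s), \qquad
  \mathrm{MSBE}(\theta) = \left\lVert V_\theta - T^\pi V_\theta \right\rVert_D^2, \qquad
  \mathrm{MSPBE}(\theta) = \left\lVert V_\theta - \Pi\, T^\pi V_\theta \right\rVert_D^2,
\]

where \(\Pi = \Phi \left(\Phi^\top D \Phi\right)^{-1} \Phi^\top D\) is the weighted projection onto the span of the features. The abstract's "new relationship between two common cost functions" refers to these two objectives.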