TY - CONF
ID - upm28942
UR - http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6811729&tag=1
A1 - Sánchez Fernández, Matilde
A1 - Valcarcel Macua, Sergio
A1 - Zazo Bello, Santiago
Y1 - 2013///
N2 - This paper contributes a unified formulation that merges previous analyses of the prediction of the performance (value function) of a given sequence of actions (policy) when an agent operates a Markov decision process with a large state space. When the states are represented by features and the value function is linearly approximated, our analysis reveals a new relationship between two common cost functions used to obtain the optimal approximation. In addition, this analysis allows us to propose an efficient adaptive algorithm that provides an unbiased linear estimate. The performance of the proposed algorithm is illustrated by simulation, showing competitive results when compared with state-of-the-art solutions.
PB - IEEE
KW - Approximate dynamic programming
KW - Linear value function approximation
KW - Mean squared Bellman error
KW - Mean squared projected Bellman error
KW - Reinforcement learning
TI - A unified framework for linear function approximation of value functions in stochastic control
SP - 1
M2 - Marrakech, Morocco
AV - public
EP - 5
T2 - 21st European Signal Processing Conference (EUSIPCO)
ER -