%0 Conference Paper
%A Sánchez Fernández, Matilde
%A Valcarcel Macua, Sergio
%A Zazo Bello, Santiago
%B 21st European Signal Processing Conference (EUSIPCO)
%C Marrakech, Morocco
%D 2013
%F upm:28942
%I IEEE
%K Approximate dynamic programming, Linear value function approximation, Mean squared Bellman Error, Mean squared projected Bellman Error, Reinforcement Learning
%P 1-5
%T A unified framework for linear function approximation of value functions in stochastic control
%U http://oa.upm.es/28942/
%X This paper contributes a unified formulation that merges previous analyses of the prediction of the performance (value function) of a certain sequence of actions (policy) when an agent operates a Markov decision process with a large state space. When the states are represented by features and the value function is linearly approximated, our analysis reveals a new relationship between two common cost functions used to obtain the optimal approximation. In addition, this analysis allows us to propose an efficient adaptive algorithm that provides an unbiased linear estimate. The performance of the proposed algorithm is illustrated by simulation, showing competitive results when compared with state-of-the-art solutions.