Online variance minimization
Authors: Manfred K. Warmuth, Dima Kuzmin
Abstract
We consider the following type of online variance minimization problem: in every trial t our algorithms get a covariance matrix \(\boldsymbol{C}^{t}\) and try to select a parameter vector \(\boldsymbol{w}^{t-1}\) such that the total variance over a sequence of trials, \(\sum_{t=1}^{T} (\boldsymbol{w}^{t-1})^{\top} \boldsymbol{C}^{t}\boldsymbol{w}^{t-1}\), is not much larger than the total variance of the best parameter vector \(\boldsymbol{u}\) chosen in hindsight. Two parameter spaces in \(\mathbb{R}^{n}\) are considered: the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios, and the second leads to an online calculation of the eigenvector with minimum eigenvalue of the total covariance matrix \(\sum_{t=1}^{T} \boldsymbol{C}^{t}\). For the first parameter space we apply the Exponentiated Gradient algorithm, which is motivated by a relative entropy regularization. In the second case, the algorithm has to maintain uncertainty information over all unit directions \(\boldsymbol{u}\). For this purpose, directions are represented as dyads \(\boldsymbol{u}\boldsymbol{u}^{\top}\) and the uncertainty over all directions as a mixture of dyads, which is a density matrix. The motivating divergence for density matrices is the quantum version of the relative entropy, and the resulting algorithm is a special case of the Matrix Exponentiated Gradient algorithm. In each of the two cases we prove bounds on the additional total variance incurred by the online algorithm over the best offline parameter.
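For concreteness, the following is a minimal NumPy sketch of the two multiplicative updates the abstract describes: an Exponentiated Gradient (EG) step on the probability simplex and a Matrix Exponentiated Gradient (MEG) step on density matrices. The learning rate `eta`, the function names, and the random covariance stream are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def eg_update(w, C, eta):
    """EG step for the simplex case (illustrative sketch).

    The per-trial loss is w^T C w, with gradient 2 C w. Weights
    are updated multiplicatively and renormalized so that w stays
    on the probability simplex.
    """
    grad = 2.0 * C @ w
    v = w * np.exp(-eta * grad)
    return v / v.sum()

def meg_update(W, C, eta):
    """MEG step for the density-matrix case (illustrative sketch).

    W is a density matrix (symmetric PSD, trace 1) representing a
    mixture of dyads u u^T; the expected variance is tr(W C), whose
    gradient in W is C. The update exponentiates log W - eta * C
    and renormalizes the trace, using eigendecompositions for the
    matrix log and exp.
    """
    vals, vecs = np.linalg.eigh(W)
    log_W = vecs @ np.diag(np.log(np.maximum(vals, 1e-12))) @ vecs.T
    vals2, vecs2 = np.linalg.eigh(log_W - eta * C)
    W_new = vecs2 @ np.diag(np.exp(vals2)) @ vecs2.T
    return W_new / np.trace(W_new)

# Toy run on a stream of random covariance matrices (illustrative only).
rng = np.random.default_rng(0)
n, eta = 4, 0.1
w = np.full(n, 1.0 / n)   # uniform start on the simplex
W = np.eye(n) / n         # maximally mixed density matrix
total_var_simplex = total_var_density = 0.0
for t in range(100):
    G = rng.standard_normal((n, n))
    C = G @ G.T / n       # random symmetric PSD covariance matrix
    total_var_simplex += w @ C @ w
    total_var_density += np.trace(W @ C)
    w = eg_update(w, C, eta)
    W = meg_update(W, C, eta)
print(total_var_simplex, total_var_density)
```

In both cases the update is multiplicative in the exponent and followed by a normalization, which is what keeps the parameter in its constraint set (the simplex, or the set of trace-one PSD matrices) without an explicit projection.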
Keywords: Hedge algorithm, Weighted majority algorithm, Online learning, Expert setting, Density matrix, Matrix exponentiated gradient algorithm, Quantum relative entropy
Paper URL: https://doi.org/10.1007/s10994-011-5269-0