Maximum likelihood estimation for the tensor normal distribution: Algorithm, minimum sample size, and empirical bias and dispersion
Abstract:
Recently, there has been growing interest in the analysis of multi-dimensional data arrays (e.g., when a univariate response is sampled in 3-D space, or when a multivariate response is sampled in time and 2-D space). In this article, we scrutinize the problem of maximum likelihood estimation (MLE) for the tensor normal distribution of order 3 or more, which is characterized by the separability of its variance–covariance structure: there is one variance–covariance matrix per dimension. In the 3-D case, the system of likelihood equations for the three variance–covariance matrices has no analytical solution and therefore must be solved iteratively. We studied the convergence of an iterative three-stage algorithm (MLE-3D) that we propose for this purpose, determined the minimum sample size required for the matrix estimates to exist, and computed by simulation the empirical bias and dispersion of the Kronecker product of the three variance–covariance matrix estimators in eight scenarios. We found that the standardized bias and a matrix measure of dispersion decrease monotonically and tend to vanish with increasing sample size, so the Kronecker product estimator is consistent. An example with 3-D spatial measures of glucose content in the brain is also presented. Finally, the results are discussed, and the 4-D case is presented with simulation results in an appendix. Software is available for interested users.
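The abstract describes an iterative three-stage estimation scheme in which each of the three variance–covariance matrices is updated in turn while the other two are held fixed. The sketch below is not the authors' MLE-3D software; it is a minimal, hedged illustration in Python/NumPy of a flip-flop-style iteration of this general kind for a separable (Kronecker-structured) covariance, with the function name, update order, normalization, and stopping rule chosen for illustration only. Note that the three factors are identifiable only up to positive scale constants whose product is fixed, so implementations typically normalize all but one factor.

```python
# Minimal sketch (assumed details, not the authors' MLE-3D code) of a
# flip-flop-style iteration for a separable 3-D covariance structure.
import numpy as np

def flip_flop_mle_3d(X, n_iter=50, tol=1e-8):
    """X: array of shape (n, p1, p2, p3) holding n centered tensor observations.
    Returns covariance estimates (S1, S2, S3), one per dimension, identified
    only up to positive scale factors whose product is fixed."""
    n, p1, p2, p3 = X.shape
    dims = (p1, p2, p3)
    S = [np.eye(p) for p in dims]            # start from identity matrices
    for _ in range(n_iter):
        S_old = [Sk.copy() for Sk in S]
        for k in range(3):                   # cycle over the three modes
            others = [j for j in range(3) if j != k]
            # Inverse Kronecker product of the other two factors, ordered to
            # match the C-order mode-k unfolding used below.
            W_inv = np.kron(np.linalg.inv(S[others[0]]),
                            np.linalg.inv(S[others[1]]))
            acc = np.zeros((dims[k], dims[k]))
            for i in range(n):
                Xk = np.moveaxis(X[i], k, 0).reshape(dims[k], -1)  # mode-k unfolding
                acc += Xk @ W_inv @ Xk.T
            S[k] = acc / (n * dims[others[0]] * dims[others[1]])
        if max(np.linalg.norm(S[k] - S_old[k]) for k in range(3)) < tol:
            break
    return S
```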
Keywords: Empirical bias and dispersion, Maximum likelihood estimation, Minimum sample size, Multi-stage algorithm, Separable variance–covariance structure, Tensor normal distribution
Article history: Received 15 November 2011; Available online 20 September 2012.
DOI: https://doi.org/10.1016/j.cam.2012.09.017