Estimating a large precision (inverse covariance) matrix is difficult due to the curse of dimensionality. The sample covariance matrix is notoriously bad for estimating the covariance matrix when the dimension p of the multivariate vector is comparable to, or even larger than, the number of observed time points n: it is then singular and hence cannot be inverted to obtain the precision matrix. We use the factor model and procedure proposed by Pan and Yao (2008) for multivariate time series data to carry out dimension reduction when p ≍ n or even p > n. A version of the unknown factors and the corresponding factor loadings matrix are obtained. We show that when each factor is shared by O(p) cross-sectional data points, the estimated factor loadings matrix, as well as the estimated precision matrix for the original data, converge weakly in L2-norm to the true ones at a rate independent of p. This striking result demonstrates clearly when the "curse" is cancelled out by the "blessings" in dimensionality. It is particularly useful in portfolio allocation in finance when the number of stocks p is large: the L2-norm convergence rate for the precision matrix is directly related to the quality of the estimated optimal portfolio, which consequently converges weakly to the true one in the average squared L2-norm at a rate also independent of p. We also show that the method cannot estimate the covariance matrix better than the sample covariance matrix, which coincides with the result in Fan et al. (2008) when factors are known. Simulations are carried out to compare with likelihood-based methods, and a set of real stock market data is analysed.
By Clifford Lam and Qiwei Yao
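The estimation scheme summarised in the abstract can be sketched numerically. The following is a minimal illustration, not the authors' implementation: data are simulated from an assumed one-factor model (the dimensions p = 50, n = 100, the lag cutoff k0 = 5, and the AR(1) factor dynamics are all illustrative choices), the factor loading space is estimated Pan-Yao-style by eigenanalysis of summed products of lagged autocovariance matrices, and the precision matrix is then obtained from the implied low-rank-plus-noise covariance via the Woodbury identity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate from an assumed one-factor model y_t = A x_t + eps_t
# (illustrative setup, not the paper's simulation design).
p, n, r = 50, 100, 1
A_true = rng.normal(size=(p, r))
x = np.zeros((r, n))
for t in range(1, n):                       # AR(1) latent factor process
    x[:, t] = 0.7 * x[:, t - 1] + rng.normal(size=r)
y = A_true @ x + 0.5 * rng.normal(size=(p, n))

# Pan-Yao-style loading estimation: eigenanalysis of
# M = sum_k Sigma_hat(k) Sigma_hat(k)', built from lagged autocovariances
# so that serially uncorrelated noise does not enter the factor directions.
yc = y - y.mean(axis=1, keepdims=True)
k0 = 5                                      # lag cutoff (illustrative choice)
M = np.zeros((p, p))
for k in range(1, k0 + 1):
    S_k = yc[:, k:] @ yc[:, :-k].T / n      # lag-k sample autocovariance
    M += S_k @ S_k.T
eigvals, eigvecs = np.linalg.eigh(M)        # eigenvalues in ascending order
A_hat = eigvecs[:, -r:]                     # top-r eigenvectors as loadings

# Precision matrix from Sigma = A Sigma_x A' + sigma2 * I via the Woodbury
# identity, so only an r x r linear system is inverted.
f_hat = A_hat.T @ yc                        # estimated factor series
Sigma_x = f_hat @ f_hat.T / n
resid = yc - A_hat @ f_hat
sigma2 = resid.var()
inner = np.linalg.inv(np.linalg.inv(Sigma_x) + A_hat.T @ A_hat / sigma2)
Omega_hat = np.eye(p) / sigma2 - (A_hat @ inner @ A_hat.T) / sigma2**2
```

The Woodbury step is what makes the estimator usable when p > n: the only inversions are of r x r matrices, and Omega_hat is well defined even though the p x p sample covariance matrix itself is singular.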