Wednesday, August 31, 2016

Scaling up Vector Autoregressive Models With Operator-Valued Random Fourier Features

Ah! Random Fourier Features used in autoregressive modeling:



Scaling up Vector Autoregressive Models With Operator-Valued Random Fourier Features by Romain Brault, Néhémy Lim, and Florence d'Alché-Buc

We consider a nonparametric approach to Vector Autoregressive modeling by working in vector-valued Reproducing Kernel Hilbert Spaces (vv-RKHS). The main idea is to build vector-valued models (OKVAR) using Operator-Valued Kernels (OVK). As in the scalar case, regression with OVK boils down to learning as many weight parameters as data points, except that here the weights are vectors. To avoid the time and memory complexity inherent in working with kernels, we introduce Operator-Valued Random Fourier Features (ORFF), which extend Random Fourier Features devoted to scalar-valued kernel approximation. Applying the approach to decomposable kernels, we show that ORFF-VAR is able to compete with OKVAR in terms of accuracy on stationary nonlinear time series while keeping a low execution time, comparable to that of VAR. Results on simulated datasets as well as real datasets are presented.
Some code information can be found here.
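
To get a feel for the idea, here is a minimal numpy sketch of the decomposable case (not the authors' code; it assumes a Gaussian scalar kernel, a hand-picked PSD output matrix A, and a toy nonlinear series): for K(x,z) = k(x,z)·A with A = BBᵀ, the operator-valued feature map F(x) = φ(x)ᵀ ⊗ B satisfies F(x)F(z)ᵀ ≈ k(x,z)·A, so vector autoregression reduces to one large ridge regression whose cost grows linearly with the length of the series.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stationary nonlinear vector time series (stand-in for the paper's data)
    p, T = 3, 500                                    # output dimension, series length
    M = 0.9 * rng.normal(size=(p, p)) / np.sqrt(p)
    X = np.zeros((T, p))
    for t in range(T - 1):
        X[t + 1] = np.tanh(M @ X[t]) + 0.05 * rng.normal(size=p)

    # ORFF map for a decomposable kernel K(x,z) = k(x,z) * A, Gaussian k assumed
    D, sigma, lam = 300, 1.0, 1e-3                   # random features, bandwidth, ridge
    W = rng.normal(scale=1.0 / sigma, size=(D, p))   # frequencies sampled via Bochner
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    A = np.eye(p) + 0.1 * np.ones((p, p))            # chosen output-dependency matrix
    B = np.linalg.cholesky(A)                        # A = B B^T

    def phi(x):
        # Scalar RFF: phi(x).phi(z) ~ exp(-||x - z||^2 / (2 sigma^2))
        return np.sqrt(2.0 / D) * np.cos(W @ x + b)

    def feature_op(x):
        # Operator-valued feature: F(x) F(z)^T = (phi(x).phi(z)) * A ~ K(x, z)
        return np.kron(phi(x)[None, :], B)           # shape (p, D*p)

    # Ridge regression for the autoregressive map x_{t+1} ~ F(x_t) theta
    F = np.vstack([feature_op(x) for x in X[:-1]])   # ((T-1)*p, D*p)
    Y = X[1:].ravel()                                # stacked targets
    theta = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ Y)

    x_pred = feature_op(X[-2]) @ theta               # one-step-ahead forecast
    print(np.linalg.norm(x_pred - X[-1]))

The solve involves a (D·p) × (D·p) system rather than a T × T kernel matrix, which is where the VAR-like execution time reported in the abstract comes from.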
And earlier: Random Fourier Features for Operator-Valued Kernels by Romain Brault, Florence d'Alché-Buc, and Markus Heinonen

Devoted to multi-task learning and structured output learning, operator-valued kernels provide a flexible tool for building vector-valued functions in the context of Reproducing Kernel Hilbert Spaces. To scale up these methods, we extend the celebrated Random Fourier Feature methodology to obtain an approximation of operator-valued kernels. We propose a general principle for Operator-valued Random Fourier Feature construction relying on a generalization of Bochner's theorem for translation-invariant operator-valued Mercer kernels. We prove the uniform convergence of the kernel approximation for bounded and unbounded operator random Fourier features using an appropriate matrix Bernstein concentration inequality. An experimental proof of concept shows the quality of the approximation and the efficiency of the corresponding linear models on example datasets.
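
To see what the convergence result quantifies, here is a quick numerical check, again a sketch under the same Gaussian/decomposable assumptions as above rather than the authors' experiment: the operator-norm error of the random-feature estimate of K(x,z) shrinks roughly at the Monte Carlo rate O(1/√D), which is the kind of deviation a matrix Bernstein inequality controls uniformly over (x, z).

    import numpy as np

    rng = np.random.default_rng(1)
    p, sigma = 3, 1.0
    A = np.eye(p) + 0.1 * np.ones((p, p))            # decomposable K(x,z) = k(x,z) * A
    x, z = rng.normal(size=p), rng.normal(size=p)
    K_exact = np.exp(-np.linalg.norm(x - z) ** 2 / (2 * sigma ** 2)) * A

    for D in (10, 100, 1000, 10000):
        W = rng.normal(scale=1.0 / sigma, size=(D, p))   # Bochner: sample k's spectral measure
        b = rng.uniform(0.0, 2.0 * np.pi, size=D)
        phi = lambda u: np.sqrt(2.0 / D) * np.cos(W @ u + b)
        K_approx = (phi(x) @ phi(z)) * A                 # ORFF estimate of K(x, z)
        err = np.linalg.norm(K_approx - K_exact, 2)      # operator-norm error
        print(f"D={D:5d}  ||K_hat - K||_op = {err:.4f}")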

