Description:

This function computes the Partial Least Squares fit. The algorithm's runtime scales mainly with the number of observations.
Usage:

    kernel.pls.fit(X, y, m = ncol(X), compute.jacobian = FALSE,
                   DoF.max = min(ncol(X) + 1, nrow(X) - 1))
Arguments:

X                 matrix of predictor observations.
y                 vector of response observations. The length of y must equal the number of rows of X.
m                 maximal number of Partial Least Squares components. Default is m = ncol(X).
compute.jacobian  Should the first derivative of the regression coefficients be computed as well? Default is FALSE.
DoF.max           upper bound on the Degrees of Freedom. Default is min(ncol(X) + 1, nrow(X) - 1).
Value: a list with the following components.

coefficients  matrix of regression coefficients
intercept     vector of regression intercepts
DoF           Degrees of Freedom
sigmahat      vector of estimated model errors
Yhat          matrix of fitted values
yhat          vector of squared lengths of the fitted values
RSS           vector of residual sums of squares
TT            matrix of normalized PLS components
Details:

We first standardize X to zero mean and unit variance.
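A minimal usage sketch, assuming the plsdof package (which provides kernel.pls.fit) is installed; the data here are simulated purely for illustration, and the component names accessed on the result follow the Value section above:

```
# Minimal sketch: fit a kernel PLS model on simulated data.
# Assumes the plsdof package is installed (install.packages("plsdof")).
library(plsdof)

set.seed(1)
n <- 50  # number of observations
p <- 5   # number of predictors
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
y <- rnorm(n)

# Fit with at most 4 PLS components and also compute the first
# derivative (Jacobian) of the regression coefficients.
fit <- kernel.pls.fit(X, y, m = 4, compute.jacobian = TRUE)

# Inspect the fit: regression coefficients, Degrees of Freedom,
# and estimated model error per number of components.
dim(fit$coefficients)
fit$DoF
fit$sigmahat
```

Because the algorithm scales mainly with the number of observations, this kernel representation is the natural choice when the number of predictors is large relative to n.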
References:

Kraemer, N., Sugiyama, M. (2011). "The Degrees of Freedom of Partial Least Squares Regression". Journal of the American Statistical Association, 106(494). https://www.tandfonline.com/doi/abs/10.1198/jasa.2011.tm10107

Kraemer, N., Braun, M.L. (2007). "Kernelizing PLS, Degrees of Freedom, and Efficient Model Selection". Proceedings of the 24th International Conference on Machine Learning, Omni Press, 441-448.
Author(s):

Nicole Kraemer, Mikio L. Braun