This function computes the Partial Least Squares solution and the first derivative of the regression coefficients. This implementation scales mostly in the number of variables.
```r
linear.pls.fit(X, y, m = ncol(X), compute.jacobian = FALSE,
               DoF.max = min(ncol(X) + 1, nrow(X) - 1))
```
| Argument | Description |
|---|---|
| `X` | matrix of predictor observations. |
| `y` | vector of response observations. The length of `y` equals the number of rows of `X`. |
| `m` | maximal number of Partial Least Squares components. Default is `m = ncol(X)`. |
| `compute.jacobian` | Should the first derivative of the regression coefficients be computed as well? Default is `FALSE`. |
| `DoF.max` | upper bound on the Degrees of Freedom. Default is `min(ncol(X) + 1, nrow(X) - 1)`. |
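A minimal usage sketch, assuming the function is available from the plsdof package; the simulated data and the choice `m = 10` below are purely illustrative:

```r
## Minimal sketch: assumes linear.pls.fit() is provided by the plsdof package.
## The simulated data and m = 10 are illustrative only.
library(plsdof)

set.seed(1)
n <- 50
p <- 15
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
y <- as.vector(X %*% rnorm(p) + rnorm(n))

## Fit PLS with at most 10 components, keeping the default DoF bound.
fit <- linear.pls.fit(X, y, m = 10, compute.jacobian = FALSE)
```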
Returns a list containing:

* matrix of regression coefficients
* vector of regression intercepts
* Degrees of Freedom
* vector of estimated model error
* matrix of fitted values
* vector of squared lengths of the fitted values
* vector of residual sums of squares
* matrix of normalized PLS components
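Since the component names are not listed here, a safe way to see exactly what the returned object contains is to inspect it directly (a sketch, continuing the example above):

```r
## Inspect the returned object to see its component names and shapes
## (continuing the fit from the sketch above).
str(fit, max.level = 1)
names(fit)
```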
We first standardize X
to zero mean and unit variance.
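For illustration only, the standardization described here corresponds to the following base R operation; the function performs it internally, so users do not need to pre-scale `X`:

```r
## Illustrative sketch of the internal standardization of X
## (using the simulated X from the example above).
X_std <- scale(X, center = TRUE, scale = TRUE)
colMeans(X_std)      # approximately zero for every column
apply(X_std, 2, sd)  # approximately one for every column
```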
Kraemer, N. and Sugiyama, M. (2011). "The Degrees of Freedom of Partial Least Squares Regression." Journal of the American Statistical Association, 106(494). https://www.tandfonline.com/doi/abs/10.1198/jasa.2011.tm10107
Nicole Kraemer