This function computes the Partial Least Squares solution and the first derivative of the regression coefficients. The runtime of this implementation scales mostly with the number of variables.

linear.pls.fit(
  X,
  y,
  m = ncol(X),
  compute.jacobian = FALSE,
  DoF.max = min(ncol(X) + 1, nrow(X) - 1)
)

## Arguments

X

matrix of predictor observations.

y

vector of response observations. The length of y is the same as the number of rows of X.

m

maximal number of Partial Least Squares components. Default is m = ncol(X).

compute.jacobian

Should the first derivative of the regression coefficients be computed as well? Default is FALSE.

DoF.max

upper bound on the Degrees of Freedom. Default is min(ncol(X) + 1, nrow(X) - 1).

## Value

coefficients

matrix of regression coefficients

intercept

vector of regression intercepts

DoF

Degrees of Freedom of the Partial Least Squares fit

sigmahat

vector of estimated model error

Yhat

matrix of fitted values

yhat

vector of squared length of fitted values

RSS

vector of residual sums of squares

covariance

if compute.jacobian is TRUE, the array of covariance matrices for the PLS regression coefficients.

TT

matrix of normalized PLS components

## Details

We first standardize X to zero mean and unit variance.
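The standardization step and the normalized components can be sketched in base R. The helper `simple.pls` below is a hypothetical illustration, not the package's implementation: it builds orthonormal PLS components by NIPALS-style deflation and projects the centered response onto them.

```r
# Hypothetical sketch of PLS fitting, not the plsdof implementation.
simple.pls <- function(X, y, m = ncol(X)) {
  Xk <- scale(X)                # standardize X to zero mean, unit variance
  yc <- y - mean(y)             # center the response
  TT <- matrix(0, nrow(X), m)   # normalized PLS components
  for (k in 1:m) {
    w  <- crossprod(Xk, yc)    # weight vector X_k' y
    tk <- Xk %*% w             # k-th component
    tk <- tk / sqrt(sum(tk^2)) # normalize it
    TT[, k] <- tk
    Xk <- Xk - tk %*% crossprod(tk, Xk)  # deflate X
  }
  # fitted values: projection of centered y onto the components
  Yhat <- as.vector(TT %*% crossprod(TT, yc)) + mean(y)
  list(TT = TT, Yhat = Yhat)
}
```

With m = ncol(X) and a full-rank X, the fitted values coincide with ordinary least squares, which is a convenient sanity check.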

## References

Kraemer, N., Sugiyama, M. (2011). "The Degrees of Freedom of Partial Least Squares Regression". Journal of the American Statistical Association 106 (494). https://www.tandfonline.com/doi/abs/10.1198/jasa.2011.tm10107

## See Also

kernel.pls.fit, pls.cv, pls.model, pls.ic

## Author(s)

Nicole Kraemer

## Examples


n <- 50 # number of observations
p <- 5  # number of variables
X <- matrix(rnorm(n * p), ncol = p)
y <- rnorm(n)

pls.object <- linear.pls.fit(X, y, m = 5, compute.jacobian = TRUE)