In multiple linear regression, more features are not always better: selecting a small set of appropriate features helps avoid overfitting and makes the model easier to interpret. This post introduces three ways to select features: best subset selection, forward/backward stepwise selection, and cross-validation.
Best Subset Selection
The idea is simple: try every possible combination of features, fit a model for each, and pick the best. The basic procedure:
For p features, for each k from 1 to p:
choose every possible set of k of the p features, fit the resulting C(p, k) models, and keep the best one (smallest RSS or largest R2);
then, among these p per-size winners, pick the overall best model using a criterion such as cross-validation error, Cp, BIC, or Adjusted R2.
The advantage is obvious: since every possible combination is tried, the chosen model is guaranteed to be the best. The drawback is equally obvious: the number of models grows as 2^p, so the computation explodes as p increases. This method is therefore practical only when p is small.
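The loop above can be sketched directly. This is a Python/NumPy illustration rather than the R workflow used below; `best_subset` and the variable names are my own:

```python
from itertools import combinations

import numpy as np

def best_subset(X, y, names):
    """Exhaustive best subset selection: for each size k, fit all C(p, k)
    least-squares models and keep the combination with the smallest RSS."""
    n, p = X.shape
    best = {}  # k -> (rss, tuple of feature indices)
    for k in range(1, p + 1):
        for idx in combinations(range(p), k):
            Xk = np.column_stack([np.ones(n), X[:, idx]])  # add an intercept
            beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
            rss = float(np.sum((y - Xk @ beta) ** 2))
            if k not in best or rss < best[k][0]:
                best[k] = (rss, idx)
    # report the winning feature names and RSS at each size
    return {k: ([names[i] for i in idx], rss) for k, (rss, idx) in best.items()}
```

Choosing among the p per-size winners would then use a penalized criterion such as Cp or BIC, as described below; RSS alone always favors the largest model.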
The following example uses the Hitters dataset from R's ISLR package to build a multiple linear regression model for baseball players' salaries.
> library(ISLR)
> Hitters <- na.omit(Hitters)
> dim(Hitters)  # besides Salary as the response, 19 features remain
[1] 263  20
> library(leaps)
> regfit.full = regsubsets(Salary~., Hitters, nvmax = 19)  # best subset selection with up to 19 features
> reg.summary = summary(regfit.full)  # shows the features selected at each subset size
> plot(reg.summary$rss, xlab="Number of Variables", ylab="RSS", type="l")  # RSS keeps shrinking as features are added
> plot(reg.summary$adjr2, xlab="Number of Variables", ylab="Adjusted RSq", type="l")
> points(which.max(reg.summary$adjr2), reg.summary$adjr2[11], col="red", cex=2, pch=20)  # Adjusted R2 peaks at 11 features
> plot(reg.summary$cp, xlab="Number of Variables", ylab="Cp", type="l")
> points(which.min(reg.summary$cp), reg.summary$cp[10], col="red", cex=2, pch=20)  # Cp is minimized at 10 features
> plot(reg.summary$bic, xlab="Number of Variables", ylab="BIC", type="l")
> points(which.min(reg.summary$bic), reg.summary$bic[6], col="red", cex=2, pch=20)  # BIC is minimized at 6 features
> plot(regfit.full, scale="r2")  # R2 always grows with more features, no surprise
> plot(regfit.full, scale="adjr2")
> plot(regfit.full, scale="Cp")
> plot(regfit.full, scale="bic")
Adjusted R2, Cp, and BIC are three statistics for evaluating models (I won't derive their definitions here). An Adjusted R2 closer to 1 indicates a better fit; for the other two, smaller is better.
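For reference, one common form of these statistics (following ISLR's notation, where d is the number of features, \(\hat\sigma^2\) an estimate of the error variance, and TSS the total sum of squares):

```latex
C_p = \frac{1}{n}\left(\mathrm{RSS} + 2d\hat\sigma^2\right), \qquad
\mathrm{BIC} = \frac{1}{n}\left(\mathrm{RSS} + \log(n)\,d\hat\sigma^2\right), \qquad
\text{Adjusted } R^2 = 1 - \frac{\mathrm{RSS}/(n-d-1)}{\mathrm{TSS}/(n-1)}
```

Both Cp and BIC penalize model size; BIC's log(n) factor penalizes large models more heavily, which is why it picks the fewest features above.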
Note that the three criteria give different selection results. Taking Adjusted R2 as the example, it selects 11 features. As the plot shows, when Adjusted R2 is maximal (only a bit above 0.5, so nothing spectacular), the 11 selected features are: AtBat, Hits, Walks, CAtBat, CRuns, CRBI, CWalks, LeagueN, DivisionW, PutOuts, and Assists.
The model's coefficients can be viewed directly:
> coef(regfit.full,11)
 (Intercept)        AtBat         Hits        Walks       CAtBat 
 135.7512195   -2.1277482    6.9236994    5.6202755   -0.1389914 
       CRuns         CRBI       CWalks      LeagueN    DivisionW 
   1.4553310    0.7852528   -0.8228559   43.1116152 -111.1460252 
     PutOuts      Assists 
   0.2894087    0.2688277 
These 11 features match the plot. With the features selected and the coefficients estimated, the model is complete.
Stepwise Selection
The idea here can be summarized as "no turning back": each iteration can only extend the previous one, and a feature, once added, is never removed. Taking forward stepwise selection as an example, the basic procedure is:
For p features, starting from the null model with no features, for each k from 0 to p-1:
among the p-k features not yet in the model, try adding each one to the current k-feature model, fit the resulting p-k models, and keep the best one (smallest RSS or largest R2);
repeat until all p features have been added;
finally, among the models of each size, pick the best using cross-validation error, Cp, BIC, Adjusted R2, etc.
Backward stepwise selection is analogous, except it starts with a model containing all p features and, at each iteration, drops the single feature whose removal yields the best model.
The difference from best subset selection is that best subset may choose any k+1 features for its (k+1)-variable model, whereas stepwise selection must build on the k features already chosen. Stepwise selection therefore cannot guarantee the optimal model: a not-so-important feature picked early must be carried through every later iteration, which can rule out the best combination. The advantage is a dramatically reduced workload, roughly p(p+1)/2 model fits instead of 2^p, which makes it much more practical.
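The forward pass can be sketched as follows (again a Python/NumPy illustration; `forward_stepwise` is my own name, not a library function):

```python
import numpy as np

def forward_stepwise(X, y, names):
    """Forward stepwise selection: start from the empty model and, at each
    step, add the single remaining feature that most reduces RSS."""
    n, p = X.shape
    chosen, remaining, path = [], list(range(p)), []
    for _ in range(p):
        best_rss, best_j = None, None
        for j in remaining:  # try each not-yet-chosen feature
            cols = np.column_stack([np.ones(n)] + [X[:, i] for i in chosen + [j]])
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = float(np.sum((y - cols @ beta) ** 2))
            if best_rss is None or rss < best_rss:
                best_rss, best_j = rss, j
        chosen.append(best_j)  # committed: never removed in later steps
        remaining.remove(best_j)
        path.append(([names[i] for i in chosen], best_rss))
    return path  # path[k-1] = (features of the k-variable model, its RSS)
```

Note that only p-k fits are needed at step k, which is where the p(p+1)/2 total comes from.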
> regfit.fwd = regsubsets(Salary~., data=Hitters, nvmax = 19, method = "forward")
> summary(regfit.fwd)  # shows the forward selection path
Subset selection object
Call: regsubsets.formula(Salary ~ ., data = Hitters, nvmax = 19, method = "forward")
Selection Algorithm: forward
          Hits CRBI PutOuts DivisionW AtBat ...
1  ( 1 )  " "  "*"  " "     " "       " "
2  ( 1 )  "*"  "*"  " "     " "       " "
3  ( 1 )  "*"  "*"  "*"     " "       " "
4  ( 1 )  "*"  "*"  "*"     "*"       " "
5  ( 1 )  "*"  "*"  "*"     "*"       "*"
(excerpt with columns reordered; the remaining rows of the 19-row, 19-column table are omitted -- row k marks with "*" the features in the k-variable model)
> regfit.bwd = regsubsets(Salary~., data=Hitters, nvmax = 19, method = "backward")
> summary(regfit.bwd)  # shows the backward selection path
Selection Algorithm: backward
(table omitted; its 1-variable model keeps CRuns rather than CRBI, so the forward and backward paths differ)
Note that best subset, forward stepwise, and backward stepwise selection can produce different feature sets:
> coef(regfit.full,7)
 (Intercept)         Hits        Walks       CAtBat        CHits 
  79.4509472    1.2833513    3.2274264   -0.3752350    1.4957073 
      CHmRun    DivisionW      PutOuts 
   1.4420538 -129.9866432    0.2366813 
> coef(regfit.fwd,7)
 (Intercept)        AtBat         Hits        Walks         CRBI 
 109.7873062   -1.9588851    7.4498772    4.9131401    0.8537622 
      CWalks    DivisionW      PutOuts 
  -0.3053070 -127.1223928    0.2533404 
> coef(regfit.bwd,7)
 (Intercept)        AtBat         Hits        Walks        CRuns 
 105.6487488   -1.9762838    6.7574914    6.0558691    1.1293095 
      CWalks    DivisionW      PutOuts 
  -0.7163346 -116.1692169    0.3028847 
Cross-Validation
Cross-validation is a general-purpose machine-learning technique for assessing a model's bias and variance; it is not tied to any specific model. Introduced here is a compromise approach, k-fold cross-validation, which proceeds as follows:
Randomly assign the samples to k folds (typically k = 10) of roughly equal size
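The fold-splitting step, and the standard way the folds are then used, can be sketched as follows (a Python/NumPy illustration; both function names are mine):

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Randomly split n sample indices into k folds of nearly equal size."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return np.array_split(idx, k)

def cv_mse(X, y, k=10, seed=0):
    """Estimate test MSE of a least-squares fit by k-fold cross-validation:
    each fold serves once as the validation set, the rest as training data."""
    folds = kfold_indices(len(y), k, seed)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        Xtr = np.column_stack([np.ones(len(train)), X[train]])  # with intercept
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        errors.append(np.mean((y[test] - Xte @ beta) ** 2))
    return float(np.mean(errors))  # average validation error over the k folds
```

For feature selection, `cv_mse` would be computed once per candidate feature subset, and the subset with the lowest cross-validated error chosen.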
http://www.cnblogs.com/lafengdatascientist/p/7168507.html