XGBoost in R: A Step-by-Step Example
Boosting is a machine learning technique that has proven to produce models with high predictive accuracy.
One of the most common ways to implement boosting in practice is with XGBoost, short for "extreme gradient boosting."
This tutorial provides a step-by-step example of how to use XGBoost to fit a boosted model in R.
Step 1: Load the Necessary Packages
First, we'll load the necessary libraries.
library(xgboost) #for fitting the xgboost model
library(caret)   #for general data preparation and model fitting
Step 2: Load the Data
For this example, we'll fit a boosted regression model to the Boston dataset from the MASS package.
This dataset contains 13 predictor variables, which we'll use to predict a response variable called medv, which represents the median home value for different census tracts around Boston.
#load the data
data = MASS::Boston
#view the structure of the data
str(data)
'data.frame': 506 obs. of 14 variables:
 $ crim   : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
 $ zn     : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
 $ indus  : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
 $ chas   : int 0 0 0 0 0 0 0 0 0 0 ...
 $ nox    : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
 $ rm     : num 6.58 6.42 7.18 7 7.15 ...
 $ age    : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
 $ dis    : num 4.09 4.97 4.97 6.06 6.06 ...
 $ rad    : int 1 2 2 3 3 3 5 5 5 5 ...
 $ tax    : num 296 242 242 222 222 222 311 311 311 311 ...
 $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
 $ black  : num 397 397 393 395 397 ...
 $ lstat  : num 4.98 9.14 4.03 2.94 5.33 ...
 $ medv   : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
We can see that the dataset contains 506 observations and 14 variables in total.
Step 3: Prep the Data
Next, we'll use the createDataPartition() function from the caret package to split the original dataset into a training and testing set.
For this example, we'll choose to use 80% of the original dataset as part of the training set.
Note that the xgboost package works with matrix data, so we'll use the data.matrix() function to hold our predictor variables.
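For reference, here is a toy illustration (not part of the Boston example) of what data.matrix() does: it coerces every column of a data frame to numeric, encoding factor columns as their underlying integer codes.

```r
#toy illustration: data.matrix() converts a data frame to a numeric matrix,
#replacing factor levels with their integer codes
df = data.frame(x = c(1.5, 2.5), f = factor(c("a", "b")))
data.matrix(df)
#both columns are now numeric; the factor column f becomes 1 and 2
```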
#make this example reproducible
set.seed(0)
#split into training (80%) and testing set (20%)
parts = createDataPartition(data$medv, p = .8 , list = F )
train = data[parts, ]
test = data[-parts, ]
#define predictor and response variables in training set
train_x = data.matrix(train[, -14]) #exclude medv, the 14th column
train_y = train[, 14]
#define predictor and response variables in testing set
test_x = data.matrix(test[, -14])
test_y = test[, 14]
#define final training and testing sets
xgb_train = xgb.DMatrix(data = train_x, label = train_y)
xgb_test = xgb.DMatrix(data = test_x, label = test_y)
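As a quick optional sanity check (using the objects created above), the dimensions and stored labels of an xgb.DMatrix can be inspected with dim() and getinfo():

```r
#optional sanity check on the DMatrix objects created above
dim(xgb_train)                     #rows and columns of the training predictors
dim(xgb_test)                      #rows and columns of the testing predictors
head(getinfo(xgb_train, "label"))  #first few medv values stored as the label
```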
Step 4: Fit the Model
Next, we'll fit the XGBoost model by using the xgb.train() function, which displays the training and testing RMSE (root mean squared error) for each boosting round.
Note that we chose to use 70 rounds for this example, but for much larger datasets it's not uncommon to use hundreds or even thousands of rounds. Just keep in mind that the more rounds used, the longer the run time.
Also note that the max.depth argument specifies how deep to grow the individual decision trees. We typically choose a fairly low number such as 2 or 3 so that smaller trees are grown. It has been shown that this approach tends to produce more accurate models.
#define watchlist
watchlist = list(train=xgb_train, test=xgb_test)
#fit XGBoost model and display training and testing data at each round
model = xgb.train(data = xgb_train, max.depth = 3, watchlist = watchlist, nrounds = 70)
[1] train-rmse:10.167523 test-rmse:10.839775
[2] train-rmse:7.521903 test-rmse:8.329679
[3] train-rmse:5.702393 test-rmse:6.691415
[4] train-rmse:4.463687 test-rmse:5.631310
[5] train-rmse:3.666278 test-rmse:4.878750
[6] train-rmse:3.159799 test-rmse:4.485698
[7] train-rmse:2.855133 test-rmse:4.230533
[8] train-rmse:2.603367 test-rmse:4.099881
[9] train-rmse:2.445718 test-rmse:4.084360
[10] train-rmse:2.327318 test-rmse:3.993562
[11] train-rmse:2.267629 test-rmse:3.944454
[12] train-rmse:2.189527 test-rmse:3.930808
[13] train-rmse:2.119130 test-rmse:3.865036
[14] train-rmse:2.086450 test-rmse:3.875088
[15] train-rmse:2.038356 test-rmse:3.881442
[16] train-rmse:2.010995 test-rmse:3.883322
[17] train-rmse:1.949505 test-rmse:3.844382
[18] train-rmse:1.911711 test-rmse:3.809830
[19] train-rmse:1.888488 test-rmse:3.809830
[20] train-rmse:1.832443 test-rmse:3.758502
[21] train-rmse:1.816150 test-rmse:3.770216
[22] train-rmse:1.801369 test-rmse:3.770474
[23] train-rmse:1.788891 test-rmse:3.766608
[24] train-rmse:1.751795 test-rmse:3.749583
[25] train-rmse:1.713306 test-rmse:3.720173
[26] train-rmse:1.672227 test-rmse:3.675086
[27] train-rmse:1.648323 test-rmse:3.675977
[28] train-rmse:1.609927 test-rmse:3.745338
[29] train-rmse:1.594891 test-rmse:3.756049
[30] train-rmse:1.578573 test-rmse:3.760104
[31] train-rmse:1.559810 test-rmse:3.727940
[32] train-rmse:1.547852 test-rmse:3.731702
[33] train-rmse:1.534589 test-rmse:3.729761
[34] train-rmse:1.520566 test-rmse:3.742681
[35] train-rmse:1.495155 test-rmse:3.732993
[36] train-rmse:1.467939 test-rmse:3.738329
[37] train-rmse:1.446343 test-rmse:3.713748
[38] train-rmse:1.435368 test-rmse:3.709469
[39] train-rmse:1.401356 test-rmse:3.710637
[40] train-rmse:1.390318 test-rmse:3.709461
[41] train-rmse:1.372635 test-rmse:3.708049
[42] train-rmse:1.367977 test-rmse:3.707429
[43] train-rmse:1.359531 test-rmse:3.711663
[44] train-rmse:1.335347 test-rmse:3.709101
[45] train-rmse:1.331750 test-rmse:3.712490
[46] train-rmse:1.313087 test-rmse:3.722981
[47] train-rmse:1.284392 test-rmse:3.712840
[48] train-rmse:1.257714 test-rmse:3.697482
[49] train-rmse:1.248218 test-rmse:3.700167
[50] train-rmse:1.243377 test-rmse:3.697914
[51] train-rmse:1.231956 test-rmse:3.695797
[52] train-rmse:1.219341 test-rmse:3.696277
[53] train-rmse:1.207413 test-rmse:3.691465
[54] train-rmse:1.197197 test-rmse:3.692108
[55] train-rmse:1.171748 test-rmse:3.683577
[56] train-rmse:1.156332 test-rmse:3.674458
[57] train-rmse:1.147686 test-rmse:3.686367
[58] train-rmse:1.143572 test-rmse:3.686375
[59] train-rmse:1.129780 test-rmse:3.679791
[60] train-rmse:1.111257 test-rmse:3.679022
[61] train-rmse:1.093541 test-rmse:3.699670
[62] train-rmse:1.083934 test-rmse:3.708187
[63] train-rmse:1.067109 test-rmse:3.712538
[64] train-rmse:1.053887 test-rmse:3.722480
[65] train-rmse:1.042127 test-rmse:3.720720
[66] train-rmse:1.031617 test-rmse:3.721224
[67] train-rmse:1.016274 test-rmse:3.699549
[68] train-rmse:1.008184 test-rmse:3.709522
[69] train-rmse:0.999220 test-rmse:3.708000
[70] train-rmse:0.985907 test-rmse:3.705192
From the output we can see that the minimum testing RMSE is achieved at round 56. Beyond this point, the test RMSE begins to increase, which is a sign that we're overfitting the training data.
Thus, we'll define our final XGBoost model to use 56 rounds:
#define final model
final = xgboost(data = xgb_train, max.depth = 3, nrounds = 56, verbose = 0)
Note: The verbose=0 argument tells R not to display the training and testing error for each round.
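As an aside, instead of reading the minimum off the log by eye, the best round can be recovered programmatically from the evaluation log that xgb.train() stores, or found automatically with the built-in early_stopping_rounds argument (a sketch using the model and watchlist objects defined above):

```r
#pick the round with the lowest test RMSE from the stored evaluation log
which.min(model$evaluation_log$test_rmse)

#or let xgb.train() stop once the test RMSE fails to improve for
#10 consecutive rounds, then read off the best iteration
model_es = xgb.train(data = xgb_train, max.depth = 3, watchlist = watchlist,
                     nrounds = 1000, early_stopping_rounds = 10, verbose = 0)
model_es$best_iteration
```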
Step 5: Use the Model to Make Predictions
Lastly, we can use the final boosted model to make predictions about the median house value of Boston homes in the testing set.
We'll then calculate the following accuracy metrics for the model:
- MSE: mean squared error
- MAE: mean absolute error
- RMSE: root mean squared error
#use the final model to make predictions on the test data
pred_y = predict(final, xgb_test)
mean((test_y - pred_y)^2) #mse
caret::MAE(test_y, pred_y) #mae
caret::RMSE(test_y, pred_y) #rmse
[1] 13.50164
[1] 2.409426
[1] 3.674457
The root mean squared error turns out to be 3.674457. This represents the average difference between the predictions made for the median house values and the actual house values observed in the testing set.
If we'd like, we could compare this RMSE to other models such as multiple linear regression, ridge regression, principal components regression, etc. to see which model produces the most accurate predictions.
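For instance, a multiple linear regression baseline could be fit on the same train/test split and scored with the same RMSE helper (a sketch, not part of the original example):

```r
#fit a multiple linear regression model on the same training split
lm_fit = lm(medv ~ ., data = train)

#compute the RMSE of the linear model on the testing set
lm_pred = predict(lm_fit, newdata = test)
caret::RMSE(test$medv, lm_pred)
```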
You can find the complete R code used in this example here.