{"id":1233,"date":"2023-07-27T04:52:33","date_gmt":"2023-07-27T04:52:33","guid":{"rendered":"https:\/\/statorials.org\/pl\/xgboost-w-r\/"},"modified":"2023-07-27T04:52:33","modified_gmt":"2023-07-27T04:52:33","slug":"xgboost-w-r","status":"publish","type":"post","link":"https:\/\/statorials.org\/pl\/xgboost-w-r\/","title":{"rendered":"Xgboost w r: przyk\u0142ad krok po kroku"},"content":{"rendered":"<p><\/p>\n<hr>\n<p><span style=\"color: #000000;\"><a href=\"https:\/\/statorials.org\/pl\/usprawnic-uczenie-maszynowe\/\" target=\"_blank\" rel=\"noopener noreferrer\">Boosting<\/a> to technika uczenia maszynowego, kt\u00f3ra, jak wykazano, pozwala na tworzenie modeli o du\u017cej dok\u0142adno\u015bci predykcyjnej.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Jednym z najcz\u0119stszych sposob\u00f3w wdra\u017cania wzmocnienia w praktyce jest u\u017cycie <strong>XGBoost<\/strong> , skr\u00f3tu od \u201eekstremalnego wzmocnienia gradientu\u201d.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Ten samouczek zawiera przyk\u0142ad krok po kroku u\u017cycia XGBoost w celu dopasowania ulepszonego modelu w j\u0119zyku R.<\/span><\/p>\n<h3> <strong><span style=\"color: #000000;\">Krok 1: Za\u0142aduj niezb\u0119dne pakiety<\/span><\/strong><\/h3>\n<p> <span style=\"color: #000000;\">Najpierw za\u0142adujemy niezb\u0119dne biblioteki.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\">library<\/span> (xgboost) <span style=\"color: #008080;\">#for fitting the xgboost model<\/span>\n<span style=\"color: #993300;\">library<\/span> (caret) <span style=\"color: #008080;\">#for general data preparation and model fitting<\/span>\n<\/strong><\/pre>\n<h3> <span style=\"color: #000000;\"><strong>Krok 2: Za\u0142aduj dane<\/strong><\/span><\/h3>\n<p> <span style=\"color: #000000;\">W tym przyk\u0142adzie dopasujemy ulepszony model regresji do zbioru danych <strong>Boston<\/strong> z pakietu <strong>MASS<\/strong> 
.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Ten zbi\u00f3r danych zawiera 13 zmiennych predykcyjnych, kt\u00f3rych u\u017cyjemy do przewidzenia <a href=\"https:\/\/statorials.org\/pl\/zmienne-odpowiedzi-wyjasniajace\/\" target=\"_blank\" rel=\"noopener noreferrer\">zmiennej odpowiedzi<\/a> zwanej <strong>mdev<\/strong> , kt\u00f3ra reprezentuje \u015bredni\u0105 warto\u015b\u0107 dom\u00f3w w r\u00f3\u017cnych obwodach spisowych wok\u00f3\u0142 Bostonu.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\"><span style=\"color: #008080;\">#load the data\n<\/span>data = MASS::Boston\n\n<span style=\"color: #008080;\">#view the structure of the data\n<\/span>str(data) \n\n'data.frame': 506 obs. of 14 variables:\n $ crim: num 0.00632 0.02731 0.02729 0.03237 0.06905 ...\n $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...\n $ indus: num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...\n $chas: int 0 0 0 0 0 0 0 0 0 0 ...\n $ nox: num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...\n $rm: num 6.58 6.42 7.18 7 7.15 ...\n $ age: num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...\n $ dis: num 4.09 4.97 4.97 6.06 6.06 ...\n $rad: int 1 2 2 3 3 3 5 5 5 5 ...\n $ tax: num 296 242 242 222 222 222 311 311 311 311 ...\n $ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...\n $ black: num 397 397 393 395 397 ...\n $ lstat: num 4.98 9.14 4.03 2.94 5.33 ...\n $ medv: num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...\n<\/span><\/span><\/strong><\/pre>\n<p> <span style=\"color: #000000;\">Widzimy, \u017ce zbi\u00f3r danych zawiera \u0142\u0105cznie 506 <a href=\"https:\/\/statorials.org\/pl\/obserwacja-w-statystyce\/\" target=\"_blank\" rel=\"noopener noreferrer\">obserwacji<\/a> i 14 zmiennych.<\/span><\/p>\n<h3> <span style=\"color: #000000;\"><strong>Krok 3: Przygotuj dane<\/strong><\/span><\/h3>\n<p> <span style=\"color: 
#000000;\">Nast\u0119pnie u\u017cyjemy funkcji <strong>createDataPartition()<\/strong> z pakietu caret, aby podzieli\u0107 oryginalny zbi\u00f3r danych na zbi\u00f3r ucz\u0105cy i testowy.<\/span><\/p>\n<p> <span style=\"color: #000000;\">W tym przyk\u0142adzie zdecydujemy si\u0119 u\u017cy\u0107 80% oryginalnego zbioru danych jako cz\u0119\u015bci zbioru szkoleniowego.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Nale\u017cy pami\u0119ta\u0107, \u017ce pakiet xgboost r\u00f3wnie\u017c wykorzystuje dane macierzowe, wi\u0119c u\u017cyjemy funkcji <strong>data.matrix()<\/strong> do przechowywania naszych zmiennych predykcyjnych.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\"><span style=\"color: #008080;\">#make this example reproducible\n<\/span>set.seed(0)\n\n<span style=\"color: #008080;\">#split into training (80%) and testing set (20%)\n<\/span>parts = createDataPartition(data$medv, p = <span style=\"color: #008000;\">.8<\/span> , list = <span style=\"color: #008000;\">F<\/span> )\ntrain = data[parts, ]\ntest = data[-parts, ]\n\n<span style=\"color: #008080;\">#define predictor and response variables in training set\n<\/span>train_x = data. <span style=\"color: #3366ff;\">matrix<\/span> (train[, -13])\ntrain_y = train[,13]\n\n<span style=\"color: #008080;\">#define predictor and response variables in testing set\n<\/span>test_x = data. <span style=\"color: #3366ff;\">matrix<\/span> (test[, -13])\ntest_y = test[, 13]\n\n<span style=\"color: #008080;\">#define final training and testing sets\n<\/span>xgb_train = xgb. <span style=\"color: #3366ff;\">DMatrix<\/span> (data = train_x, label = train_y)\nxgb_test = xgb. 
<span style=\"color: #3366ff;\">DMatrix<\/span> (data = test_x, label = test_y)\n<\/span><\/span><\/strong><\/pre>\n<h3> <span style=\"color: #000000;\"><strong>Krok 4: Dostosuj model<\/strong><\/span><\/h3>\n<p> <span style=\"color: #000000;\">Nast\u0119pnie dostroimy model XGBoost za pomoc\u0105 funkcji <strong>xgb.train()<\/strong> , kt\u00f3ra wy\u015bwietla RMSE trenowania i testowania (\u015bredni b\u0142\u0105d kwadratowy) dla ka\u017cdego cyklu wzmacniania.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Nale\u017cy pami\u0119ta\u0107, \u017ce w tym przyk\u0142adzie zdecydowali\u015bmy si\u0119 u\u017cy\u0107 70 rund, ale w przypadku znacznie wi\u0119kszych zbior\u00f3w danych nierzadko u\u017cywa si\u0119 setek, a nawet tysi\u0119cy rund. Pami\u0119taj tylko, \u017ce im wi\u0119cej rund, tym d\u0142u\u017cszy czas dzia\u0142ania.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Nale\u017cy r\u00f3wnie\u017c pami\u0119ta\u0107, \u017ce argument <strong>max. Degree<\/strong> okre\u015bla g\u0142\u0119boko\u015b\u0107 rozwoju poszczeg\u00f3lnych drzew decyzyjnych. Zwykle wybieramy t\u0119 liczb\u0119 do\u015b\u0107 nisk\u0105, np. 2 lub 3, aby hodowa\u0107 mniejsze drzewa. 
This approach has been shown to tend to produce more accurate models.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\">#define watchlist\nwatchlist = list(train=xgb_train, test=xgb_test)\n\n#fit XGBoost model and display training and testing data at each round\nmodel = xgb.train(data = xgb_train, max.depth = 3, watchlist=watchlist, nrounds = 70)\n\n[1] train-rmse:10.167523 test-rmse:10.839775 \n[2] train-rmse:7.521903 test-rmse:8.329679 \n[3] train-rmse:5.702393 test-rmse:6.691415 \n[4] train-rmse:4.463687 test-rmse:5.631310 \n[5] train-rmse:3.666278 test-rmse:4.878750 \n[6] train-rmse:3.159799 test-rmse:4.485698 \n[7] train-rmse:2.855133 test-rmse:4.230533 \n[8] train-rmse:2.603367 test-rmse:4.099881 \n[9] train-rmse:2.445718 test-rmse:4.084360 \n[10] train-rmse:2.327318 test-rmse:3.993562 \n[11] train-rmse:2.267629 test-rmse:3.944454 \n[12] train-rmse:2.189527 test-rmse:3.930808 \n[13] train-rmse:2.119130 test-rmse:3.865036 \n[14] train-rmse:2.086450 test-rmse:3.875088 \n[15] train-rmse:2.038356 test-rmse:3.881442 \n[16] train-rmse:2.010995 test-rmse:3.883322 \n[17] train-rmse:1.949505 test-rmse:3.844382 \n[18] train-rmse:1.911711 test-rmse:3.809830 \n[19] train-rmse:1.888488 test-rmse:3.809830 \n[20] train-rmse:1.832443 test-rmse:3.758502 \n[21] train-rmse:1.816150 test-rmse:3.770216 \n[22] train-rmse:1.801369 test-rmse:3.770474 \n[23] train-rmse:1.788891 test-rmse:3.766608 \n[24] train-rmse:1.751795 test-rmse:3.749583 \n[25] train-rmse:1.713306 test-rmse:3.720173 \n[26] train-rmse:1.672227 test-rmse:3.675086 \n[27] train-rmse:1.648323 test-rmse:3.675977 \n[28] train-rmse:1.609927 test-rmse:3.745338 \n[29] train-rmse:1.594891 test-rmse:3.756049 \n[30] train-rmse:1.578573 test-rmse:3.760104 
\n[31] train-rmse:1.559810 test-rmse:3.727940 \n[32] train-rmse:1.547852 test-rmse:3.731702 \n[33] train-rmse:1.534589 test-rmse:3.729761 \n[34] train-rmse:1.520566 test-rmse:3.742681 \n[35] train-rmse:1.495155 test-rmse:3.732993 \n[36] train-rmse:1.467939 test-rmse:3.738329 \n[37] train-rmse:1.446343 test-rmse:3.713748 \n[38] train-rmse:1.435368 test-rmse:3.709469 \n[39] train-rmse:1.401356 test-rmse:3.710637 \n[40] train-rmse:1.390318 test-rmse:3.709461 \n[41] train-rmse:1.372635 test-rmse:3.708049 \n[42] train-rmse:1.367977 test-rmse:3.707429 \n[43] train-rmse:1.359531 test-rmse:3.711663 \n[44] train-rmse:1.335347 test-rmse:3.709101 \n[45] train-rmse:1.331750 test-rmse:3.712490 \n[46] train-rmse:1.313087 test-rmse:3.722981 \n[47] train-rmse:1.284392 test-rmse:3.712840 \n[48] train-rmse:1.257714 test-rmse:3.697482 \n[49] train-rmse:1.248218 test-rmse:3.700167 \n[50] train-rmse:1.243377 test-rmse:3.697914 \n[51] train-rmse:1.231956 test-rmse:3.695797 \n[52] train-rmse:1.219341 test-rmse:3.696277 \n[53] train-rmse:1.207413 test-rmse:3.691465 \n[54] train-rmse:1.197197 test-rmse:3.692108 \n[55] train-rmse:1.171748 test-rmse:3.683577 \n[56] train-rmse:1.156332 test-rmse:3.674458 \n[57] train-rmse:1.147686 test-rmse:3.686367 \n[58] train-rmse:1.143572 test-rmse:3.686375 \n[59] train-rmse:1.129780 test-rmse:3.679791 \n[60] train-rmse:1.111257 test-rmse:3.679022 \n[61] train-rmse:1.093541 test-rmse:3.699670 \n[62] train-rmse:1.083934 test-rmse:3.708187 \n[63] train-rmse:1.067109 test-rmse:3.712538 \n[64] train-rmse:1.053887 test-rmse:3.722480 \n[65] train-rmse:1.042127 test-rmse:3.720720 \n[66] train-rmse:1.031617 test-rmse:3.721224 \n[67] train-rmse:1.016274 test-rmse:3.699549 \n[68] train-rmse:1.008184 test-rmse:3.709522 \n[69] train-rmse:0.999220 test-rmse:3.708000 \n[70] train-rmse:0.985907 test-rmse:3.705192 \n<\/span><\/span><\/strong><\/pre>\n<p> <span style=\"color: #000000;\">From the output we can see that the minimum testing RMSE is achieved at <strong>56<\/strong> rounds. Beyond this point, the testing RMSE begins to increase, which is a sign that we're <a href=\"https:\/\/statorials.org\/pl\/nadmierne-dopasowanie-uczenia-maszynowego\/\" target=\"_blank\" rel=\"noopener noreferrer\">overfitting the training data<\/a>.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Thus, we'll define our final XGBoost model to use 56 rounds:<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\">#define final model\nfinal = xgboost(data = xgb_train, max.depth = 3, nrounds = 56, verbose = 0)\n<\/pre>\n<p> <span style=\"color: #000000;\">Note: the <strong>verbose = 0<\/strong> argument tells R not to display the training and testing error for each round.<\/span><\/p>\n<h3> <span style=\"color: #000000;\"><strong>Step 5: Use the Model to Make Predictions<\/strong><\/span><\/h3>\n<p> <span style=\"color: #000000;\">Lastly, we can use the final boosted model to make predictions about the median house value of Boston homes in the testing set.<\/span><\/p>\n<p> <span style=\"color: #000000;\">We'll then compute the following accuracy metrics for the model:<\/span><\/p>\n<ul>\n<li> <span style=\"color: #000000;\"><strong>MSE:<\/strong> mean squared error<\/span><\/li>\n<li> <span style=\"color: #000000;\"><strong>MAE:<\/strong> mean absolute error<\/span><\/li>\n<li> <span style=\"color: #000000;\"><strong>RMSE:<\/strong> root mean squared error<\/span><\/li>\n<\/ul>\n<pre style=\"background-color: #ececec; font-size: 15px;\">#use the final model to make predictions on the test data\npred_y = predict(final, xgb_test)\n\nmean((test_y - pred_y)^2) #mse\ncaret::MAE(test_y, pred_y) #mae\ncaret::RMSE(test_y, pred_y) #rmse\n\n[1] 13.50164\n[1] 2.409426\n[1] 3.674457\n<\/pre>\n<p> <span style=\"color: #000000;\">The RMSE turns out to be <strong>3.674457<\/strong>. This represents the average difference between the predictions made for the median house values and the actual house values observed in the testing set.<\/span><\/p>\n<p> <span style=\"color: #000000;\">If we'd like, we can compare this RMSE to other models such as <a href=\"https:\/\/statorials.org\/pl\/wielokrotna-regresja-liniowa-r\/\" target=\"_blank\" rel=\"noopener noreferrer\">multiple linear regression<\/a>, <a href=\"https:\/\/statorials.org\/pl\/regresja-grzebienia-w-r\/\" target=\"_blank\" rel=\"noopener noreferrer\">ridge regression<\/a>, <a href=\"https:\/\/statorials.org\/pl\/regresja-g\u0142ownych-sk\u0142adnikow-w-r\/\" target=\"_blank\" rel=\"noopener noreferrer\">principal components regression<\/a>, etc. to see which model produces the most accurate predictions.<\/span><\/p>\n<p> <span style=\"color: #000000;\">You can find the complete R code used in this example <a href=\"https:\/\/github.com\/Statorials\/R-Guides\/blob\/main\/xgboost.R\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a>.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Boosting is a machine learning technique that has been shown to produce models with high predictive accuracy. One of the most common ways to implement boosting in practice is to use XGBoost, short for \u201cextreme gradient boosting\u201d. 
This tutorial provides a step-by-step example of how to use XGBoost to fit a boosted model in R. Step 1: Load the Necessary Packages [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-1233","post","type-post","status-publish","format-standard","hentry","category-przewodnik"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v21.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>XGBoost in R: Step-by-Step Example<\/title>\n<meta name=\"description\" content=\"This tutorial provides a step-by-step example of how to run XGBoost in R, a popular machine learning technique.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/statorials.org\/pl\/xgboost-w-r\/\" \/>\n<meta property=\"og:locale\" content=\"pl_PL\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"XGBoost in R: Step-by-Step Example\" \/>\n<meta property=\"og:description\" content=\"This tutorial provides a step-by-step example of how to run XGBoost in R, a popular machine learning technique.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/statorials.org\/pl\/xgboost-w-r\/\" \/>\n<meta property=\"og:site_name\" content=\"Statorials\" \/>\n<meta property=\"article:published_time\" content=\"2023-07-27T04:52:33+00:00\" \/>\n<meta name=\"author\" content=\"Benjamin Anderson\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Benjamin Anderson\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta 
name=\"twitter:data2\" content=\"4 minuty\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/statorials.org\/pl\/xgboost-w-r\/\",\"url\":\"https:\/\/statorials.org\/pl\/xgboost-w-r\/\",\"name\":\"XGBoost w R: przyk\u0142ad krok po kroku\",\"isPartOf\":{\"@id\":\"https:\/\/statorials.org\/pl\/#website\"},\"datePublished\":\"2023-07-27T04:52:33+00:00\",\"dateModified\":\"2023-07-27T04:52:33+00:00\",\"author\":{\"@id\":\"https:\/\/statorials.org\/pl\/#\/schema\/person\/6484727a4612df3e69f016c3129c6965\"},\"description\":\"Ten samouczek zawiera przyk\u0142adowy krok po kroku spos\u00f3b uruchomienia XGBoost w R, popularnej technice uczenia maszynowego.\",\"breadcrumb\":{\"@id\":\"https:\/\/statorials.org\/pl\/xgboost-w-r\/#breadcrumb\"},\"inLanguage\":\"pl-PL\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/statorials.org\/pl\/xgboost-w-r\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/statorials.org\/pl\/xgboost-w-r\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Dom\",\"item\":\"https:\/\/statorials.org\/pl\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Xgboost w r: przyk\u0142ad krok po kroku\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/statorials.org\/pl\/#website\",\"url\":\"https:\/\/statorials.org\/pl\/\",\"name\":\"Statorials\",\"description\":\"Tw\u00f3j przewodnik po kompetencjach statystycznych!\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/statorials.org\/pl\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"pl-PL\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/statorials.org\/pl\/#\/schema\/person\/6484727a4612df3e69f016c3129c6965\",\"name\":\"Benjamin 
Anderson\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pl-PL\",\"@id\":\"https:\/\/statorials.org\/pl\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/statorials.org\/pl\/wp-content\/uploads\/2023\/11\/Benjamin-Anderson-96x96.jpg\",\"contentUrl\":\"https:\/\/statorials.org\/pl\/wp-content\/uploads\/2023\/11\/Benjamin-Anderson-96x96.jpg\",\"caption\":\"Benjamin Anderson\"},\"description\":\"Cze\u015b\u0107, jestem Benjamin i jestem emerytowanym profesorem statystyki, kt\u00f3ry zosta\u0142 oddanym nauczycielem Statorials. Dzi\u0119ki bogatemu do\u015bwiadczeniu i wiedzy specjalistycznej w dziedzinie statystyki ch\u0119tnie dziel\u0119 si\u0119 swoj\u0105 wiedz\u0105, aby wzmocni\u0107 pozycj\u0119 uczni\u00f3w za po\u015brednictwem Statorials. Wiedzie\u0107 wi\u0119cej\",\"sameAs\":[\"https:\/\/statorials.org\/pl\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"XGBoost w R: przyk\u0142ad krok po kroku","description":"Ten samouczek zawiera przyk\u0142adowy krok po kroku spos\u00f3b uruchomienia XGBoost w R, popularnej technice uczenia maszynowego.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/statorials.org\/pl\/xgboost-w-r\/","og_locale":"pl_PL","og_type":"article","og_title":"XGBoost w R: przyk\u0142ad krok po kroku","og_description":"Ten samouczek zawiera przyk\u0142adowy krok po kroku spos\u00f3b uruchomienia XGBoost w R, popularnej technice uczenia maszynowego.","og_url":"https:\/\/statorials.org\/pl\/xgboost-w-r\/","og_site_name":"Statorials","article_published_time":"2023-07-27T04:52:33+00:00","author":"Benjamin Anderson","twitter_card":"summary_large_image","twitter_misc":{"Napisane przez":"Benjamin Anderson","Szacowany czas czytania":"4 
minuty"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/statorials.org\/pl\/xgboost-w-r\/","url":"https:\/\/statorials.org\/pl\/xgboost-w-r\/","name":"XGBoost w R: przyk\u0142ad krok po kroku","isPartOf":{"@id":"https:\/\/statorials.org\/pl\/#website"},"datePublished":"2023-07-27T04:52:33+00:00","dateModified":"2023-07-27T04:52:33+00:00","author":{"@id":"https:\/\/statorials.org\/pl\/#\/schema\/person\/6484727a4612df3e69f016c3129c6965"},"description":"Ten samouczek zawiera przyk\u0142adowy krok po kroku spos\u00f3b uruchomienia XGBoost w R, popularnej technice uczenia maszynowego.","breadcrumb":{"@id":"https:\/\/statorials.org\/pl\/xgboost-w-r\/#breadcrumb"},"inLanguage":"pl-PL","potentialAction":[{"@type":"ReadAction","target":["https:\/\/statorials.org\/pl\/xgboost-w-r\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/statorials.org\/pl\/xgboost-w-r\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Dom","item":"https:\/\/statorials.org\/pl\/"},{"@type":"ListItem","position":2,"name":"Xgboost w r: przyk\u0142ad krok po kroku"}]},{"@type":"WebSite","@id":"https:\/\/statorials.org\/pl\/#website","url":"https:\/\/statorials.org\/pl\/","name":"Statorials","description":"Tw\u00f3j przewodnik po kompetencjach statystycznych!","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/statorials.org\/pl\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"pl-PL"},{"@type":"Person","@id":"https:\/\/statorials.org\/pl\/#\/schema\/person\/6484727a4612df3e69f016c3129c6965","name":"Benjamin Anderson","image":{"@type":"ImageObject","inLanguage":"pl-PL","@id":"https:\/\/statorials.org\/pl\/#\/schema\/person\/image\/","url":"https:\/\/statorials.org\/pl\/wp-content\/uploads\/2023\/11\/Benjamin-Anderson-96x96.jpg","contentUrl":"https:\/\/statorials.org\/pl\/wp-content\/uploads\/2023\/11\/Benjamin-Anderson-96x96.jpg","caption":"Benjamin 
Anderson"},"description":"Cze\u015b\u0107, jestem Benjamin i jestem emerytowanym profesorem statystyki, kt\u00f3ry zosta\u0142 oddanym nauczycielem Statorials. Dzi\u0119ki bogatemu do\u015bwiadczeniu i wiedzy specjalistycznej w dziedzinie statystyki ch\u0119tnie dziel\u0119 si\u0119 swoj\u0105 wiedz\u0105, aby wzmocni\u0107 pozycj\u0119 uczni\u00f3w za po\u015brednictwem Statorials. Wiedzie\u0107 wi\u0119cej","sameAs":["https:\/\/statorials.org\/pl"]}]}},"yoast_meta":{"yoast_wpseo_title":"","yoast_wpseo_metadesc":"","yoast_wpseo_canonical":""},"_links":{"self":[{"href":"https:\/\/statorials.org\/pl\/wp-json\/wp\/v2\/posts\/1233","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/statorials.org\/pl\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/statorials.org\/pl\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/statorials.org\/pl\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/statorials.org\/pl\/wp-json\/wp\/v2\/comments?post=1233"}],"version-history":[{"count":0,"href":"https:\/\/statorials.org\/pl\/wp-json\/wp\/v2\/posts\/1233\/revisions"}],"wp:attachment":[{"href":"https:\/\/statorials.org\/pl\/wp-json\/wp\/v2\/media?parent=1233"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/statorials.org\/pl\/wp-json\/wp\/v2\/categories?post=1233"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/statorials.org\/pl\/wp-json\/wp\/v2\/tags?post=1233"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}