{"id":1235,"date":"2023-07-27T04:52:33","date_gmt":"2023-07-27T04:52:33","guid":{"rendered":"https:\/\/statorials.org\/id\/xgboost-di-r\/"},"modified":"2023-07-27T04:52:33","modified_gmt":"2023-07-27T04:52:33","slug":"xgboost-di-r","status":"publish","type":"post","link":"https:\/\/statorials.org\/id\/xgboost-di-r\/","title":{"rendered":"XGBoost in R: a step-by-step example"},"content":{"rendered":"<p><span style=\"color: #000000;\"><a href=\"https:\/\/statorials.org\/id\/meningkatkan-pembelajaran-mesin\/\" target=\"_blank\" rel=\"noopener noreferrer\">Boosting<\/a> is a machine learning technique that has been shown to produce models with high predictive accuracy.<\/span><\/p>\n<p> <span style=\"color: #000000;\">One of the most common ways to implement boosting in practice is with <strong>XGBoost<\/strong>, short for &#8220;extreme gradient boosting&#8221;.<\/span><\/p>\n<p> <span style=\"color: #000000;\">This tutorial provides a step-by-step example of how to use XGBoost to fit a boosted model in R.<\/span><\/p>\n<h3> <strong><span style=\"color: #000000;\">Step 1: Load the necessary packages<\/span><\/strong><\/h3>\n<p> <span style=\"color: #000000;\">First, we will load the necessary libraries.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\">library<\/span>(xgboost) <span style=\"color: #008080;\">#for fitting the xgboost model<\/span>\n<span style=\"color: #993300;\">library<\/span>(caret) <span style=\"color: #008080;\">#for general data preparation and model fitting<\/span>\n<\/strong><\/pre>\n<h3> <span style=\"color: #000000;\"><strong>Step 2: Load the data<\/strong><\/span><\/h3>\n<p> <span style=\"color: #000000;\">For this example, we will fit a boosted regression model to the <strong>Boston<\/strong> dataset from the <strong>MASS<\/strong> package.<\/span><\/p>\n<p> 
<span style=\"color: #000000;\">This dataset contains 13 predictor variables that we will use to predict a <a href=\"https:\/\/statorials.org\/id\/variabel-tanggapan-penjelas\/\" target=\"_blank\" rel=\"noopener noreferrer\">response variable<\/a> called <strong>medv<\/strong>, which represents the median home value in different census tracts around Boston.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\"><span style=\"color: #008080;\">#load the data\n<\/span>data = MASS::Boston\n\n<span style=\"color: #008080;\">#view the structure of the data\n<\/span>str(data)\n\n'data.frame': 506 obs. of 14 variables:\n $ crim   : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...\n $ zn     : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...\n $ indus  : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...\n $ chas   : int 0 0 0 0 0 0 0 0 0 0 ...\n $ nox    : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...\n $ rm     : num 6.58 6.42 7.18 7 7.15 ...\n $ age    : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...\n $ dis    : num 4.09 4.97 4.97 6.06 6.06 ...\n $ rad    : int 1 2 2 3 3 3 5 5 5 5 ...\n $ tax    : num 296 242 242 222 222 222 311 311 311 311 ...\n $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...\n $ black  : num 397 397 393 395 397 ...\n $ lstat  : num 4.98 9.14 4.03 2.94 5.33 ...\n $ medv   : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...\n<\/span><\/span><\/strong><\/pre>\n<p> <span style=\"color: #000000;\">We can see that the dataset contains 506 <a href=\"https:\/\/statorials.org\/id\/pengamatan-dalam-statistik\/\" target=\"_blank\" rel=\"noopener noreferrer\">observations<\/a> and 14 variables in total.<\/span><\/p>\n<h3> <span style=\"color: #000000;\"><strong>Step 3: Prepare the data<\/strong><\/span><\/h3>\n<p> <span style=\"color: #000000;\">Next, we will use the <strong>createDataPartition()<\/strong> function from the caret package to split the original dataset into a training and testing set.<\/span><\/p>\n<p> <span style=\"color: #000000;\">For this example, we will choose to use 80% of the original dataset as the training set.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Note that the xgboost package works with matrix data, so we will use the <strong>data.matrix()<\/strong> function to hold our predictor variables. Since <strong>medv<\/strong> is the 14th column of the dataset, we drop column 14 to get the predictors and keep it as the response.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\"><span style=\"color: #008080;\">#make this example reproducible\n<\/span>set.seed(0)\n\n<span style=\"color: #008080;\">#split into training (80%) and testing set (20%)\n<\/span>parts = createDataPartition(data$medv, p = <span style=\"color: #008000;\">.8<\/span>, list = <span style=\"color: #008000;\">F<\/span>)\ntrain = data[parts, ]\ntest = data[-parts, ]\n\n<span style=\"color: #008080;\">#define predictor and response variables in training set\n<\/span>train_x = data.<span style=\"color: #3366ff;\">matrix<\/span>(train[, -14])\ntrain_y = train[, 14]\n\n<span style=\"color: #008080;\">#define predictor and response variables in testing set\n<\/span>test_x = data.<span style=\"color: #3366ff;\">matrix<\/span>(test[, -14])\ntest_y = test[, 14]\n\n<span style=\"color: #008080;\">#define final training and testing sets\n<\/span>xgb_train = xgb.<span style=\"color: #3366ff;\">DMatrix<\/span>(data = train_x, label = train_y)\nxgb_test = xgb.<span style=\"color: #3366ff;\">DMatrix<\/span>(data = test_x, label = test_y)\n<\/span><\/span><\/strong><\/pre>\n<h3> <span style=\"color: #000000;\"><strong>Step 4: Fit the model<\/strong><\/span><\/h3>\n<p> <span style=\"color: #000000;\">Next, we will fit the XGBoost model using the <strong>xgb.train()<\/strong> function, which displays the training and testing RMSE (root mean squared error) for each boosting round.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Note that we chose to use 70 rounds for this example, but for much larger datasets it is not uncommon to use hundreds or even thousands of rounds. Keep in mind that the more rounds, the longer the run time.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Also note that the <strong>max.depth<\/strong> argument specifies how deep each individual decision tree may grow. We typically choose a fairly low number, such as 2 or 3, so that smaller trees are grown. This approach has been shown to tend to produce more accurate models.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\"><span style=\"color: #008080;\">#define watchlist\n<\/span>watchlist = list(train=xgb_train, test=xgb_test)\n\n<span style=\"color: #008080;\">#fit XGBoost model and display training and testing data at each round\n<\/span>model = xgb.train(data = xgb_train, max.depth = <span style=\"color: #008000;\">3<\/span>, watchlist=watchlist, nrounds = <span style=\"color: #008000;\">70<\/span>)\n\n[1] train-rmse:10.167523 test-rmse:10.839775 \n[2] train-rmse:7.521903 test-rmse:8.329679 \n[3] train-rmse:5.702393 test-rmse:6.691415 \n[4] train-rmse:4.463687 test-rmse:5.631310 \n[5] train-rmse:3.666278 test-rmse:4.878750 \n[6] train-rmse:3.159799 test-rmse:4.485698 \n[7] train-rmse:2.855133 test-rmse:4.230533 \n[8] train-rmse:2.603367 test-rmse:4.099881 \n[9] train-rmse:2.445718 test-rmse:4.084360 \n[10] train-rmse:2.327318 test-rmse:3.993562 \n[11] train-rmse:2.267629 test-rmse:3.944454 \n[12] train-rmse:2.189527 test-rmse:3.930808 \n[13] train-rmse:2.119130 test-rmse:3.865036 \n[14] train-rmse:2.086450 test-rmse:3.875088 \n[15] train-rmse:2.038356 test-rmse:3.881442 \n[16] train-rmse:2.010995 test-rmse:3.883322 \n[17] train-rmse:1.949505 test-rmse:3.844382 \n[18] train-rmse:1.911711 test-rmse:3.809830 \n[19] train-rmse:1.888488 test-rmse:3.809830 \n[20] train-rmse:1.832443 test-rmse:3.758502 \n[21] train-rmse:1.816150 test-rmse:3.770216 \n[22] train-rmse:1.801369 test-rmse:3.770474 \n[23] train-rmse:1.788891 test-rmse:3.766608 \n[24] train-rmse:1.751795 test-rmse:3.749583 \n[25] train-rmse:1.713306 test-rmse:3.720173 \n[26] train-rmse:1.672227 test-rmse:3.675086 \n[27] train-rmse:1.648323 test-rmse:3.675977 \n[28] train-rmse:1.609927 test-rmse:3.745338 \n[29] train-rmse:1.594891 test-rmse:3.756049 \n[30] train-rmse:1.578573 test-rmse:3.760104 \n[31] 
train-rmse:1.559810 test-rmse:3.727940 \n[32] train-rmse:1.547852 test-rmse:3.731702 \n[33] train-rmse:1.534589 test-rmse:3.729761 \n[34] train-rmse:1.520566 test-rmse:3.742681 \n[35] train-rmse:1.495155 test-rmse:3.732993 \n[36] train-rmse:1.467939 test-rmse:3.738329 \n[37] train-rmse:1.446343 test-rmse:3.713748 \n[38] train-rmse:1.435368 test-rmse:3.709469 \n[39] train-rmse:1.401356 test-rmse:3.710637 \n[40] train-rmse:1.390318 test-rmse:3.709461 \n[41] train-rmse:1.372635 test-rmse:3.708049 \n[42] train-rmse:1.367977 test-rmse:3.707429 \n[43] train-rmse:1.359531 test-rmse:3.711663 \n[44] train-rmse:1.335347 test-rmse:3.709101 \n[45] train-rmse:1.331750 test-rmse:3.712490 \n[46] train-rmse:1.313087 test-rmse:3.722981 \n[47] train-rmse:1.284392 test-rmse:3.712840 \n[48] train-rmse:1.257714 test-rmse:3.697482 \n[49] train-rmse:1.248218 test-rmse:3.700167 \n[50] train-rmse:1.243377 test-rmse:3.697914 \n[51] train-rmse:1.231956 test-rmse:3.695797 \n[52] train-rmse:1.219341 test-rmse:3.696277 \n[53] train-rmse:1.207413 test-rmse:3.691465 \n[54] train-rmse:1.197197 test-rmse:3.692108 \n[55] train-rmse:1.171748 test-rmse:3.683577 \n[56] train-rmse:1.156332 test-rmse:3.674458 \n[57] train-rmse:1.147686 test-rmse:3.686367 \n[58] train-rmse:1.143572 test-rmse:3.686375 \n[59] train-rmse:1.129780 test-rmse:3.679791 \n[60] train-rmse:1.111257 test-rmse:3.679022 \n[61] train-rmse:1.093541 test-rmse:3.699670 \n[62] train-rmse:1.083934 test-rmse:3.708187 \n[63] train-rmse:1.067109 test-rmse:3.712538 \n[64] train-rmse:1.053887 test-rmse:3.722480 \n[65] train-rmse:1.042127 test-rmse:3.720720 \n[66] train-rmse:1.031617 test-rmse:3.721224 \n[67] train-rmse:1.016274 test-rmse:3.699549 \n[68] train-rmse:1.008184 test-rmse:3.709522 \n[69] train-rmse:0.999220 test-rmse:3.708000 \n[70] train-rmse:0.985907 test-rmse:3.705192 \n<\/span><\/span><\/strong><\/pre>\n<p> <span style=\"color: #000000;\">From the output we can see that the minimum test RMSE is achieved at <strong>56<\/strong> rounds. Beyond this point, the test RMSE begins to increase, a sign that we are <a href=\"https:\/\/statorials.org\/id\/pembelajaran-mesin-yang-berlebihan\/\" target=\"_blank\" rel=\"noopener noreferrer\">overfitting the training data<\/a>.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Thus, we will define our final XGBoost model to use 56 rounds:<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\"><span style=\"color: #008080;\">#define final model\n<\/span>final = xgboost(data = xgb_train, max.depth = <span style=\"color: #008000;\">3<\/span>, nrounds = <span style=\"color: #008000;\">56<\/span>, verbose = <span style=\"color: #008000;\">0<\/span>)<\/span><\/span><\/strong><\/pre>\n<p> <span style=\"color: #000000;\">Note: the <strong>verbose=0<\/strong> argument tells R not to display the training and testing error for each round.<\/span><\/p>\n<h3> <span style=\"color: #000000;\"><strong>Step 5: Use the model to make predictions<\/strong><\/span><\/h3>\n<p> <span style=\"color: #000000;\">Lastly, we can use the final boosted model to make predictions about the median home value of Boston homes in the testing set.<\/span><\/p>\n<p> <span style=\"color: #000000;\">We will then calculate the following accuracy metrics for the model:<\/span><\/p>\n<ul>\n<li> <span style=\"color: #000000;\"><strong>MSE:<\/strong> mean squared error<\/span><\/li>\n<li> <span style=\"color: #000000;\"><strong>MAE:<\/strong> mean absolute error<\/span><\/li>\n<li> <span style=\"color: #000000;\"><strong>RMSE:<\/strong> root mean squared error<\/span><\/li>\n<\/ul>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\"><span style=\"color: #008080;\">#use the final model to make predictions on the test set\n<\/span><span style=\"color: #000000;\">pred_y = predict(final, xgb_test)<\/span>\n\n<span style=\"color: #000000;\">mean((test_y - pred_y)^2)<\/span> #mse\n<span style=\"color: #000000;\">caret::MAE(test_y, pred_y)<\/span> #mae\n<span style=\"color: #000000;\">caret::RMSE(test_y, pred_y)<\/span> #rmse\n\n[1] 13.50164\n[1] 2.409426\n[1] 3.674457<\/span><\/span><\/strong><\/pre>\n<p> <span style=\"color: #000000;\">The root mean squared error turns out to be <strong>3.674457<\/strong>. This represents the average difference between the predictions made for the median home values and the actual home values observed in the testing set.<\/span><\/p>\n<p> <span style=\"color: #000000;\">If we want, we can compare this RMSE to other models like <a href=\"https:\/\/statorials.org\/id\/regresi-linier-berganda-r\/\" target=\"_blank\" rel=\"noopener noreferrer\">multiple linear regression<\/a>, <a href=\"https:\/\/statorials.org\/id\/regresi-puncak-di-r\/\" target=\"_blank\" rel=\"noopener noreferrer\">ridge regression<\/a>, <a href=\"https:\/\/statorials.org\/id\/regresi-komponen-utama-di-r\/\" target=\"_blank\" rel=\"noopener noreferrer\">principal components regression<\/a>, etc. to see which model produces the most accurate predictions.<\/span><\/p>\n<p> <span style=\"color: #000000;\">You can find the complete R code used in this example <a href=\"https:\/\/github.com\/Statorials\/R-Guides\/blob\/main\/xgboost.R\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a>.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Boosting is a machine learning technique that has been shown to produce models with high predictive accuracy. One of the most common ways to implement boosting in practice is with XGBoost, short for &#8220;extreme gradient boosting&#8221;. This tutorial provides a step-by-step example of how to use XGBoost to fit a boosted model in R. 
Step 1: Load [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v21.5 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>XGBoost in R: a step-by-step example<\/title>\n<meta name=\"description\" content=\"This tutorial provides a step-by-step example of how to run XGBoost in R, a popular machine learning technique.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/statorials.org\/id\/xgboost-di-r\/\" \/>\n<meta property=\"og:locale\" content=\"id_ID\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"XGBoost in R: a step-by-step example\" \/>\n<meta property=\"og:description\" content=\"This tutorial provides a step-by-step example of how to run XGBoost in R, a popular machine learning technique.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/statorials.org\/id\/xgboost-di-r\/\" \/>\n<meta property=\"og:site_name\" content=\"Statorials\" \/>\n<meta property=\"article:published_time\" content=\"2023-07-27T04:52:33+00:00\" \/>\n<meta name=\"author\" content=\"Benjamin Anderson\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Benjamin Anderson\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/statorials.org\/id\/xgboost-di-r\/\",\"url\":\"https:\/\/statorials.org\/id\/xgboost-di-r\/\",\"name\":\"XGBoost in R: a step-by-step example\",\"isPartOf\":{\"@id\":\"https:\/\/statorials.org\/id\/#website\"},\"datePublished\":\"2023-07-27T04:52:33+00:00\",\"dateModified\":\"2023-07-27T04:52:33+00:00\",\"author\":{\"@id\":\"https:\/\/statorials.org\/id\/#\/schema\/person\/3d17a1160dd2d052b7c78e502cb9ec81\"},\"description\":\"This tutorial provides a step-by-step example of how to run XGBoost in R, a popular machine learning technique.\",\"breadcrumb\":{\"@id\":\"https:\/\/statorials.org\/id\/xgboost-di-r\/#breadcrumb\"},\"inLanguage\":\"id\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/statorials.org\/id\/xgboost-di-r\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/statorials.org\/id\/xgboost-di-r\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/statorials.org\/id\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"XGBoost in R: a step-by-step example\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/statorials.org\/id\/#website\",\"url\":\"https:\/\/statorials.org\/id\/\",\"name\":\"Statorials\",\"description\":\"Your guide to statistical competence!\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/statorials.org\/id\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"id\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/statorials.org\/id\/#\/schema\/person\/3d17a1160dd2d052b7c78e502cb9ec81\",\"name\":\"Benjamin Anderson\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"id\",\"@id\":\"https:\/\/statorials.org\/id\/#\/schema\/person\/image\/\",\"url\":\"http:\/\/statorials.org\/id\/wp-content\/uploads\/2023\/10\/Dr.-Benjamin-Anderson-96x96.jpg\",\"contentUrl\":\"http:\/\/statorials.org\/id\/wp-content\/uploads\/2023\/10\/Dr.-Benjamin-Anderson-96x96.jpg\",\"caption\":\"Benjamin Anderson\"},\"description\":\"Hello, I'm Benjamin, a retired statistics professor turned dedicated Statorials teacher. With extensive experience and expertise in statistics, I want to share my knowledge to empower students through Statorials. Read more\",\"sameAs\":[\"http:\/\/statorials.org\/id\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"XGBoost in R: a step-by-step example","description":"This tutorial provides a step-by-step example of how to run XGBoost in R, a popular machine learning technique.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/statorials.org\/id\/xgboost-di-r\/","og_locale":"id_ID","og_type":"article","og_title":"XGBoost in R: a step-by-step example","og_description":"This tutorial provides a step-by-step example of how to run XGBoost in R, a popular machine learning technique.","og_url":"https:\/\/statorials.org\/id\/xgboost-di-r\/","og_site_name":"Statorials","article_published_time":"2023-07-27T04:52:33+00:00","author":"Benjamin Anderson","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Benjamin Anderson","Estimated reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/statorials.org\/id\/xgboost-di-r\/","url":"https:\/\/statorials.org\/id\/xgboost-di-r\/","name":"XGBoost in R: a step-by-step example","isPartOf":{"@id":"https:\/\/statorials.org\/id\/#website"},"datePublished":"2023-07-27T04:52:33+00:00","dateModified":"2023-07-27T04:52:33+00:00","author":{"@id":"https:\/\/statorials.org\/id\/#\/schema\/person\/3d17a1160dd2d052b7c78e502cb9ec81"},"description":"This tutorial provides a step-by-step example of how to run XGBoost in R, a popular machine learning technique.","breadcrumb":{"@id":"https:\/\/statorials.org\/id\/xgboost-di-r\/#breadcrumb"},"inLanguage":"id","potentialAction":[{"@type":"ReadAction","target":["https:\/\/statorials.org\/id\/xgboost-di-r\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/statorials.org\/id\/xgboost-di-r\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/statorials.org\/id\/"},{"@type":"ListItem","position":2,"name":"XGBoost in R: a step-by-step example"}]},{"@type":"WebSite","@id":"https:\/\/statorials.org\/id\/#website","url":"https:\/\/statorials.org\/id\/","name":"Statorials","description":"Your guide to statistical competence!","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/statorials.org\/id\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"id"},{"@type":"Person","@id":"https:\/\/statorials.org\/id\/#\/schema\/person\/3d17a1160dd2d052b7c78e502cb9ec81","name":"Benjamin Anderson","image":{"@type":"ImageObject","inLanguage":"id","@id":"https:\/\/statorials.org\/id\/#\/schema\/person\/image\/","url":"http:\/\/statorials.org\/id\/wp-content\/uploads\/2023\/10\/Dr.-Benjamin-Anderson-96x96.jpg","contentUrl":"http:\/\/statorials.org\/id\/wp-content\/uploads\/2023\/10\/Dr.-Benjamin-Anderson-96x96.jpg","caption":"Benjamin Anderson"},"description":"Hello, I'm Benjamin, a retired statistics professor turned dedicated Statorials teacher. With extensive experience and expertise in statistics, I want to share my knowledge to empower students through Statorials. Read more","sameAs":["http:\/\/statorials.org\/id"]}]}},"yoast_meta":{"yoast_wpseo_title":"","yoast_wpseo_metadesc":"","yoast_wpseo_canonical":""},"_links":{"self":[{"href":"https:\/\/statorials.org\/id\/wp-json\/wp\/v2\/posts\/1235"}],"collection":[{"href":"https:\/\/statorials.org\/id\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/statorials.org\/id\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/statorials.org\/id\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/statorials.org\/id\/wp-json\/wp\/v2\/comments?post=1235"}],"version-history":[{"count":0,"href":"https:\/\/statorials.org\/id\/wp-json\/wp\/v2\/posts\/1235\/revisions"}],"wp:attachment":[{"href":"https:\/\/statorials.org\/id\/wp-json\/wp\/v2\/media?parent=1235"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/statorials.org\/id\/wp-json\/wp\/v2\/categories?post=1235"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/statorials.org\/id\/wp-json\/wp\/v2\/tags?post=1235"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}