{"id":1234,"date":"2023-07-27T04:52:33","date_gmt":"2023-07-27T04:52:33","guid":{"rendered":"https:\/\/statorials.org\/pt\/xgboost-em-r\/"},"modified":"2023-07-27T04:52:33","modified_gmt":"2023-07-27T04:52:33","slug":"xgboost-em-r","status":"publish","type":"post","link":"https:\/\/statorials.org\/pt\/xgboost-em-r\/","title":{"rendered":"XGBoost in R: A Step-by-Step Example"},"content":{"rendered":"<p><\/p>\n<hr>\n<p><span style=\"color: #000000;\"><a href=\"https:\/\/statorials.org\/pt\/impulsionar-o-aprendizado-de-maquina\/\" target=\"_blank\" rel=\"noopener noreferrer\">Boosting<\/a> is a machine learning technique that has been shown to produce models with high predictive accuracy.<\/span><\/p>\n<p> <span style=\"color: #000000;\">One of the most common ways to implement boosting in practice is to use <strong>XGBoost<\/strong>, short for &#8220;extreme gradient boosting&#8221;.<\/span><\/p>\n<p> <span style=\"color: #000000;\">This tutorial provides a step-by-step example of how to use XGBoost to fit a boosted model in R.<\/span><\/p>\n<h3> <strong><span style=\"color: #000000;\">Step 1: Load the Necessary Packages<\/span><\/strong><\/h3>\n<p> <span style=\"color: #000000;\">First, we will load the necessary libraries.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\">library<\/span>(xgboost) <span style=\"color: #008080;\">#for fitting the xgboost model<\/span>\n<span style=\"color: #993300;\">library<\/span>(caret) <span style=\"color: #008080;\">#for general data preparation and model fitting<\/span>\n<\/strong><\/pre>\n<h3> <span style=\"color: #000000;\"><strong>Step 2: Load the Data<\/strong><\/span><\/h3>\n<p> <span style=\"color: #000000;\">For this example, we will fit a boosted regression model to the <strong>Boston<\/strong> dataset from the <strong>MASS<\/strong> package
.<\/span><\/p>\n<p> <span style=\"color: #000000;\">This dataset contains 13 predictor variables that we will use to predict a <a href=\"https:\/\/statorials.org\/pt\/respostas-explicativas-das-variaveis\/\" target=\"_blank\" rel=\"noopener noreferrer\">response variable<\/a> called <strong>medv<\/strong>, which represents the median value of homes in different census tracts around Boston.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\">#load the data\ndata = MASS::Boston\n\n#view the structure of the data\nstr(data)\n\n'data.frame': 506 obs. of 14 variables:\n $ crim   : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...\n $ zn     : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...\n $ indus  : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...\n $ chas   : int 0 0 0 0 0 0 0 0 0 0 ...\n $ nox    : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...\n $ rm     : num 6.58 6.42 7.18 7 7.15 ...\n $ age    : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...\n $ dis    : num 4.09 4.97 4.97 6.06 6.06 ...\n $ rad    : int 1 2 2 3 3 3 5 5 5 5 ...\n $ tax    : num 296 242 242 222 222 222 311 311 311 311 ...\n $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...\n $ black  : num 397 397 393 395 397 ...\n $ lstat  : num 4.98 9.14 4.03 2.94 5.33 ...\n $ medv   : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...\n<\/span><\/span><\/strong><\/pre>\n<p> <span style=\"color: #000000;\">We can see that the dataset contains 506 <a href=\"https:\/\/statorials.org\/pt\/observacao-em-estatisticas\/\" target=\"_blank\" rel=\"noopener noreferrer\">observations<\/a> and 14 variables in total.<\/span><\/p>\n<h3> <span style=\"color: #000000;\"><strong>Step 3: Prepare the Data<\/strong><\/span><\/h3>\n<p> <span style=\"color: #000000;\">Next, we will use the <strong>createDataPartition()<\/strong> function from the caret package to split the original dataset into a training and testing set.<\/span><\/p>\n<p> <span style=\"color: #000000;\">For this example, we will use 80% of the original dataset as the training set.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Note that the xgboost package also requires matrix data, so we will use the <strong>data.matrix()<\/strong> function to hold our predictor variables.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\">#make this example reproducible\nset.seed(0)\n\n#split into training (80%) and testing set (20%)\nparts = createDataPartition(data$medv, p = .8, list = FALSE)\ntrain = data[parts, ]\ntest = data[-parts, ]\n\n#define predictor and response variables in training set (medv is column 14)\ntrain_x = data.matrix(train[, -14])\ntrain_y = train[, 14]\n\n#define predictor and response variables in testing set\ntest_x = data.matrix(test[, -14])\ntest_y = test[, 14]\n\n#define final training and testing sets\nxgb_train = xgb.DMatrix(data = train_x, label = train_y)\nxgb_test = xgb.
DMatrix(data = test_x, label = test_y)\n<\/span><\/span><\/strong><\/pre>\n<h3> <span style=\"color: #000000;\"><strong>Step 4: Fit the Model<\/strong><\/span><\/h3>\n<p> <span style=\"color: #000000;\">Next, we will fit the XGBoost model using the <strong>xgb.train()<\/strong> function, which displays the training and testing RMSE (root mean squared error) for each boosting round.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Note that we chose to use 70 rounds for this example, but for much larger datasets it is not uncommon to use hundreds or even thousands of rounds. Just keep in mind that the more rounds, the longer the run time.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Also note that the <strong>max.depth<\/strong> argument specifies how deep to grow the individual decision trees. We typically choose this number to be quite low, such as 2 or 3, to grow smaller trees. 
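<\/span><\/p>\n<p> <span style=\"color: #000000;\">As an aside, the number of rounds can also be chosen by cross-validation with the <strong>xgb.cv()<\/strong> function from the xgboost package; a minimal sketch, reusing the <strong>xgb_train<\/strong> object defined above:<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\">#5-fold cross-validation: reports the mean test RMSE for each boosting round\ncv = xgb.cv(data = xgb_train, max.depth = 3, nrounds = 70, nfold = 5, metrics = \"rmse\", verbose = 0)\n\n#round with the lowest mean test RMSE across the folds\nwhich.min(cv$evaluation_log$test_rmse_mean)\n<\/span><\/span><\/strong><\/pre>\n<p> <span style=\"color: #000000;\">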
It has been shown that this approach tends to produce more accurate models.<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\">#define watchlist\nwatchlist = list(train = xgb_train, test = xgb_test)\n\n#fit XGBoost model and display training and testing data at each round\nmodel = xgb.train(data = xgb_train, max.depth = 3, watchlist = watchlist, nrounds = 70)\n\n[1] train-rmse:10.167523 test-rmse:10.839775 \n[2] train-rmse:7.521903 test-rmse:8.329679 \n[3] train-rmse:5.702393 test-rmse:6.691415 \n[4] train-rmse:4.463687 test-rmse:5.631310 \n[5] train-rmse:3.666278 test-rmse:4.878750 \n[6] train-rmse:3.159799 test-rmse:4.485698 \n[7] train-rmse:2.855133 test-rmse:4.230533 \n[8] train-rmse:2.603367 test-rmse:4.099881 \n[9] train-rmse:2.445718 test-rmse:4.084360 \n[10] train-rmse:2.327318 test-rmse:3.993562 \n[11] train-rmse:2.267629 test-rmse:3.944454 \n[12] train-rmse:2.189527 test-rmse:3.930808 \n[13] train-rmse:2.119130 test-rmse:3.865036 \n[14] train-rmse:2.086450 test-rmse:3.875088 \n[15] train-rmse:2.038356 test-rmse:3.881442 \n[16] train-rmse:2.010995 test-rmse:3.883322 \n[17] train-rmse:1.949505 test-rmse:3.844382 \n[18] train-rmse:1.911711 test-rmse:3.809830 \n[19] train-rmse:1.888488 test-rmse:3.809830 \n[20] train-rmse:1.832443 test-rmse:3.758502 \n[21] train-rmse:1.816150 test-rmse:3.770216 \n[22] train-rmse:1.801369 test-rmse:3.770474 \n[23] train-rmse:1.788891 test-rmse:3.766608 \n[24] train-rmse:1.751795 test-rmse:3.749583 \n[25] train-rmse:1.713306 test-rmse:3.720173 \n[26] train-rmse:1.672227 test-rmse:3.675086 \n[27] train-rmse:1.648323 test-rmse:3.675977 \n[28] train-rmse:1.609927 test-rmse:3.745338 \n[29] train-rmse:1.594891 test-rmse:3.756049 \n[30] train-rmse:1.578573 test-rmse:3.760104 \n[31] 
train-rmse:1.559810 test-rmse:3.727940 \n[32] train-rmse:1.547852 test-rmse:3.731702 \n[33] train-rmse:1.534589 test-rmse:3.729761 \n[34] train-rmse:1.520566 test-rmse:3.742681 \n[35] train-rmse:1.495155 test-rmse:3.732993 \n[36] train-rmse:1.467939 test-rmse:3.738329 \n[37] train-rmse:1.446343 test-rmse:3.713748 \n[38] train-rmse:1.435368 test-rmse:3.709469 \n[39] train-rmse:1.401356 test-rmse:3.710637 \n[40] train-rmse:1.390318 test-rmse:3.709461 \n[41] train-rmse:1.372635 test-rmse:3.708049 \n[42] train-rmse:1.367977 test-rmse:3.707429 \n[43] train-rmse:1.359531 test-rmse:3.711663 \n[44] train-rmse:1.335347 test-rmse:3.709101 \n[45] train-rmse:1.331750 test-rmse:3.712490 \n[46] train-rmse:1.313087 test-rmse:3.722981 \n[47] train-rmse:1.284392 test-rmse:3.712840 \n[48] train-rmse:1.257714 test-rmse:3.697482 \n[49] train-rmse:1.248218 test-rmse:3.700167 \n[50] train-rmse:1.243377 test-rmse:3.697914 \n[51] train-rmse:1.231956 test-rmse:3.695797 \n[52] train-rmse:1.219341 test-rmse:3.696277 \n[53] train-rmse:1.207413 test-rmse:3.691465 \n[54] train-rmse:1.197197 test-rmse:3.692108 \n[55] train-rmse:1.171748 test-rmse:3.683577 \n[56] train-rmse:1.156332 test-rmse:3.674458 \n[57] train-rmse:1.147686 test-rmse:3.686367 \n[58] train-rmse:1.143572 test-rmse:3.686375 \n[59] train-rmse:1.129780 test-rmse:3.679791 \n[60] train-rmse:1.111257 test-rmse:3.679022 \n[61] train-rmse:1.093541 test-rmse:3.699670 \n[62] train-rmse:1.083934 test-rmse:3.708187 \n[63] train-rmse:1.067109 test-rmse:3.712538 \n[64] train-rmse:1.053887 test-rmse:3.722480 \n[65] train-rmse:1.042127 test-rmse:3.720720 \n[66] train-rmse:1.031617 test-rmse:3.721224 \n[67] train-rmse:1.016274 test-rmse:3.699549 \n[68] train-rmse:1.008184 test-rmse:3.709522 \n[69] train-rmse:0.999220 test-rmse:3.708000 \n[70] train-rmse:0.985907 test-rmse:3.705192 \n<\/span><\/span><\/strong><\/pre>\n<p> <span style=\"color: #000000;\">From the output, we can see that the minimum test RMSE is achieved at 
<strong>56<\/strong> rounds. Beyond this point, the test RMSE begins to increase, indicating that <a href=\"https:\/\/statorials.org\/pt\/overfitting-de-aprendizado-de-maquina\/\" target=\"_blank\" rel=\"noopener noreferrer\">we are overfitting the training data<\/a>.<\/span><\/p>\n<p> <span style=\"color: #000000;\">Thus, we will define our final XGBoost model to use 56 rounds:<\/span><\/p>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\">#define final model\nfinal = xgboost(data = xgb_train, max.depth = 3, nrounds = 56, verbose = 0)<\/span><\/span><\/strong><\/pre>\n<p> <span style=\"color: #000000;\">Note: The <strong>verbose=0<\/strong> argument tells R not to display the training and testing error for each round.<\/span><\/p>\n<h3> <span style=\"color: #000000;\"><strong>Step 5: Use the Model to Make Predictions<\/strong><\/span><\/h3>\n<p> <span style=\"color: #000000;\">Finally, we can use the final boosted model to make predictions about the median house value of the Boston homes in the testing set.<\/span><\/p>\n<p> <span style=\"color: #000000;\">We will then calculate the following accuracy metrics for the model:<\/span><\/p>\n<ul>\n<li> <span style=\"color: #000000;\"><strong>MSE:<\/strong> mean squared error<\/span><\/li>\n<li> <span style=\"color: #000000;\"><strong>MAE:<\/strong> mean absolute error<\/span><\/li>\n<li> <span style=\"color: #000000;\"><strong>RMSE:<\/strong> root mean squared error<\/span><\/li>\n<\/ul>\n<pre style=\"background-color: #ececec; font-size: 15px;\"> <strong><span style=\"color: #993300;\"><span style=\"color: #000000;\">#use model to make predictions on test data\npred_y = predict(final, xgb_test)\n\nmean((test_y - pred_y)^2) #mse\ncaret::MAE(test_y, pred_y) #mae\ncaret::RMSE(test_y, pred_y) #rmse\n\n[1] 13.50164\n[1] 2.409426\n[1] 3.674457<\/span><\/span><\/strong><\/pre>\n<p> <span style=\"color: #000000;\">The root mean squared error is <strong>3.674457<\/strong>. This represents the average difference between the predictions made for the median house values and the actual house values observed in the testing set.<\/span><\/p>\n<p> <span style=\"color: #000000;\">If we wanted to, we could compare this RMSE to other models such as <a href=\"https:\/\/statorials.org\/pt\/regressao-linear-multipla-r\/\" target=\"_blank\" rel=\"noopener noreferrer\">multiple linear regression<\/a>, <a href=\"https:\/\/statorials.org\/pt\/regressao-de-crista-em-r\/\" target=\"_blank\" rel=\"noopener noreferrer\">ridge regression<\/a>, <a href=\"https:\/\/statorials.org\/pt\/regressao-de-componentes-principais-em-r\/\" target=\"_blank\" rel=\"noopener noreferrer\">principal components regression<\/a>, etc. to see which model produces the most accurate predictions.<\/span><\/p>\n<p> <span style=\"color: #000000;\">You can find the complete R code used in this example <a href=\"https:\/\/github.com\/Statorials\/R-Guides\/blob\/main\/xgboost.R\" target=\"_blank\" rel=\"noopener noreferrer\">here<\/a>.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Boosting is a machine learning technique that has been shown to produce models with high predictive accuracy. One of the most common ways to implement boosting in practice is to use XGBoost, short for &#8220;extreme gradient boosting&#8221;. 
This tutorial provides a step-by-step example of how to use XGBoost to fit a boosted model in [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11],"tags":[],"class_list":["post-1234","post","type-post","status-publish","format-standard","hentry","category-guia"],"yoast_head_json":{"title":"XGBoost in R: A Step-by-Step Example","description":"This tutorial provides a step-by-step example of how to run XGBoost in R, a popular machine learning technique.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/statorials.org\/pt\/xgboost-em-r\/","og_locale":"pt_PT","og_type":"article","og_title":"XGBoost in R: A Step-by-Step Example","og_description":"This tutorial provides a step-by-step example of how to run XGBoost in R, a popular machine learning technique.","og_url":"https:\/\/statorials.org\/pt\/xgboost-em-r\/","og_site_name":"Statorials","article_published_time":"2023-07-27T04:52:33+00:00","author":"Dr. Benjamin Anderson","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Dr. Benjamin Anderson","Estimated reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/statorials.org\/pt\/xgboost-em-r\/","url":"https:\/\/statorials.org\/pt\/xgboost-em-r\/","name":"XGBoost in R: A Step-by-Step Example","isPartOf":{"@id":"https:\/\/statorials.org\/pt\/#website"},"datePublished":"2023-07-27T04:52:33+00:00","dateModified":"2023-07-27T04:52:33+00:00","author":{"@id":"https:\/\/statorials.org\/pt\/#\/schema\/person\/e08f98e8db95e0aa9c310e1b27c9c666"},"description":"This tutorial provides a step-by-step example of how to run XGBoost in R, a popular machine learning technique.","breadcrumb":{"@id":"https:\/\/statorials.org\/pt\/xgboost-em-r\/#breadcrumb"},"inLanguage":"pt-PT","potentialAction":[{"@type":"ReadAction","target":["https:\/\/statorials.org\/pt\/xgboost-em-r\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/statorials.org\/pt\/xgboost-em-r\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/statorials.org\/pt\/"},{"@type":"ListItem","position":2,"name":"XGBoost in R: A Step-by-Step Example"}]},{"@type":"WebSite","@id":"https:\/\/statorials.org\/pt\/#website","url":"https:\/\/statorials.org\/pt\/","name":"Statorials","description":"Your guide to statistical literacy!","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/statorials.org\/pt\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"pt-PT"},{"@type":"Person","@id":"https:\/\/statorials.org\/pt\/#\/schema\/person\/e08f98e8db95e0aa9c310e1b27c9c666","name":"Dr. Benjamin Anderson","image":{"@type":"ImageObject","inLanguage":"pt-PT","@id":"https:\/\/statorials.org\/pt\/#\/schema\/person\/image\/","url":"https:\/\/statorials.org\/pt\/wp-content\/uploads\/2023\/10\/Dr.-Benjamin-Anderson-96x96.jpg","contentUrl":"https:\/\/statorials.org\/pt\/wp-content\/uploads\/2023\/10\/Dr.-Benjamin-Anderson-96x96.jpg","caption":"Dr. Benjamin Anderson"},"description":"Hi, I'm Benjamin, a retired statistics professor turned dedicated teacher at Statorials. With extensive experience and expertise in statistics, I am committed to sharing my knowledge to empower learners through Statorials. Learn more","sameAs":["https:\/\/statorials.org\/pt"]}]}},"yoast_meta":{"yoast_wpseo_title":"","yoast_wpseo_metadesc":"","yoast_wpseo_canonical":""},"_links":{"self":[{"href":"https:\/\/statorials.org\/pt\/wp-json\/wp\/v2\/posts\/1234","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/statorials.org\/pt\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/statorials.org\/pt\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/statorials.org\/pt\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/statorials.org\/pt\/wp-json\/wp\/v2\/comments?post=1234"}],"version-history":[{"count":0,"href":"https:\/\/statorials.org\/pt\/wp-json\/wp\/v2\/posts\/1234\/revisions"}],"wp:attachment":[{"href":"https:\/\/statorials.org\/pt\/wp-json\/wp\/v2\/media?parent=1234"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/statorials.org\/pt\/wp-json\/wp\/v2\/categories?post=1234"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/statorials.org\/pt\/wp-json\/wp\/v2\/tags?post=1234"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}