Kaggle|Courses|XGBoost [To be completed]


By 十二支箭 | Published 2020-04-26 21:13

    In this tutorial, you will learn how to build and optimize models with gradient boosting. This method dominates many Kaggle competitions and achieves state-of-the-art results on a variety of datasets.

    Introduction

    For much of this course, you have made predictions with the random forest method, which achieves better performance than a single decision tree simply by averaging the predictions of many decision trees.

    We refer to the random forest method as an "ensemble method". By definition, ensemble methods combine the predictions of several models (e.g., several trees, in the case of random forests).

    Next, we'll learn about another ensemble method called gradient boosting.

    Gradient Boosting

    Gradient boosting is a method that goes through cycles to iteratively add models into an ensemble.

    It begins by initializing the ensemble with a single model, whose predictions can be pretty naive. (Even if its predictions are wildly inaccurate, subsequent additions to the ensemble will address those errors.)

    Then, we start the cycle:

    • First, we use the current ensemble to generate predictions for each observation in the dataset. To make a prediction, we add the predictions from all models in the ensemble.
    • These predictions are used to calculate a loss function (like mean squared error, for instance).
    • Then, we use the loss function to fit a new model that will be added to the ensemble. Specifically, we determine model parameters so that adding this new model to the ensemble will reduce the loss. (Side note: The "gradient" in "gradient boosting" refers to the fact that we'll use gradient descent on the loss function to determine the parameters in this new model.)
    • Finally, we add the new model to the ensemble, and ...
    • ... repeat! (A minimal code sketch of this cycle appears below.)
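    Concretely, here is a minimal sketch of this loop for squared-error loss, where the negative gradient works out to be the residual. It uses plain scikit-learn decision trees; the function names, tree depth, and learning rate are assumptions for illustration, not XGBoost's actual implementation.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def fit_gradient_boosting(X, y, n_rounds=100, learning_rate=0.1):
        y = np.asarray(y, dtype=float)
        # Initialize the ensemble with a single naive model: predict the target mean
        base_prediction = float(np.mean(y))
        current_predictions = np.full(len(y), base_prediction)
        trees = []

        for _ in range(n_rounds):
            # For squared-error loss, the negative gradient is simply the residual
            residuals = y - current_predictions

            # Fit a new (small) tree to the residuals -- the "new model" in the cycle
            tree = DecisionTreeRegressor(max_depth=3)
            tree.fit(X, residuals)
            trees.append(tree)

            # Add the new model's (scaled) contribution to the ensemble's predictions
            current_predictions += learning_rate * tree.predict(X)

        return base_prediction, trees

    def predict_gradient_boosting(X, base_prediction, trees, learning_rate=0.1):
        # Ensemble prediction = initial guess + sum of every tree's scaled contribution
        return base_prediction + learning_rate * sum(tree.predict(X) for tree in trees)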

    Example

    We begin by loading the training and validation data in X_train, X_valid, y_train, and y_valid.

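    The code that produces these four variables is hidden in the original notebook. A minimal sketch of what it might look like is given below; the file name, the chosen feature columns, and the use of train_test_split are assumptions for illustration.

    import pandas as pd
    from sklearn.model_selection import train_test_split

    # Read the data (hypothetical file path)
    data = pd.read_csv('melb_data.csv')

    # Select a subset of predictors and the target (hypothetical column choices)
    cols_to_use = ['Rooms', 'Distance', 'Landsize', 'BuildingArea', 'YearBuilt']
    X = data[cols_to_use]
    y = data.Price

    # Separate data into training and validation sets
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)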

    In this example, you'll work with the XGBoost library. XGBoost stands for extreme gradient boosting, which is an implementation of gradient boosting with several additional features focused on performance and speed. (Scikit-learn has another version of gradient boosting, but XGBoost has some technical advantages.)

    In the next code cell, we import the scikit-learn API for XGBoost (xgboost.XGBRegressor). This allows us to build and fit a model just as we would in scikit-learn. As you'll see in the output, the XGBRegressor class has many tunable parameters -- you'll learn about those soon!

    from xgboost import XGBRegressor

    my_model = XGBRegressor()
    my_model.fit(X_train, y_train)
    
    

    We also make predictions and evaluate the model.
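    A sketch of that step, assuming mean absolute error as the evaluation metric:

    from sklearn.metrics import mean_absolute_error

    # Generate predictions for the validation set and score them
    predictions = my_model.predict(X_valid)
    print("Mean Absolute Error: " + str(mean_absolute_error(y_valid, predictions)))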
