Classic Papers in the Field

Author: 婉妃 | Published 2019-06-29 09:59

    GBDT

    Breiman, L. (June 1997). "Arcing The Edge" (PDF). Technical Report 486. Statistics Department, University of California, Berkeley.
    Friedman, J. H. (February 1999). "Greedy Function Approximation: A Gradient Boosting Machine" (PDF).
    Friedman, J. H. (March 1999). "Stochastic Gradient Boosting" (PDF).
    Mason, L.; Baxter, J.; Bartlett, P. L.; Frean, Marcus (1999). "Boosting Algorithms as Gradient Descent" (PDF). In S.A. Solla and T.K. Leen and K. Müller (ed.). Advances in Neural Information Processing Systems 12. MIT Press. pp. 512–518.
    Mason, L.; Baxter, J.; Bartlett, P. L.; Frean, Marcus (May 1999). "Boosting Algorithms as Gradient Descent in Function Space" (PDF).
    Cheng Li. "A Gentle Introduction to Gradient Boosting" (PDF).
    Hastie, T.; Tibshirani, R.; Friedman, J. H. (2009). "10. Boosting and Additive Trees". The Elements of Statistical Learning (2nd ed.). New York: Springer. pp. 337–384. ISBN 978-0-387-84857-0. Archived from the original on 2009-11-10.

    1. ^ Note: in the case of usual CART trees, the trees are fitted using least-squares loss, and so the coefficient $b_{jm}$ for the region $R_{jm}$ is equal to just the value of the output variable, averaged over all training instances in $R_{jm}$.
    2. ^ Note that this is different from bagging, which samples with replacement because it uses samples of the same size as the training set.
    3. ^ Ridgeway, Greg (2007). Generalized Boosted Models: A guide to the gbm package.
    4. ^ Learn Gradient Boosting Algorithm for better predictions (with codes in R)
    5. ^ Tianqi Chen. Introduction to Boosted Trees
    6. ^ Cossock, David and Zhang, Tong (2008). "Statistical Analysis of Bayes Optimal Subset Ranking" (archived 2010-08-07 at the Wayback Machine), page 14.
    7. ^ Yandex corporate blog entry about new ranking model "Snezhinsk" (in Russian)
      Friedman, Jerome (2003). "Multiple Additive Regression Trees with Application in Epidemiology". Statistics in Medicine. 22 (9): 1365–1381. doi:10.1002/sim.1501. PMID 12704603.
      Elith, Jane (2008). "A working guide to boosted regression trees". Journal of Animal Ecology. 77 (4): 802–813. doi:10.1111/j.1365-2656.2008.01390.x. PMID 18397250.
      Elith, Jane. "Boosted Regression Trees for ecological modeling" (PDF). CRAN. CRAN. Retrieved 31 August 20
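Note 1 above states that, under least-squares loss, the optimal coefficient $b_{jm}$ for a leaf region $R_{jm}$ is simply the mean of the current residuals over the training instances falling in that region. A minimal sketch of this idea in Python (hypothetical helper names; one-split "stump" regressors stand in for full CART trees):

```python
# Minimal gradient-boosting sketch with squared-error loss, illustrating
# note 1: each leaf's value b_jm is the mean residual of the training
# points in that region R_jm. Not a production implementation.
import numpy as np

def fit_stump(x, residuals):
    """Pick the threshold split minimizing squared error; each side
    predicts the mean residual of its region (the b_jm of note 1)."""
    best = None
    for t in np.unique(x)[:-1]:
        left, right = residuals[x <= t], residuals[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, b_left, b_right = best
    return lambda q: np.where(q <= t, b_left, b_right)

def gbdt_fit(x, y, n_rounds=50, lr=0.1):
    pred = np.full_like(y, y.mean(), dtype=float)  # F_0 = mean of y
    stumps = []
    for _ in range(n_rounds):
        residuals = y - pred  # negative gradient of squared-error loss
        h = fit_stump(x, residuals)
        stumps.append(h)
        pred = pred + lr * h(x)  # shrinkage step
    return y.mean(), stumps

def gbdt_predict(model, x, lr=0.1):
    f0, stumps = model
    return f0 + lr * sum(h(x) for h in stumps)

x = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([0., 0., 0., 1., 1., 1.])
model = gbdt_fit(x, y)
print(np.round(gbdt_predict(model, x), 2))  # ≈ [0. 0. 0. 1. 1. 1.]
```

Because the loss is squared error, no extra line search is needed for the leaf values: fitting the tree to the residuals already yields the optimal region-wise constants, which is exactly the simplification the note describes.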

Link: https://www.haomeiwen.com/subject/hauucctx.html