DidacticBoost: A Simple Implementation and Demonstration of Gradient Boosting

A basic, clear implementation of tree-based gradient boosting designed to illustrate the core operation of boosting models. Tuning parameters (such as stochastic subsampling, a modified learning rate, or regularization) are not implemented; the only adjustable parameter is the number of training rounds. If you are looking for a high-performance boosting implementation with tuning parameters, consider the 'xgboost' package.
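
To give a flavor of what the package demonstrates, the core boosting loop for a squared-error regression problem can be sketched in a few lines of R using rpart, the package's only tree dependency. The sketch below is illustrative only: the function names (boost_sketch, predict.boost_sketch) and the fixed squared-error loss are assumptions for this example and are not the package's own API.

    ## Minimal sketch of a boosting loop: each round fits a tree to the
    ## current residuals, and predictions are the sum over all trees.
    ## (Illustrative only; not the DidacticBoost implementation.)
    library(rpart)

    boost_sketch <- function(formula, data, rounds = 10) {
      response <- all.vars(formula)[1]
      residual <- data[[response]]        # start from the raw response
      trees <- vector("list", rounds)
      for (i in seq_len(rounds)) {
        data[[response]] <- residual      # fit the next tree to the residuals
        trees[[i]] <- rpart(formula, data = data)
        residual <- residual - predict(trees[[i]], data)
      }
      structure(list(trees = trees), class = "boost_sketch")
    }

    predict.boost_sketch <- function(object, newdata, ...) {
      ## Boosted prediction: sum of every round's tree prediction
      Reduce(`+`, lapply(object$trees, predict, newdata = newdata))
    }

    ## Example usage on a built-in dataset
    fit <- boost_sketch(mpg ~ ., data = mtcars, rounds = 25)
    head(predict(fit, mtcars))

Because each tree is fit to the residuals of the ensemble so far, increasing the number of rounds steadily reduces training error, which is exactly the behavior the single `rounds` parameter of this package is meant to expose.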

Version: 0.1.1
Depends: R (≥ 3.1.1), rpart (≥ 4.1-10)
Suggests: testthat
Published: 2016-04-19
DOI: 10.32614/CRAN.package.DidacticBoost
Author: David Shaub [aut, cre]
Maintainer: David Shaub <davidshaub at gmx.com>
BugReports: https://github.com/dashaub/DidacticBoost/issues
License: GPL-3
URL: https://github.com/dashaub/DidacticBoost
NeedsCompilation: no
CRAN checks: DidacticBoost results

Documentation:

Reference manual: DidacticBoost.pdf

Downloads:

Package source: DidacticBoost_0.1.1.tar.gz
Windows binaries: r-devel: DidacticBoost_0.1.1.zip, r-release: DidacticBoost_0.1.1.zip, r-oldrel: DidacticBoost_0.1.1.zip
macOS binaries: r-release (arm64): DidacticBoost_0.1.1.tgz, r-oldrel (arm64): DidacticBoost_0.1.1.tgz, r-release (x86_64): DidacticBoost_0.1.1.tgz, r-oldrel (x86_64): DidacticBoost_0.1.1.tgz

Linking:

Please use the canonical form https://CRAN.R-project.org/package=DidacticBoost to link to this page.