Probability estimates generated by boosting ensembles are poorly calibrated because of the margin-maximizing nature of the algorithm, so the outputs of the ensemble need to be properly calibrated before they can be used as probability estimates. In this work, we demonstrate that online boosting is likewise prone to producing distorted probability estimates. In batch learning, calibration is achieved by reserving part of the training data for training the calibrator function. In the online setting, a decision needs to be made on each round: should the new example(s) be used to update the parameters of the ensemble or those of the calibrator? We resolve this decision with the aid of bandit optimization algorithms. We demonstrate superior performance over uncalibrated and naively calibrated online boosting ensembles in terms of probability estimation. Our proposed mechanism can be easily adapted to other tasks.
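To make the round-by-round decision concrete, here is a minimal sketch of the two ingredients the abstract describes: a Platt-style sigmoid calibrator trained by online gradient descent, and a two-armed epsilon-greedy bandit that routes each incoming example either to the ensemble update (arm 0) or to the calibrator update (arm 1), rewarding the arm with lower log-loss. All names (`SigmoidCalibrator`, `EpsilonGreedyRouter`), the epsilon-greedy choice of bandit algorithm, and the stubbed ensemble scores are illustrative assumptions, not the paper's actual method.

```python
import math
import random

class SigmoidCalibrator:
    """Platt-style calibrator: maps a raw ensemble score s to
    sigma(a*s + b), fit online by gradient descent on log-loss.
    (Illustrative assumption, not necessarily the paper's calibrator.)"""
    def __init__(self, lr=0.1):
        self.a, self.b, self.lr = 1.0, 0.0, lr

    def predict(self, score):
        return 1.0 / (1.0 + math.exp(-(self.a * score + self.b)))

    def update(self, score, label):
        p = self.predict(score)
        # gradient of log-loss w.r.t. a and b
        self.a -= self.lr * (p - label) * score
        self.b -= self.lr * (p - label)

class EpsilonGreedyRouter:
    """Two-armed bandit: arm 0 = spend the example on the ensemble,
    arm 1 = spend it on the calibrator. Tracks a running mean reward
    per arm and explores with probability eps."""
    def __init__(self, eps=0.1):
        self.eps = eps
        self.counts = [0, 0]
        self.values = [0.0, 0.0]  # running mean reward per arm

    def choose(self):
        if random.random() < self.eps or 0 in self.counts:
            return random.randrange(2)  # explore / try untested arm
        return max(range(2), key=lambda a: self.values[a])

    def reward(self, arm, r):
        self.counts[arm] += 1
        self.values[arm] += (r - self.values[arm]) / self.counts[arm]

def log_loss(p, y, eps=1e-12):
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

random.seed(0)
cal = SigmoidCalibrator()
router = EpsilonGreedyRouter()

# Toy stream: raw margins are overconfident relative to the true
# label noise, which is exactly the distortion calibration corrects.
for _ in range(1000):
    y = int(random.random() < 0.7)
    score = (5.0 if y else -5.0) + random.gauss(0.0, 2.0)
    p = cal.predict(score)
    r = -log_loss(p, y)          # reward = negative log-loss of prediction
    arm = router.choose()
    if arm == 1:
        cal.update(score, y)     # example spent on calibration
    # arm == 0 would update the boosting ensemble in a real system
    # (stubbed out here, since the ensemble itself is not sketched)
    router.reward(arm, r)
```

In a full implementation the ensemble update on arm 0 would also be performed, and the reward signal could be any proper scoring rule; negative log-loss is just one natural choice for probability estimation.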