Bagging Vs. Boosting

By Moloy De posted Thu September 17, 2020 10:35 PM

An ensemble is a Machine Learning concept in which multiple models are trained, typically using the same learning algorithm. Ensembles belong to a larger group of methods, called multi-classifiers, in which a set of hundreds or thousands of learners with a common objective are fused together to solve the problem. Bagging and Boosting are both ensemble techniques in which a set of weak learners is combined to create a strong learner that obtains better performance than any single one.

The main sources of error in learning are noise, bias and variance. Ensembling helps to minimize these factors. Ensemble methods are designed to improve the stability and accuracy of Machine Learning algorithms. Combining multiple classifiers decreases variance, especially in the case of unstable classifiers, and may produce a more reliable classification than a single classifier.

Bagging and Boosting obtain N learners by generating additional data in the training stage: N new training data sets are produced by random sampling with replacement from the original set. Because the sampling is done with replacement, some observations may be repeated in each new training data set.
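
As a rough NumPy illustration (toy data and names of my own choosing), sampling with replacement draws each new training set like this, so some rows show up more than once while others are left out:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(20).reshape(10, 2)                 # toy feature matrix: 10 observations
y = np.array([0, 1] * 5)                         # toy binary labels

n_learners = 3                                   # N learners, one bootstrap sample each
for i in range(n_learners):
    idx = rng.integers(0, len(X), size=len(X))   # sample row indices with replacement
    X_boot, y_boot = X[idx], y[idx]              # duplicates and omissions are expected
    print(f"learner {i}: drew rows {sorted(idx.tolist())}")
```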

In the case of Bagging, every element has the same probability of appearing in a new data set. For Boosting, however, the observations are weighted, so some of them will take part in the new sets more often. While the training stage is parallel for Bagging, where each model is built independently, Boosting builds each new learner sequentially. In Boosting algorithms, each classifier is trained on data that takes the previous classifiers' success into account. After each training step the weights are redistributed: misclassified observations have their weights increased to emphasize the most difficult cases, so that subsequent learners focus on them during training.
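
A minimal, AdaBoost-flavoured sketch of one such boosting round (the helper name `boosting_step` and the decision-stump weak learner are my own choices, not any particular library's internals):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boosting_step(X, y, weights):
    """Fit one weak learner on the weighted data, then increase the
    weights of the observations it misclassified."""
    stump = DecisionTreeClassifier(max_depth=1)            # weak learner
    stump.fit(X, y, sample_weight=weights)
    miss = stump.predict(X) != y                           # misclassified mask
    err = weights[miss].sum() / weights.sum()              # weighted error rate
    alpha = 0.5 * np.log((1 - err) / (err + 1e-10))        # learner's vote weight
    new_w = weights * np.exp(alpha * miss)                 # emphasize hard cases
    return stump, alpha, err, new_w / new_w.sum()          # renormalized weights
```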

To predict the class of new data we only need to apply the N learners to the new observations. In Bagging the result is obtained by averaging the responses of the N learners, or by majority vote. Boosting, however, assigns a second set of weights, this time to the N classifiers, in order to take a weighted average of their estimates. A learner with a good classification result on the training data is assigned a higher weight than a poor one, so when evaluating a new learner, Boosting also needs to keep track of each learner's errors.
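
A sketch of those two aggregation rules (function names are mine; `learners` is a list of fitted classifiers and `alphas` the per-learner weights produced during boosting):

```python
import numpy as np

def bagging_predict(learners, X):
    """Plain majority vote over the N learners
    (assumes non-negative integer class labels)."""
    votes = np.array([clf.predict(X) for clf in learners])   # shape (N, n_samples)
    return np.apply_along_axis(
        lambda col: np.bincount(col).argmax(), axis=0, arr=votes)

def boosting_predict(learners, alphas, X):
    """Weighted vote: learners with a better training record count more
    (assumes binary labels encoded as -1 / +1)."""
    scores = sum(a * clf.predict(X) for clf, a in zip(learners, alphas))
    return np.sign(scores)
```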

Some Boosting techniques include an extra condition to keep or discard a single learner. For example, in AdaBoost, the most renowned of them, an error of less than 50% is required to keep the model; otherwise, the iteration is repeated until a learner better than a random guess is obtained.
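
Continuing the `boosting_step` sketch from above, that keep-or-discard rule might look like this (the retry policy is just one plausible reading of the description):

```python
# assumes X, y and boosting_step from the sketch above
import numpy as np

learners, alphas = [], []
weights = np.full(len(y), 1.0 / len(y))           # start from uniform weights
for _ in range(50):                                # cap the number of attempts
    stump, alpha, err, weights = boosting_step(X, y, weights)
    if err >= 0.5:                                 # no better than a random guess
        continue                                   # discard this learner and retry
    learners.append(stump)
    alphas.append(alpha)
    if len(learners) == 10:                        # stop once N = 10 learners are kept
        break
```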

There is no outright winner between Bagging and Boosting; it depends on the data, the simulation and the circumstances. Both decrease the variance of a single estimate because they combine several estimates from different models, so the result may be a model with higher stability.

 

If the problem is that the single model gets very low performance, Bagging will rarely improve the bias. Boosting, however, could generate a combined model with lower errors, as it optimizes the advantages and reduces the pitfalls of the single model.

By contrast, if the difficulty of the single model is over-fitting, then Bagging is the better option. Boosting, for its part, does not help to avoid over-fitting; in fact, this technique faces that problem itself. For this reason, Bagging is effective more often than Boosting.
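
Both approaches are available off the shelf; a minimal scikit-learn comparison (illustrative, untuned settings on synthetic data) might look like this:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging of deep trees: low-bias, high-variance learners averaged to cut variance
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

# Boosting of decision stumps: high-bias, low-variance learners combined to cut bias
boosting = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                              n_estimators=100, random_state=0)

for name, model in [("Bagging", bagging), ("Boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```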

Similarities

  1. Both are ensemble methods to get N learners from 1 learner.
  2. Both generate several training data sets by random sampling.
  3. Both make the final decision by averaging the N learners' responses or by taking a majority vote among them.
  4. Both are good at reducing variance and provide higher stability.

 

Differences

  1. While Bagging builds its learners independently, Boosting tries to add new models that do well where previous models failed.
  2. Boosting determines weights for the data to tip the scales in favor of the most difficult cases.
  3. Bagging uses an equally weighted average, whereas Boosting uses a weighted average that gives more weight to learners with better performance on the training data.
  4. Boosting tries to reduce bias. On the other hand, Bagging may solve the over-fitting problem, while Boosting can increase it.

REFERENCE: What is the difference between Bagging and Boosting? 

QUESTION I: Is Random Forest a Bagging or Boosting?

QUESTION II: Could Neural Network be used as a weak learner in Bagging or Boosting?


#GlobalAIandDataScience
#GlobalDataScience