Classification with AdaBoost
By Moloy De, posted Thu November 19, 2020 06:52 PM
AdaBoost, short for Adaptive Boosting, is a machine learning meta-algorithm formulated by Yoav Freund and Robert Schapire, who won the 2003 Gödel Prize for their work. It can be used in conjunction with many other types of learning algorithms to improve performance. The output of the other learning algorithms ('weak learners') is combined into a weighted sum that represents the final output of the boosted classifier. AdaBoost is adaptive in the sense that subsequent weak learners are tweaked in favor of those instances misclassified by previous classifiers.
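To make the "weighted sum of weak learners" and the adaptive reweighting concrete, here is a minimal from-scratch sketch of discrete (binary) AdaBoost with decision stumps. The function names, the choice of scikit-learn stumps, and the -1/+1 label convention are illustrative assumptions, not part of the original post.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Minimal binary AdaBoost; labels y must be -1/+1."""
    y = np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                     # start with uniform sample weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)        # weak learner trained on current weights
        pred = stump.predict(X)
        err = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)   # weighted error
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this learner in the final sum
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified points
        w /= w.sum()                            # renormalize
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(X, learners, alphas):
    """Final classifier: sign of the weighted sum of weak-learner outputs."""
    scores = sum(a * clf.predict(X) for a, clf in zip(alphas, learners))
    return np.sign(scores)
```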
AdaBoost is sensitive to noisy data and outliers. In some problems it can be less susceptible to the overfitting problem than other learning algorithms. The individual learners can be weak, but as long as the performance of each one is slightly better than random guessing, the final model can be proven to converge to a strong learner.
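A standard quantitative version of this convergence claim, due to Freund and Schapire: if the t-th weak learner has weighted error ε_t = 1/2 - γ_t, then the training error of the boosted classifier is at most ∏_t 2√(ε_t(1 - ε_t)) ≤ exp(-2 Σ_t γ_t²), so even a small but consistent edge γ_t over random guessing drives the training error down exponentially fast in the number of rounds.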
Every learning algorithm tends to suit some problem types better than others, and typically has many different parameters and configurations to adjust before it achieves optimal performance on a dataset. AdaBoost (with decision trees as the weak learners) is often referred to as the best out-of-the-box classifier. When used with decision tree learning, information gathered at each stage of the AdaBoost algorithm about the relative 'hardness' of each training sample is fed into the tree growing algorithm such that later trees tend to focus on harder-to-classify examples.
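Below is a sketch of this "out-of-the-box" usage with scikit-learn's AdaBoostClassifier and depth-1 decision trees as weak learners. The synthetic dataset and the hyperparameters are illustrative assumptions; they are not the data or settings behind the figures that follow.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification problem (stand-in data, illustrative only)
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Depth-1 trees (stumps) are the classic weak learner for AdaBoost
ada = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # called base_estimator in older scikit-learn
    n_estimators=200,
    learning_rate=0.5,
    random_state=0,
)
ada.fit(X_train, y_train)
print("AdaBoost test accuracy:", ada.score(X_test, y_test))
```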
Following is the performance of SVM on a classification problem: [figure]
Following is the performance of a Decision Tree on the same classification problem: [figure]
Following is the performance of AdaBoost with Decision Trees as weak learners: [figure]
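The original figures are not reproduced here. The sketch below shows one way such a side-by-side comparison could be run; the dataset, model settings, and cross-validation protocol are assumptions standing in for the author's actual setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in dataset; the post's actual data is not available
X, y = make_classification(n_samples=1500, n_features=20, n_informative=10, random_state=42)

models = {
    "SVM (RBF kernel)": SVC(kernel="rbf", C=1.0),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "AdaBoost (stumps)": AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),  # base_estimator in older scikit-learn
        n_estimators=200,
        random_state=42,
    ),
}

# 5-fold cross-validated accuracy for each classifier on the same data
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:20s} mean CV accuracy = {scores.mean():.3f} (+/- {scores.std():.3f})")
```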
QUESTION I: Could AdaBoost be formulated as a Linear Regression problem?
QUESTION II: Could AdaBoost be formulated as a Gradient Descent problem?
REFERENCE: Wikipedia, "AdaBoost"
#GlobalAIandDataScience
#GlobalDataScience