Boosting is a machine learning technique used to reduce error in predictive data analysis. It is an ensemble meta-algorithm that converts a collection of weak learners into a strong one: weak classifiers are learned iteratively with respect to a distribution over the training data and added to a final strong classifier. After each weak learner is added, the data weights are readjusted, a step known as "re-weighting": misclassified examples gain weight, while correctly classified examples lose weight, so that subsequent weak learners focus on the hardest cases. Boosting algorithms can be based on convex or non-convex optimization. Each weak learner is also assigned a weight based on its own performance, so a model that produces accurate predictions has greater influence over the final decision.

Boosting offers several practical benefits: it is easy to implement, its algorithms are easy to understand and interpret because they learn from their mistakes, and many implementations include built-in routines for handling missing data. There are several types of boosting algorithms; among the most widely used are AdaBoost, gradient boosting, and XGBoost.
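The re-weighting loop described above can be made concrete with a short sketch of discrete AdaBoost. The example below is a minimal illustration, assuming binary labels in {-1, +1}, decision stumps from scikit-learn as the weak learners, and a synthetic dataset from `make_classification`; it is not a production implementation.

```python
# Minimal sketch of AdaBoost-style re-weighting (discrete AdaBoost),
# assuming binary labels mapped to {-1, +1}.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
y = 2 * y - 1  # map labels {0, 1} -> {-1, +1}

n_rounds = 50
weights = np.full(len(X), 1 / len(X))  # start from a uniform distribution
stumps, alphas = [], []

for _ in range(n_rounds):
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=weights)  # weak learner sees current weights
    pred = stump.predict(X)
    err = np.clip(weights[pred != y].sum(), 1e-10, None)  # weighted error rate
    if err >= 0.5:  # the weak learner must do better than chance
        break
    alpha = 0.5 * np.log((1 - err) / err)  # influence grows as error shrinks
    # Re-weighting: misclassified samples gain weight, correct ones lose weight.
    weights *= np.exp(-alpha * y * pred)
    weights /= weights.sum()  # renormalize to a distribution
    stumps.append(stump)
    alphas.append(alpha)

# The strong classifier is the sign of the alpha-weighted vote of all stumps.
def strong_predict(X):
    votes = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(votes)

print("training accuracy:", (strong_predict(X) == y).mean())
```

Note how the two kinds of weights in the article map onto the code: the per-sample `weights` array implements re-weighting, while each learner's `alpha` is the performance-based weight that determines its influence on the final decision.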
It is important to note that "boosting" can also refer to a different concept in video games, where it describes a player artificially speeding up the process of gaining in-game rewards or experience.