Ensemble Learners
Supervised learning > classification
- generally: train weak learners, each on a subset of the data, to learn simple rules => combine the simple rules into a more complex rule
- how to choose subsets:
- bagging: choose each subset uniformly at random, with replacement => combine by averaging (sketched below)
- boosting: choose and focus on the "hardest" examples (those with the highest error) => combine by weighted mean
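A minimal sketch of the bagging recipe in Python (`fit_weak_learner` is a hypothetical helper returning a model with a `.predict` method; numpy arrays assumed):

```python
import numpy as np

def fit_bagged(X, y, fit_weak_learner, n_learners=10, seed=0):
    """Train each weak learner on a bootstrap sample: n examples
    drawn uniformly at random, with replacement."""
    rng = np.random.default_rng(seed)
    n = len(X)
    models = []
    for _ in range(n_learners):
        idx = rng.integers(0, n, size=n)   # uniform sampling with replacement
        models.append(fit_weak_learner(X[idx], y[idx]))
    return models

def predict_bagged(models, X):
    """Combine by averaging the individual predictions."""
    return np.mean([m.predict(X) for m in models], axis=0)
```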
Boosting pseudocode
Given
- binary classification training data \(\{(x_i, y_i)\}_{i=1}^{n}\) with labels \(y_i \in \{-1, +1\}\)
- initially \(D_1(i) = 1/n\), a uniform distribution over the examples
- at each time step thereafter, construct a new distribution:
\(\displaystyle D_{t+1}(i) = \frac{D_t(i) \cdot e^{-\alpha_t y_i h_t(x_i)}}{z_t}\) where \(\alpha_t = \frac{1}{2} \ln \frac{1 - \epsilon_t}{\epsilon_t}\), \(\epsilon_t\) is the weighted error of \(h_t\) under \(D_t\), and \(z_t\) is the normalization constant that keeps \(D_{t+1}\) a proper distribution
i.e. starting from the previous distribution, scale each example's weight up or down based on how well the current hypothesis does on it => correctly classified examples get less weight and the incorrectly classified ("harder") examples get more weight
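For a concrete sense of the scaling: if \(\epsilon_t = 0.25\), then \(\alpha_t = \frac{1}{2}\ln\frac{0.75}{0.25} = \frac{1}{2}\ln 3 \approx 0.55\), so each correctly classified example's weight is multiplied by \(e^{-0.55} \approx 0.58\) and each misclassified example's weight by \(e^{0.55} \approx 1.73\), before \(z_t\) renormalizes the weights to sum to 1.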
- \(H_{\text{final}}(x) = \text{sgn}\left(\sum_{t} \alpha_t h_t(x)\right)\)
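Putting the steps together, a minimal boosting sketch in Python (same hypothetical `fit_weak_learner` helper, here assumed to accept the distribution \(D\) as an example-weight argument):

```python
import numpy as np

def adaboost(X, y, fit_weak_learner, T=50):
    """Boosting sketch for labels y in {-1, +1}.
    fit_weak_learner(X, y, D) -> hypothesis h with h.predict(X) in {-1, +1}."""
    n = len(X)
    D = np.full(n, 1.0 / n)                 # D_1(i) = 1/n, uniform
    hypotheses, alphas = [], []
    for t in range(T):
        h = fit_weak_learner(X, y, D)
        pred = h.predict(X)
        eps = D[pred != y].sum()            # weighted error e_t under D_t
        if eps == 0 or eps >= 0.5:          # perfect, or no better than chance
            break
        alpha = 0.5 * np.log((1 - eps) / eps)
        D = D * np.exp(-alpha * y * pred)   # shrink correct, grow incorrect
        D = D / D.sum()                     # z_t: renormalize to a distribution
        hypotheses.append(h)
        alphas.append(alpha)

    def H_final(X_new):
        # weighted vote: sgn(sum_t alpha_t * h_t(x))
        votes = sum(a * h.predict(X_new) for a, h in zip(alphas, hypotheses))
        return np.sign(votes)

    return H_final
```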