An iterative boosting-based ensemble for streaming data classification
| Published in: | Information Fusion, Volume 45, pp. 66–78 |
|---|---|
| Main Authors: | , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V., 01.01.2019 |
| ISSN: | 1566-2535, 1872-6305 |
| Summary: | •The IBS ensemble is based on iteratively applying boosting to learn from the data stream.•IBS adjusts to new concepts by gathering knowledge according to its current accuracy.•Adding more base learners when accuracy is low helps IBS recover quickly from drifts.•IBS combines features of batch and online algorithms, such as fast learning and flexibility.•Results show IBS is an effective, low-cost approach to classification in data streams.
Among the many issues related to data stream applications, those involved in predictive tasks such as classification and regression play a significant role in Machine Learning (ML). The so-called ensemble-based approaches have characteristics that can be appealing to data stream applications, such as easy updating and high flexibility. In spite of that, some current approaches update the ensemble in ways that are unsuitable for continuous stream processing, such as growing it indefinitely or deleting all its base learners when trying to overcome a concept drift. Such inadequate actions conflict with two inherent characteristics of data streams, namely their potentially infinite length and their need for prompt responses. In this paper, a new ensemble-based algorithm, suitable for classification tasks, is proposed. It relies on applying boosting to new batches of data, maintaining the ensemble by adding a number of base learners that is established as a function of the current ensemble accuracy rate. This updating mechanism enhances the model's flexibility, allowing the ensemble to gather knowledge quickly and overcome the high error rates caused by concept drift, while maintaining satisfactory results on stable concepts by slowing down the updating rate. Results comparing the proposed ensemble-based algorithm against eight other ensembles from the literature show that the proposed algorithm is very competitive for data stream classification. |
| DOI: | 10.1016/j.inffus.2018.01.003 |
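The updating mechanism described in the summary — adding more base learners per batch when the ensemble's current accuracy is low, and fewer when it is high — can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the class name `IBSLike`, the decision-stump base learner, and the parameters `capacity` and `max_add` are all invented for illustration, and the reweighting scheme is a simplified stand-in for a full boosting procedure.

```python
class Stump:
    """Tiny base learner: thresholds a single feature (illustrative stand-in)."""
    def fit(self, X, y, w):
        best = None
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for sign in (1, -1):
                    err = sum(wi for x, yi, wi in zip(X, y, w)
                              if (1 if sign * (x[f] - t) > 0 else 0) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, sign)
        _, self.f, self.t, self.sign = best
        return self

    def predict(self, x):
        return 1 if self.sign * (x[self.f] - self.t) > 0 else 0


class IBSLike:
    """Sketch of an accuracy-driven boosting ensemble for batched streams."""
    def __init__(self, capacity=10, max_add=4):
        self.learners, self.capacity, self.max_add = [], capacity, max_add

    def predict(self, x):
        if not self.learners:
            return 0
        votes = sum(h.predict(x) for h in self.learners)
        return 1 if 2 * votes >= len(self.learners) else 0

    def partial_fit(self, X, y):
        # Current accuracy on the new batch decides how many learners to add:
        # low accuracy (e.g. after a drift) -> add many, fast recovery;
        # high accuracy (stable concept) -> add few, slow updating.
        acc = sum(self.predict(x) == yi for x, yi in zip(X, y)) / len(X)
        k = max(1, round((1 - acc) * self.max_add))
        w = [1.0 / len(X)] * len(X)
        for _ in range(k):
            h = Stump().fit(X, y, w)
            # Boosting-style reweighting: emphasize misclassified examples.
            w = [wi * (2.0 if h.predict(x) != yi else 0.5)
                 for x, yi, wi in zip(X, y, w)]
            s = sum(w)
            w = [wi / s for wi in w]
            self.learners.append(h)
        # Drop the oldest learners to bound memory on an unbounded stream.
        del self.learners[:-self.capacity]
        return acc
```

On a first batch the empty ensemble scores poorly and several learners are added at once; on a later batch of the same concept the accuracy is high, so only one learner is added, mimicking the slow-down on stable concepts described in the abstract.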