A bias–variance trade-off governs individual differences in on-line learning in an unpredictable environment
Saved in:
| Published in: | Nature Human Behaviour, Vol. 2, No. 3, pp. 213–224 |
|---|---|
| Main authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | London: Nature Publishing Group UK, 01.03.2018 |
| Subjects: | |
| ISSN: | 2397-3374 |
| Online access: | Full text |
| Abstract: | Decisions often benefit from learned expectations about the sequential structure of the evidence. Here we show that individual differences in this learning process can reflect different implicit assumptions about sequence complexity, leading to performance trade-offs. For a task requiring decisions about dynamic evidence streams, human subjects with more flexible, history-dependent choices (low bias) had greater trial-to-trial choice variability (high variance). In contrast, subjects with more history-independent choices (high bias) were more predictable (low variance). We accounted for these behaviours using models in which assumed complexity was encoded by the size of the hypothesis space over the latent rate of change of the source of evidence. The most parsimonious model used an efficient sampling algorithm in which the range of sampled hypotheses represented an information bottleneck that gave rise to a bias–variance trade-off. This trade-off, which is well known in machine learning, may thus also have broad applicability to human decision-making.
Glaze et al. show that individual variability in learning from noisy evidence involves a bias–variance trade-off that is best explained by a model using a sampling algorithm that approximates optimal inference. |
|---|---|
| DOI: | 10.1038/s41562-018-0297-4 |
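
The abstract describes models in which assumed environmental complexity is encoded by the size of the hypothesis space over the latent rate of change (the hazard rate), with a restricted range of hypotheses acting as an information bottleneck. The following is a minimal, illustrative Python sketch of that general idea, not the authors' implementation: it assumes Gaussian observations, exact grid-based marginalization over candidate hazard rates in place of the paper's sampling algorithm, and arbitrary parameter values (`h_true`, `mu`, `sigma`, the grid ranges) chosen only for demonstration.

```python
"""Illustrative sketch (not the paper's code): a Bayesian observer infers a
binary latent state from noisy evidence while learning the hazard rate over a
discrete hypothesis grid. Restricting the grid acts like an information
bottleneck, producing a bias-variance trade-off in the hazard-rate estimate."""
import numpy as np

rng = np.random.default_rng(0)

def simulate_session(h_true=0.3, n_trials=500, mu=1.0, sigma=1.5):
    """Generate a two-state change-point sequence and noisy observations."""
    states = np.zeros(n_trials, dtype=int)
    for t in range(1, n_trials):
        states[t] = 1 - states[t - 1] if rng.random() < h_true else states[t - 1]
    obs = rng.normal(mu * (2 * states - 1), sigma)  # state 0 -> -mu, state 1 -> +mu
    return states, obs

def run_observer(obs, h_grid, mu=1.0, sigma=1.5):
    """Joint inference over the latent state and a grid of hazard-rate hypotheses."""
    p_state = np.full(len(h_grid), 0.5)  # P(state=1 | H_i, data so far)
    log_w = np.zeros(len(h_grid))        # log evidence for each hazard hypothesis
    for x in obs:
        # observation likelihoods under each state
        l1 = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
        l0 = np.exp(-0.5 * ((x + mu) / sigma) ** 2)
        # propagate the state belief through each candidate hazard rate
        prior1 = p_state * (1 - h_grid) + (1 - p_state) * h_grid
        marg = prior1 * l1 + (1 - prior1) * l0
        log_w += np.log(marg)
        p_state = prior1 * l1 / marg
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return float(np.sum(w * h_grid))     # posterior-mean hazard-rate estimate

def bias_variance(h_grid, h_true=0.3, n_sessions=200):
    """Bias and variance of the hazard-rate estimate across simulated sessions."""
    estimates = []
    for _ in range(n_sessions):
        _, obs = simulate_session(h_true=h_true)
        estimates.append(run_observer(obs, h_grid))
    estimates = np.array(estimates)
    return estimates.mean() - h_true, estimates.var()

narrow = np.array([0.02, 0.05, 0.08])   # restricted hypothesis space
wide = np.linspace(0.01, 0.99, 50)      # flexible hypothesis space
for name, grid in [("narrow", narrow), ("wide", wide)]:
    bias, var = bias_variance(grid)
    print(f"{name:6s} grid: bias = {bias:+.3f}, variance = {var:.5f}")
```

Under these toy assumptions, the narrow grid typically yields a large systematic bias in the hazard-rate estimate but little session-to-session variability, whereas the wide grid yields the opposite pattern, mirroring the trade-off described in the abstract.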