Meta-learning : theory, algorithms and applications

Deep neural networks (DNNs), with their dense and complex algorithms, offer real possibilities for Artificial General Intelligence (AGI). Meta-learning with DNNs brings AGI much closer: artificial agents solving intelligent tasks that human beings can achieve, and even transcending what they can achieve.

Detailed bibliography
Main author: Zou, Lan
Format: eBook, Book
Language: English
Published: London: Academic Press, 2023
Elsevier Science & Technology
Edition: 1
ISBN: 0323899315, 9780323899314
Contents:
  • Intro -- Meta-Learning: Theory, Algorithms and Applications -- Copyright -- Dedication -- Contents -- Preface -- Acknowledgments -- Chapter 1: Meta-learning basics and background -- 1.1. Introduction -- 1.2. Meta-learning -- 1.2.1. Definitions -- 1.2.2. Evaluation -- 1.2.3. Datasets and benchmarks -- 1.3. Machine learning -- 1.3.1. Models -- 1.3.2. Limitations -- 1.3.3. Related concepts -- 1.3.4. Further Reading -- 1.4. Deep learning -- 1.4.1. Models -- 1.4.2. Limitations -- 1.4.3. Further readings -- 1.5. Transfer learning -- 1.5.1. Multitask learning -- 1.6. Few-shot learning -- 1.7. Probabilistic modeling -- 1.8. Bayesian inference -- References -- Part I: Theory & mechanisms -- Chapter 2: Model-based meta-learning approaches -- 2.1. Introduction -- 2.2. Memory-augmented neural networks -- 2.2.1. Background knowledge -- 2.2.2. Methodology -- Task setup -- Memory retrieval -- Least recently used access -- 2.2.3. Extended algorithm 1 -- 2.2.4. Extended algorithm 2 -- 2.3. Meta-networks -- 2.3.1. Background knowledge -- 2.3.2. Methodology -- Slow weights and fast weights -- Layer augmentation -- 2.3.3. Main loss functions and representation loss functions -- 2.4. Summary -- References -- Chapter 3: Metric-based meta-learning approaches -- 3.1. Introduction -- 3.2. Convolutional Siamese neural networks -- 3.2.1. Background knowledge -- 3.2.2. Methodology -- Combination of the twin Siamese networks -- Objective function -- Optimization -- 3.2.3. Extended algorithm 1 -- 3.3. Matching networks -- 3.3.1. Background knowledge -- 3.3.2. Methodology -- The attention kernel -- Full context embedding -- Episode-based training -- 3.3.3. Extended algorithm 1 -- 3.4. Prototypical networks -- 3.4.1. Background knowledge -- 3.4.2. Methodology -- Bregman divergence requirement -- 3.4.3. Extended algorithm 1 -- 3.4.4. Extended algorithm 2
  • 3.4.5. Extended algorithm 3 -- 3.5. Relation network -- 3.5.1. Background knowledge -- 3.5.2. Methodology -- C-Way one-shot -- C-Way K-shot -- C-Way zero-shot -- Objective function -- 3.6. Summary -- References -- Chapter 4: Optimization-based meta-learning approaches -- 4.1. Introduction -- 4.2. LSTM meta-learner -- 4.2.1. Background knowledge -- Covariate shift -- Batch normalization -- Long short-term memory -- Gradient-based optimization -- 4.2.2. Methodology -- Gradient independent assumption and initialization -- Meta-training and meta-testing batch normalization -- Parameter sharing -- 4.3. Model-agnostic meta-learning -- 4.3.1. Background knowledge -- Transfer learning -- Fine-tuning -- 4.3.2. Methodology -- Task adaptation -- 4.3.3. Illustration 1: Few-shot regression and few-shot classification -- 4.3.4. Illustration 2: Policy gradient reinforcement learning -- 4.3.5. Illustration 3: Meta-imitation learning -- 4.3.6. Related Algorithm 1: Meta-SGD -- 4.3.7. Related Algorithm 2: Feature reuse-The effectiveness of MAML -- 4.3.8. Related Algorithm 3: Adaptive hyperparameter generation for fast adaptation -- 4.4. Reptile -- 4.4.1. Background knowledge -- First-order model-agnostic meta-learning -- 4.4.2. Methodology -- 4.4.2.1. Serial version -- 4.4.2.2. Parallel or batch version -- The optimization assumption -- Analysis -- 4.4.3. Related Algorithm 1 -- 4.4.4. Related Algorithm 2 -- 4.4.5. Related Algorithm 3 -- 4.4.6. Related Algorithm 4 -- 4.5. Summary -- References -- Part II: Applications -- Chapter 5: Meta-learning for computer vision -- 5.1. Introduction -- 5.1.1. Limitations -- 5.2. Image classification -- 5.2.1. Introduction -- Development -- Approaches -- Benchmarks -- One-stage semisupervised learning -- One-stage unsupervised learning -- Multistage semisupervised learning
  • 5.2.2. Decision boundary sharpness and few-shot image classification -- 5.2.3. Semisupervised few-shot image classification with refined prototypical network -- 5.2.4. Few-shot unsupervised image classification -- 5.2.5. One-shot image deformation -- 5.2.6. Heterogeneous multitask learning in image classification -- 5.2.7. Few-shot classification with transductive inference -- 5.2.8. Closed-form base learners -- 5.2.9. Long-tailed image classification -- 5.2.10. Image classification via incremental learning without forgetting -- Comparison and contrast of iTAML and Reptile -- Lower bound of sample -- 5.2.11. Few-shot open set recognition -- 5.2.12. Deficiency of pretrained knowledge in few-shot learning -- 5.2.13. Bayesian strategy with deep kernel for regression and cross-domain image classification in a few-shot setting -- 5.2.14. Statistical diversity in personalized models of federated learning -- 5.2.15. Meta-learning deficiency in few-shot learning -- 5.3. Face recognition and face presentation attack -- 5.3.1. Introduction -- Facial recognition -- Face antispoofing -- 5.3.2. Person-specific talking head generation for unseen people and portrait painting in few-shot regimes -- 5.3.3. Face presentation attack and domain generalization -- 5.3.4. Anti-face-spoofing in few-shot and zero-shot scenarios -- 5.3.5. Generalized face recognition in the unseen domain -- 5.4. Object detection -- 5.4.1. Introduction -- Approaches -- Benchmarks -- 5.4.2. Long-tailed data object detection in few-shot scenarios -- 5.4.3. Object detection in few-shot scenarios -- 5.4.4. Unseen object detection and viewpoint estimation in low-data settings -- 5.5. Fine-grained image recognition -- 5.5.1. Introduction -- Approaches -- Benchmarks -- 5.5.2. Fine-grained visual categorization -- 5.5.3. One-shot fine-grained visual recognition
  • 5.5.4. Few-shot fine-grained image recognition -- 5.6. Image segmentation -- 5.6.1. Introduction -- Modern development -- 5.6.2. Multiobject few-shot semantic segmentation -- 5.6.3. Few-shot static object instance-level detection -- 5.7. Object tracking -- 5.7.1. Introduction -- 5.7.2. Offline object tracking -- 5.7.3. Real-time online object tracking -- 5.7.4. Real-time object tracking with channel pruning -- One-shot channel pruning -- 5.7.5. Object tracking via instance detection -- 5.8. Label noise -- 5.8.1. Introduction -- Approaches -- Benchmarks -- 5.8.2. Reweighting examples through online approximation -- 5.8.3. Hallucinated clean representation for noisy-labeled visual recognition -- 5.8.4. Data valuation using reinforcement learning -- 5.8.5. Teacher-student networks for image classification on noisy labels -- 5.8.6. Sample reweighting function construction -- 5.8.7. Loss correction approach -- 5.8.8. Meta-relabeling through data coefficients -- 5.8.9. Meta-label correction -- 5.9. Superresolution -- 5.9.1. Introduction -- Approaches -- Datasets and benchmarks -- 5.9.2. Meta-transfer learning for zero-shot superresolution -- 5.9.3. LR-HR image pair superresolution -- 5.9.4. No-reference image quality assessment -- 5.10. Multimodal learning -- 5.10.1. Introduction -- Deep learning approaches -- Benchmarks -- 5.10.2. Visual question answering system -- 5.11. Other emerging topics -- 5.11.1. Domain generalization -- 5.11.2. High-accuracy 3D appearance-based gaze estimation in few-shot regimes -- 5.11.3. Benchmark of cross-domain few-shot learning in vision tasks -- 5.11.4. Latent embedding optimization in low-dimensional space -- 5.11.5. Image captioning -- 5.11.6. Memorization issue -- 5.11.7. Meta-pseudo label -- 5.12. Summary -- References -- Chapter 6: Meta-learning for natural language processing -- 6.1. Introduction
  • 6.1.1. Limitations -- 6.2. Semantic parsing -- 6.2.1. Introduction -- Development -- Benchmarks -- 6.2.2. Natural language to structured query generation in few-shot learning -- Implementation -- 6.2.3. Semantic parsing in low-resource scenarios -- 6.2.4. Context-dependent semantic parser with few-shot learning -- 6.3. Machine translation -- 6.3.1. Introduction -- 6.3.2. Multidomain neural machine translation in low-resource scenarios -- 6.3.3. Multilingual neural machine translation in few-shot scenarios -- 6.4. Dialogue system -- 6.4.1. Introduction -- 6.4.2. Few-shot personalizing dialogue generation -- 6.4.3. Domain adaptation in a dialogue system -- 6.4.4. Natural language generation by few-shot learning concerning task-oriented dialogue systems -- 6.5. Knowledge graph -- 6.5.1. Introduction -- 6.5.2. Multihop knowledge graph reasoning in few-shot scenarios -- 6.5.3. Knowledge graphs link prediction in few-shot scenarios -- 6.5.4. Knowledge base complex question answering -- 6.5.5. Named-entity recognition in cross-lingual scenarios -- 6.6. Relation extraction -- 6.6.1. Introduction -- 6.6.2. Few-shot supervised relation classification -- 6.6.3. Relation extraction with few-shot and zero-shot learning -- 6.7. Sentiment analysis -- 6.7.1. Introduction -- Benchmark and dataset -- 6.7.2. Text emotion distribution learning with small samples -- 6.8. Emerging topics -- 6.8.1. Domain-specific word embedding under lifelong learning setting -- Background knowledge -- Methodology -- 6.8.2. Multilabel classification -- Background knowledge -- Methodology -- 6.8.3. Representation under a low-resource setting -- Background knowledge -- Methodology -- 6.8.4. Compositional generalization -- Background knowledge -- Methodology -- 6.8.5. Zero-shot transfer learning for query suggestion -- Background knowledge -- Methodology -- 6.9. Summary -- References
  • Chapter 7: Meta-reinforcement learning