Deep Learning with Python: A Hands-On Introduction

Discover the practical aspects of implementing deep-learning solutions using the rich Python ecosystem. This book bridges the gap between the academic state-of-the-art and the industry state-of-the-practice by introducing you to deep learning frameworks such as Keras, Theano, and Caffe. The practica...


Bibliographic Details
Main Author: Ketkar, Nikhil
Format: eBook / Book
Language: English
Published: Berkeley, CA: Apress L.P., 2017
Edition: 1
ISBN: 1484227654, 9781484227657, 1484227662, 9781484227664
Online Access: Get full text
Table of Contents:
  • Intro -- Contents at a Glance -- Contents -- About the Author -- About the Technical Reviewer -- Acknowledgments
  • Chapter 1: Introduction to Deep Learning -- Historical Context -- Advances in Related Fields -- Prerequisites -- Overview of Subsequent Chapters -- Installing the Required Libraries
  • Chapter 2: Machine Learning Fundamentals -- Intuition -- Binary Classification -- Regression -- Generalization -- Regularization -- Summary
  • Chapter 3: Feed Forward Neural Networks -- Unit -- Overall Structure of a Neural Network -- Expressing the Neural Network in Vector Form -- Evaluating the Output of the Neural Network -- Training the Neural Network -- Deriving Cost Functions Using Maximum Likelihood -- Binary Cross Entropy -- Cross Entropy -- Squared Error -- Summary of Loss Functions -- Types of Units/Activation Functions/Layers -- Linear Unit -- Sigmoid Unit -- Softmax Layer -- Rectified Linear Unit (ReLU) -- Hyperbolic Tangent -- Neural Network Hands-On with AutoGrad -- Summary
  • Chapter 4: Introduction to Theano -- What Is Theano -- Theano Hands-On -- Summary
  • Chapter 5: Convolutional Neural Networks -- Convolution Operation -- Pooling Operation -- Convolution-Detector-Pooling Building Block -- Convolution Variants -- Intuition Behind CNNs -- Summary
  • Chapter 6: Recurrent Neural Networks -- RNN Basics -- Training RNNs -- Bidirectional RNNs -- Gradient Explosion and Vanishing -- Gradient Clipping -- Long Short Term Memory -- Summary
  • Chapter 7: Introduction to Keras -- Summary
  • Chapter 8: Stochastic Gradient Descent -- Optimization Problems -- Method of Steepest Descent -- Batch, Stochastic (Single and Mini-batch) Descent -- Batch -- Stochastic Single Example -- Stochastic Mini-batch -- Batch vs. Stochastic -- Challenges with SGD -- Local Minima -- Saddle Points -- Selecting the Learning Rate -- Slow Progress in Narrow Valleys -- Algorithmic Variations on SGD -- Momentum -- Nesterov Accelerated Gradient (NAG) -- Annealing and Learning Rate Schedules -- Adagrad -- RMSProp -- Adadelta -- Adam -- Resilient Backpropagation -- Equilibrated SGD -- Tricks and Tips for Using SGD -- Preprocessing Input Data -- Choice of Activation Function -- Preprocessing Target Value -- Initializing Parameters -- Shuffling Data -- Batch Normalization -- Early Stopping -- Gradient Noise -- Parallel and Distributed SGD -- Hogwild -- Downpour -- Hands-On SGD with Downhill -- Summary
  • Chapter 9: Automatic Differentiation -- Numerical Differentiation -- Symbolic Differentiation -- Automatic Differentiation Fundamentals -- Forward/Tangent Linear Mode -- Reverse/Cotangent/Adjoint Linear Mode -- Implementation of Automatic Differentiation -- Source Code Transformation -- Operator Overloading -- Hands-On Automatic Differentiation with Autograd -- Summary
  • Chapter 10: Introduction to GPUs -- Summary
  • Chapter 11: Introduction to TensorFlow -- Summary
  • Chapter 12: Introduction to PyTorch -- Summary
  • Chapter 13: Regularization Techniques -- Model Capacity, Overfitting, and Underfitting -- Regularizing the Model -- Early Stopping -- Norm Penalties -- Dropout -- Summary
  • Chapter 14: Training Deep Learning Models -- Performance Metrics -- Data Procurement -- Splitting Data for Training/Validation/Test -- Establishing Achievable Limits on the Error Rate -- Establishing the Baseline with Standard Choices -- Building an Automated, End-to-End Pipeline -- Orchestration for Visibility -- Analysis of Overfitting and Underfitting -- Hyper-Parameter Tuning -- Summary
  • Index