Hands-On Deep Learning Algorithms with Python: Master Deep Learning Algorithms with Extensive Math by Implementing Them Using TensorFlow
This book introduces basic-to-advanced deep learning algorithms used in a production environment by AI researchers and principal data scientists; it explains algorithms intuitively, including the underlying math, and shows how to implement them using popular Python-based deep learning libraries such...
| Main Author: | |
|---|---|
| Format: | eBook |
| Language: | English |
| Published: | Birmingham : Packt Publishing, Limited, 2019 |
| Edition: | 1 |
| Subjects: | |
| ISBN: | 9781789344158, 1789344158 |
| Online Access: | Get full text |
Table of Contents:
- Cover -- Title Page -- Copyright and Credits -- Dedication -- About Packt -- Contributors -- Table of Contents -- Preface -- Section 1: Getting Started with Deep Learning -- Chapter 1: Introduction to Deep Learning -- What is deep learning? -- Biological and artificial neurons -- ANN and its layers -- Input layer -- Hidden layer -- Output layer -- Exploring activation functions -- The sigmoid function -- The tanh function -- The Rectified Linear Unit function -- The leaky ReLU function -- The Exponential linear unit function -- The Swish function -- The softmax function -- Forward propagation in ANN -- How does ANN learn? -- Debugging gradient descent with gradient checking -- Putting it all together -- Building a neural network from scratch -- Summary -- Questions -- Further reading -- Chapter 2: Getting to Know TensorFlow -- What is TensorFlow? -- Understanding computational graphs and sessions -- Sessions -- Variables, constants, and placeholders -- Variables -- Constants -- Placeholders and feed dictionaries -- Introducing TensorBoard -- Creating a name scope -- Handwritten digit classification using TensorFlow -- Importing the required libraries -- Loading the dataset -- Defining the number of neurons in each layer -- Defining placeholders -- Forward propagation -- Computing loss and backpropagation -- Computing accuracy -- Creating summary -- Training the model -- Visualizing graphs in TensorBoard -- Introducing eager execution -- Math operations in TensorFlow -- TensorFlow 2.0 and Keras -- Bonjour Keras -- Defining the model -- Defining a sequential model -- Defining a functional model -- Compiling the model -- Training the model -- Evaluating the model -- MNIST digit classification using TensorFlow 2.0 -- Should we use Keras or TensorFlow? -- Summary -- Questions -- Further reading -- Section 2: Fundamental Deep Learning Algorithms
- Chapter 3: Gradient Descent and Its Variants -- Demystifying gradient descent -- Performing gradient descent in regression -- Importing the libraries -- Preparing the dataset -- Defining the loss function -- Computing the gradients of the loss function -- Updating the model parameters -- Gradient descent versus stochastic gradient descent -- Momentum-based gradient descent -- Gradient descent with momentum -- Nesterov accelerated gradient -- Adaptive methods of gradient descent -- Setting a learning rate adaptively using Adagrad -- Doing away with the learning rate using Adadelta -- Overcoming the limitations of Adagrad using RMSProp -- Adaptive moment estimation -- Adamax - Adam based on infinity-norm -- Adaptive moment estimation with AMSGrad -- Nadam - adding NAG to ADAM -- Summary -- Questions -- Further reading -- Chapter 4: Generating Song Lyrics Using RNN -- Introducing RNNs -- The difference between feedforward networks and RNNs -- Forward propagation in RNNs -- Backpropagating through time -- Gradients with respect to the hidden to output weight, V -- Gradients with respect to hidden to hidden layer weights, W -- Gradients with respect to input to the hidden layer weight, U -- Vanishing and exploding gradients problem -- Gradient clipping -- Generating song lyrics using RNNs -- Implementing in TensorFlow -- Data preparation -- Defining the network parameters -- Defining placeholders -- Defining forward propagation -- Defining BPTT -- Start generating songs -- Different types of RNN architectures -- One-to-one architecture -- One-to-many architecture -- Many-to-one architecture -- Many-to-many architecture -- Summary -- Questions -- Further reading -- Chapter 5: Improvements to the RNN -- LSTM to the rescue -- Understanding the LSTM cell -- Forget gate -- Input gate -- Output gate -- Updating the cell state -- Updating hidden state
- Forward propagation in LSTM -- Backpropagation in LSTM -- Gradients with respect to gates -- Gradients with respect to weights -- Gradients with respect to V -- Gradients with respect to W -- Gradients with respect to U -- Predicting Bitcoin prices using LSTM model -- Data preparation -- Defining the parameters -- Define the LSTM cell -- Defining forward propagation -- Defining backpropagation -- Training the LSTM model -- Making predictions using the LSTM model -- Gated recurrent units -- Understanding the GRU cell -- Update gate -- Reset gate -- Updating hidden state -- Forward propagation in a GRU cell -- Backpropagation in a GRU cell -- Gradient with respect to gates -- Gradients with respect to weights -- Gradients with respect to V -- Gradients with respect to W -- Gradients with respect to U -- Implementing a GRU cell in TensorFlow -- Defining the weights -- Defining forward propagation -- Bidirectional RNN -- Going deep with deep RNN -- Language translation using the seq2seq model -- Encoder -- Decoder -- Attention is all we need -- Summary -- Questions -- Further reading -- Chapter 6: Demystifying Convolutional Networks -- What are CNNs? -- Convolutional layers -- Strides -- Padding -- Pooling layers -- Fully connected layers -- The architecture of CNNs -- The math behind CNNs -- Forward propagation -- Backward propagation -- Implementing a CNN in TensorFlow -- Defining helper functions -- Defining the convolutional network -- Computing loss -- Starting the training -- Visualizing extracted features -- CNN architectures -- LeNet architecture -- Understanding AlexNet -- Architecture of VGGNet -- GoogleNet -- Inception v1 -- Inception v2 and v3 -- Capsule networks -- Understanding Capsule networks -- Computing prediction vectors -- Coupling coefficients -- Squashing function -- Dynamic routing algorithm -- Architecture of the Capsule network
- The loss function -- Margin loss -- Reconstruction loss -- Building Capsule networks in TensorFlow -- Defining the squash function -- Defining a dynamic routing algorithm -- Computing primary and digit capsules -- Masking the digit capsule -- Defining the decoder -- Computing the accuracy of the model -- Calculating loss -- Margin loss -- Reconstruction loss -- Total loss -- Training the Capsule network -- Summary -- Questions -- Further reading -- Chapter 7: Learning Text Representations -- Understanding the word2vec model -- Understanding the CBOW model -- CBOW with a single context word -- Forward propagation -- Backward propagation -- CBOW with multiple context words -- Understanding skip-gram model -- Forward propagation in skip-gram -- Backward propagation -- Various training strategies -- Hierarchical softmax -- Negative sampling -- Subsampling frequent words -- Building the word2vec model using gensim -- Loading the dataset -- Preprocessing and preparing the dataset -- Building the model -- Evaluating the embeddings -- Visualizing word embeddings in TensorBoard -- Doc2vec -- Paragraph Vector - Distributed Memory model -- Paragraph Vector - Distributed Bag of Words model -- Finding similar documents using doc2vec -- Understanding skip-thoughts algorithm -- Quick-thoughts for sentence embeddings -- Summary -- Questions -- Further reading -- Section 3: Advanced Deep Learning Algorithms -- Chapter 8: Generating Images Using GANs -- Differences between discriminative and generative models -- Say hello to GANs! -- Breaking down the generator -- Breaking down the discriminator -- How do they learn though? -- Architecture of a GAN -- Demystifying the loss function -- Discriminator loss -- First term -- Second term -- Final term -- Generator loss -- Total loss -- Heuristic loss -- Generating images using GANs in TensorFlow -- Reading the dataset
- Defining the generator -- Defining the discriminator -- Defining the input placeholders -- Starting the GAN! -- Computing the loss function -- Discriminator loss -- Generator loss -- Optimizing the loss -- Starting the training -- Generating handwritten digits -- DCGAN - Adding convolution to a GAN -- Deconvolutional generator -- Convolutional discriminator -- Implementing DCGAN to generate CIFAR images -- Exploring the dataset -- Defining the discriminator -- Defining the generator -- Defining the inputs -- Starting the DCGAN -- Computing the loss function -- Discriminator loss -- Generator loss -- Optimizing the loss -- Train the DCGAN -- Least squares GAN -- Loss function -- LSGAN in TensorFlow -- Discriminator loss -- Generator loss -- GANs with Wasserstein distance -- Are we minimizing JS divergence in GANs? -- What is the Wasserstein distance? -- Demystifying the k-Lipschitz function -- The loss function of WGAN -- WGAN in TensorFlow -- Summary -- Questions -- Further reading -- Chapter 9: Learning More about GANs -- Conditional GANs -- Loss function of CGAN -- Generating specific handwritten digits using CGAN -- Defining the generator -- Defining discriminator -- Start the GAN! -- Computing the loss function -- Discriminator loss -- Generator loss -- Optimizing the loss -- Start training the CGAN -- Generate the handwritten digit, 7 -- Understanding InfoGAN -- Mutual information -- Architecture of the InfoGAN -- Constructing an InfoGAN in TensorFlow -- Defining generator -- Defining the discriminator -- Define the input placeholders -- Start the GAN -- Computing loss function -- Discriminator loss -- Generator loss -- Mutual information -- Optimizing the loss -- Beginning training -- Generating handwritten digits -- Translating images using a CycleGAN -- Role of generators -- Role of discriminators -- Loss function -- Cycle consistency loss
- Converting photos to paintings using a CycleGAN
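To make a few of the topics above concrete, here are some illustrative sketches; they are not the book's own code, and all sizes, weights, and data are made-up toy values. First, the forward pass of a small one-hidden-layer feedforward network, as covered in Chapter 1:

```python
import numpy as np

# A minimal sketch of forward propagation in a one-hidden-layer ANN.
# Layer sizes and random weights here are illustrative toy values.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(4,))          # input layer: 4 features
W1 = rng.normal(size=(4, 3))       # input -> hidden weights
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 2))       # hidden -> output weights
b2 = np.zeros(2)

h = sigmoid(x @ W1 + b1)                         # hidden layer activation
logits = h @ W2 + b2
y_hat = np.exp(logits) / np.exp(logits).sum()    # softmax output
print(y_hat)
```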
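Next, two of the Chapter 3 update rules, vanilla gradient descent and Adam, applied to a toy quadratic loss L(theta) = theta^2; the learning rate and loss are illustrative, not taken from the book:

```python
import numpy as np

# Toy loss L(theta) = theta**2, so dL/dtheta = 2*theta.
def grad(theta):
    return 2.0 * theta

# Vanilla gradient descent: theta <- theta - lr * grad(theta)
theta, lr = 5.0, 0.1
for _ in range(50):
    theta -= lr * grad(theta)
print("GD:", theta)

# Adam: bias-corrected first/second moment estimates of the gradient.
theta, lr = 5.0, 0.1
m = v = 0.0
beta1, beta2, eps = 0.9, 0.999, 1e-8
for t in range(1, 51):
    g = grad(theta)
    m = beta1 * m + (1 - beta1) * g         # first moment (mean)
    v = beta2 * v + (1 - beta2) * g * g     # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)            # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
print("Adam:", theta)
```

Both runs converge toward the minimum at 0; Adam's per-parameter scaling is what the adaptive-methods sections of Chapter 3 build up to.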
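A single step of an LSTM cell, following the standard forget/input/output gate equations described in Chapter 5; bias terms are omitted and all shapes and weights are simplifications for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_in, n_hid = 3, 4
# One weight matrix per gate, acting on [x_t, h_prev] concatenated.
Wf, Wi, Wo, Wc = [rng.normal(scale=0.1, size=(n_in + n_hid, n_hid))
                  for _ in range(4)]

x_t = rng.normal(size=(n_in,))
h_prev, c_prev = np.zeros(n_hid), np.zeros(n_hid)
z = np.concatenate([x_t, h_prev])

f = sigmoid(z @ Wf)               # forget gate
i = sigmoid(z @ Wi)               # input gate
o = sigmoid(z @ Wo)               # output gate
c_tilde = np.tanh(z @ Wc)         # candidate cell state
c_t = f * c_prev + i * c_tilde    # updating the cell state
h_t = o * np.tanh(c_t)            # updating the hidden state
print(h_t)
```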
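The core operation behind Chapter 6's convolutional layers, written naively with stride 1 and no padding; the image and kernel are toy data:

```python
import numpy as np

def conv2d(image, kernel):
    """Single-channel 2D cross-correlation, stride 1, no padding."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            # Elementwise product of the kernel with one image patch.
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])   # a simple edge-like filter
print(conv2d(image, kernel))                   # 4x4 feature map
```

Strides greater than 1 and zero padding, also covered in that chapter, change only the patch indexing and the output size.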
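Building a word2vec model with gensim, as Chapter 7 does; this sketch assumes gensim >= 4.0 (where the dimensionality parameter is named vector_size) and uses a toy corpus in place of the book's dataset:

```python
from gensim.models import Word2Vec

# Toy stand-in corpus: a list of tokenized sentences.
sentences = [
    ["deep", "learning", "with", "python"],
    ["gradient", "descent", "optimizes", "deep", "networks"],
    ["word", "embeddings", "capture", "meaning"],
]

model = Word2Vec(
    sentences,
    vector_size=50,   # embedding dimensionality
    window=2,         # context window size
    min_count=1,      # keep every word in this tiny corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
)

print(model.wv["deep"].shape)          # (50,)
print(model.wv.most_similar("deep"))   # nearest neighbors by cosine similarity
```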
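Finally, the standard GAN losses that Chapter 8 breaks down term by term, computed here from hypothetical discriminator outputs rather than a trained network:

```python
import numpy as np

eps = 1e-12                            # avoids log(0)
D_real = np.array([0.9, 0.8, 0.95])    # hypothetical D(x) on real samples
D_fake = np.array([0.2, 0.1, 0.3])     # hypothetical D(G(z)) on generated samples

# The discriminator maximizes log D(x) + log(1 - D(G(z)));
# written as a loss to minimize, we negate it.
d_loss = -np.mean(np.log(D_real + eps)) - np.mean(np.log(1 - D_fake + eps))

# The heuristic (non-saturating) generator loss: minimize -log D(G(z)).
g_loss = -np.mean(np.log(D_fake + eps))

print(d_loss, g_loss)
```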

