Hands-On Deep Learning Algorithms with Python: Master Deep Learning Algorithms with Extensive Math by Implementing Them Using TensorFlow

This book introduces basic-to-advanced deep learning algorithms used in a production environment by AI researchers and principal data scientists; it explains algorithms intuitively, including the underlying math, and shows how to implement them using popular Python-based deep learning libraries such as TensorFlow.


Detailed bibliography
Main author: Ravichandiran, Sudharsan
Format: eBook
Language: English
Published: Birmingham: Packt Publishing, Limited, 2019
Edition: 1
Subjects:
ISBN: 9781789344158, 1789344158
Online access: Get full text
Abstract This book introduces basic-to-advanced deep learning algorithms used in a production environment by AI researchers and principal data scientists; it explains algorithms intuitively, including the underlying math, and shows how to implement them using popular Python-based deep learning libraries such as TensorFlow.
Author Ravichandiran, Sudharsan
ContentType eBook
DEWEY 005.3
DOI 10.0000/9781789344516
Discipline Computer Science
EISBN 9781789344516
1789344514
Edition 1
ISBN 9781789344158
1789344158
IsPeerReviewed false
IsScholarly false
LCCallNum_Ident QA76.76.A65 R385 2019
Language English
OCLC 1111433895
PQID EBC5841215
PageCount 498
PublicationCentury 2000
PublicationDate 2019
[2019]
PublicationDateYYYYMMDD 2019-01-01
PublicationDecade 2010
PublicationPlace Birmingham, UK
PublicationYear 2019
Publisher Packt Publishing Limited
RestrictionsOnAccess restricted access
SourceID walterdegruyter
proquest
SourceType Publisher
SubjectTerms Algorithms
COMPUTERS / Bioinformatics
COMPUTERS / Buyer's Guides
COMPUTERS / General
TensorFlow
Subtitle Master Deep Learning Algorithms with Extensive Math by Implementing Them Using TensorFlow
TableOfContents Cover -- Title Page -- Copyright and Credits -- Dedication -- About Packt -- Contributors -- Table of Contents -- Preface -- Section 1: Getting Started with Deep Learning -- Chapter 1: Introduction to Deep Learning -- What is deep learning? -- Biological and artificial neurons -- ANN and its layers -- Input layer -- Hidden layer -- Output layer -- Exploring activation functions -- The sigmoid function -- The tanh function -- The Rectified Linear Unit function -- The leaky ReLU function -- The Exponential linear unit function -- The Swish function -- The softmax function -- Forward propagation in ANN -- How does ANN learn? -- Debugging gradient descent with gradient checking -- Putting it all together -- Building a neural network from scratch -- Summary -- Questions -- Further reading -- Chapter 2: Getting to Know TensorFlow -- What is TensorFlow? -- Understanding computational graphs and sessions -- Sessions -- Variables, constants, and placeholders -- Variables -- Constants -- Placeholders and feed dictionaries -- Introducing TensorBoard -- Creating a name scope -- Handwritten digit classification using TensorFlow -- Importing the required libraries -- Loading the dataset -- Defining the number of neurons in each layer -- Defining placeholders -- Forward propagation -- Computing loss and backpropagation -- Computing accuracy -- Creating summary -- Training the model -- Visualizing graphs in TensorBoard -- Introducing eager execution -- Math operations in TensorFlow -- TensorFlow 2.0 and Keras -- Bonjour Keras -- Defining the model -- Defining a sequential model -- Defining a functional model -- Compiling the model -- Training the model -- Evaluating the model -- MNIST digit classification using TensorFlow 2.0 -- Should we use Keras or TensorFlow? -- Summary -- Questions -- Further reading -- Section 2: Fundamental Deep Learning Algorithms
Chapter 3: Gradient Descent and Its Variants -- Demystifying gradient descent -- Performing gradient descent in regression -- Importing the libraries -- Preparing the dataset -- Defining the loss function -- Computing the gradients of the loss function -- Updating the model parameters -- Gradient descent versus stochastic gradient descent -- Momentum-based gradient descent -- Gradient descent with momentum -- Nesterov accelerated gradient -- Adaptive methods of gradient descent -- Setting a learning rate adaptively using Adagrad -- Doing away with the learning rate using Adadelta -- Overcoming the limitations of Adagrad using RMSProp -- Adaptive moment estimation -- Adamax - Adam based on infinity-norm -- Adaptive moment estimation with AMSGrad -- Nadam - adding NAG to ADAM -- Summary -- Questions -- Further reading -- Chapter 4: Generating Song Lyrics Using RNN -- Introducing RNNs -- The difference between feedforward networks and RNNs -- Forward propagation in RNNs -- Backpropagating through time -- Gradients with respect to the hidden to output weight, V -- Gradients with respect to hidden to hidden layer weights, W -- Gradients with respect to input to the hidden layer weight, U -- Vanishing and exploding gradients problem -- Gradient clipping -- Generating song lyrics using RNNs -- Implementing in TensorFlow -- Data preparation -- Defining the network parameters -- Defining placeholders -- Defining forward propagation -- Defining BPTT -- Start generating songs -- Different types of RNN architectures -- One-to-one architecture -- One-to-many architecture -- Many-to-one architecture -- Many-to-many architecture -- Summary -- Questions -- Further reading -- Chapter 5: Improvements to the RNN -- LSTM to the rescue -- Understanding the LSTM cell -- Forget gate -- Input gate -- Output gate -- Updating the cell state -- Updating hidden state
Forward propagation in LSTM -- Backpropagation in LSTM -- Gradients with respect to gates -- Gradients with respect to weights -- Gradients with respect to V -- Gradients with respect to W -- Gradients with respect to U -- Predicting Bitcoin prices using LSTM model -- Data preparation -- Defining the parameters -- Define the LSTM cell -- Defining forward propagation -- Defining backpropagation -- Training the LSTM model -- Making predictions using the LSTM model -- Gated recurrent units -- Understanding the GRU cell -- Update gate -- Reset gate -- Updating hidden state -- Forward propagation in a GRU cell -- Backpropagation in a GRU cell -- Gradient with respect to gates -- Gradients with respect to weights -- Gradients with respect to V -- Gradients with respect to W -- Gradients with respect to U -- Implementing a GRU cell in TensorFlow -- Defining the weights -- Defining forward propagation -- Bidirectional RNN -- Going deep with deep RNN -- Language translation using the seq2seq model -- Encoder -- Decoder -- Attention is all we need -- Summary -- Questions -- Further reading -- Chapter 6: Demystifying Convolutional Networks -- What are CNNs? -- Convolutional layers -- Strides -- Padding -- Pooling layers -- Fully connected layers -- The architecture of CNNs -- The math behind CNNs -- Forward propagation -- Backward propagation -- Implementing a CNN in TensorFlow -- Defining helper functions -- Defining the convolutional network -- Computing loss -- Starting the training -- Visualizing extracted features -- CNN architectures -- LeNet architecture -- Understanding AlexNet -- Architecture of VGGNet -- GoogleNet -- Inception v1 -- Inception v2 and v3 -- Capsule networks -- Understanding Capsule networks -- Computing prediction vectors -- Coupling coefficients -- Squashing function -- Dynamic routing algorithm -- Architecture of the Capsule network
The loss function -- Margin loss -- Reconstruction loss -- Building Capsule networks in TensorFlow -- Defining the squash function -- Defining a dynamic routing algorithm -- Computing primary and digit capsules -- Masking the digit capsule -- Defining the decoder -- Computing the accuracy of the model -- Calculating loss -- Margin loss -- Reconstruction loss -- Total loss -- Training the Capsule network -- Summary -- Questions -- Further reading -- Chapter 7: Learning Text Representations -- Understanding the word2vec model -- Understanding the CBOW model -- CBOW with a single context word -- Forward propagation -- Backward propagation -- CBOW with multiple context words -- Understanding skip-gram model -- Forward propagation in skip-gram -- Backward propagation -- Various training strategies -- Hierarchical softmax -- Negative sampling -- Subsampling frequent words -- Building the word2vec model using gensim -- Loading the dataset -- Preprocessing and preparing the dataset -- Building the model -- Evaluating the embeddings -- Visualizing word embeddings in TensorBoard -- Doc2vec -- Paragraph Vector - Distributed Memory model -- Paragraph Vector - Distributed Bag of Words model -- Finding similar documents using doc2vec -- Understanding skip-thoughts algorithm -- Quick-thoughts for sentence embeddings -- Summary -- Questions -- Further reading -- Section 3: Advanced Deep Learning Algorithms -- Chapter 8: Generating Images Using GANs -- Differences between discriminative and generative models -- Say hello to GANs! -- Breaking down the generator -- Breaking down the discriminator -- How do they learn though? -- Architecture of a GAN -- Demystifying the loss function -- Discriminator loss -- First term -- Second term -- Final term -- Generator loss -- Total loss -- Heuristic loss -- Generating images using GANs in TensorFlow -- Reading the dataset
Defining the generator -- Defining the discriminator -- Defining the input placeholders -- Starting the GAN! -- Computing the loss function -- Discriminator loss -- Generator loss -- Optimizing the loss -- Starting the training -- Generating handwritten digits -- DCGAN - Adding convolution to a GAN -- Deconvolutional generator -- Convolutional discriminator -- Implementing DCGAN to generate CIFAR images -- Exploring the dataset -- Defining the discriminator -- Defining the generator -- Defining the inputs -- Starting the DCGAN -- Computing the loss function -- Discriminator loss -- Generator loss -- Optimizing the loss -- Train the DCGAN -- Least squares GAN -- Loss function -- LSGAN in TensorFlow -- Discriminator loss -- Generator loss -- GANs with Wasserstein distance -- Are we minimizing JS divergence in GANs? -- What is the Wasserstein distance? -- Demystifying the k-Lipschitz function -- The loss function of WGAN -- WGAN in TensorFlow -- Summary -- Questions -- Further reading -- Chapter 9: Learning More about GANs -- Conditional GANs -- Loss function of CGAN -- Generating specific handwritten digits using CGAN -- Defining the generator -- Defining discriminator -- Start the GAN! -- Computing the loss function -- Discriminator loss -- Generator loss -- Optimizing the loss -- Start training the CGAN -- Generate the handwritten digit, 7 -- Understanding InfoGAN -- Mutual information -- Architecture of the InfoGAN -- Constructing an InfoGAN in TensorFlow -- Defining generator -- Defining the discriminator -- Define the input placeholders -- Start the GAN -- Computing loss function -- Discriminator loss -- Generator loss -- Mutual information -- Optimizing the loss -- Beginning training -- Generating handwritten digits -- Translating images using a CycleGAN -- Role of generators -- Role of discriminators -- Loss function -- Cycle consistency loss
Converting photos to paintings using a CycleGAN
Title Hands-On Deep Learning Algorithms with Python
URI https://ebookcentral.proquest.com/lib/[SITE_ID]/detail.action?docID=5841215