Practical Machine Learning for Data Analysis Using Python
This book is a problem solver's guide to creating real-world intelligent systems. It provides a comprehensive approach with concepts, practices, hands-on examples, and sample code, teaching readers the vital skills required to understand and solve different problems with machine learning.
Saved in:
| Main Author: | |
|---|---|
| Medium: | eBook; Book |
| Language: | English |
| Published: | London : Elsevier (Academic Press; Elsevier Science & Technology), 2020 |
| Edition: | 1 |
| Subjects: | |
| ISBN: | 0128213795, 9780128213797 |
| Online Access: | Get full text |
Contents:
- Title Page Preface Table of Contents 1. Introduction 2. Data Preprocessing 3. Machine Learning Techniques 4. Classification Examples for Healthcare 5. Other Classification Examples 6. Regression Examples 7. Clustering Examples Index
- Cover -- Title -- Copyright -- Dedication -- Contents -- Preface -- Acknowledgments -- Chapter 1 - Introduction -- 1.1 - What is machine learning? -- 1.1.1 - Why is machine learning needed? -- 1.1.2 - Making data-driven decisions -- 1.1.3 - Definitions and key terminology -- 1.1.4 - Key tasks of machine learning -- 1.1.5 - Machine learning techniques -- 1.2 - Machine learning framework -- 1.2.1 - Data collection -- 1.2.2 - Data description -- 1.2.3 - Exploratory data analysis -- 1.2.4 - Data quality analysis -- 1.2.5 - Data preparation -- 1.2.6 - Data integration -- 1.2.7 - Data wrangling -- 1.2.8 - Feature scaling and feature extraction -- 1.2.9 - Feature selection and dimension reduction -- 1.2.10 - Modeling -- 1.2.11 - Selecting modeling techniques -- 1.2.12 - Model building -- 1.2.13 - Model assessment and tuning -- 1.2.14 - Implementation and examining the created model -- 1.2.15 - Supervised machine learning framework -- 1.2.16 - Unsupervised machine learning framework -- 1.3 - Performance evaluation -- 1.3.1 - Confusion matrix -- 1.3.2 - F-measure analysis -- 1.3.3 - ROC analysis -- 1.3.4 - Kappa statistic -- 1.3.5 - What is measured -- 1.3.6 - How they are measured -- 1.3.7 - How to interpret estimates -- 1.3.8 - k-Fold cross-validation in scikit-learn -- 1.3.9 - How to choose the right algorithm -- 1.4 - The Python machine learning environment -- 1.4.1 - Pitfalls -- 1.4.2 - Drawbacks -- 1.4.3 - The NumPy library -- 1.4.4 - Pandas -- 1.5 - Summary -- References -- Chapter 2 - Data preprocessing -- 2.1 - Introduction -- 2.2 - Feature extraction and transformation -- 2.2.1 - Types of features -- 2.2.2 - Statistical features -- 2.2.3 - Structured features -- 2.2.4 - Feature transformations -- 2.2.5 - Thresholding and discretization -- 2.2.6 - Data manipulation -- 2.2.7 - Standardization -- 2.2.8 - Normalization and calibration
- 2.2.9 - Incomplete features -- 2.2.10 - Feature extraction methods -- 2.2.11 - Feature extraction using wavelet transform -- 2.2.11.1 - The continuous wavelet transform (CWT) -- 2.2.11.2 - The discrete wavelet transform (DWT) -- 2.2.11.3 - The stationary wavelet transform (SWT) -- 2.2.11.4 - The wavelet packet decomposition (WPD) -- 2.3 - Dimension reduction -- 2.3.1 - Feature construction and selection -- 2.3.2 - Univariate feature selection -- 2.3.3 - Recursive feature elimination -- 2.3.4 - Feature selection from a model -- 2.3.5 - Principal component analysis (PCA) -- 2.3.6 - Incremental PCA -- 2.3.7 - Kernel principal component analysis -- 2.3.8 - Neighborhood components analysis -- 2.3.9 - Independent component analysis -- 2.3.10 - Linear discriminant analysis (LDA) -- 2.3.11 - Entropy -- 2.4 - Clustering for feature extraction and dimension reduction -- References -- Chapter 3 - Machine learning techniques -- 3.1 - Introduction -- 3.2 - What is machine learning? -- 3.2.1 - Understanding machine learning -- 3.2.2 - What makes machines learn? -- 3.2.3 - Machine learning is a multidisciplinary field -- 3.2.4 - Machine learning problem -- 3.2.5 - Goals of learning -- 3.2.6 - Challenges in machine learning -- 3.3 - Python libraries -- 3.3.1 - Scikit-learn -- 3.3.2 - TensorFlow -- 3.3.3 - Keras -- 3.3.4 - Building a model with Keras -- 3.3.5 - The natural language tool kit -- 3.4 - Learning scenarios -- 3.5 - Supervised learning algorithms -- 3.5.1 - Classification -- 3.5.2 - Forecasting, prediction, and regression -- 3.5.3 - Linear models -- 3.5.4 - The perceptron -- 3.5.5 - Logistic regression -- 3.5.6 - Linear discriminant analysis -- 3.5.7 - Artificial neural networks -- 3.5.8 - k-Nearest neighbors -- 3.5.9 - Support vector machines -- 3.5.10 - Decision tree classifiers -- 3.5.11 - Naive Bayes -- 3.5.12 - Ensemble methods -- 3.5.13 - Bagging
- 3.5.14 - Random forest -- 3.5.15 - Boosting -- 3.5.15.1 - Adaptive Boosting (AdaBoost) -- 3.5.15.2 - Gradient Boosting -- 3.5.16 - Other ensemble methods -- 3.5.17 - Deep learning -- 3.5.18 - Deep neural networks -- 3.5.19 - Recurrent neural networks -- 3.5.20 - Autoencoders -- 3.5.21 - Long short-term memory (LSTM) networks -- 3.5.22 - Convolutional neural networks -- 3.5.22.1 - Convolution layer -- 3.5.22.2 - Pooling layer -- 3.6 - Unsupervised learning -- 3.6.1 - K-means algorithm -- 3.6.2 - Silhouettes -- 3.6.3 - Anomaly detection -- 3.6.4 - Association rule-mining -- 3.7 - Reinforcement learning -- 3.8 - Instance-based learning -- 3.9 - Summary -- References -- Chapter 4 - Classification examples for healthcare -- 4.1 - Introduction -- 4.2 - EEG signal analysis -- 4.2.1 - Epileptic seizure prediction and detection -- 4.2.2 - Emotion recognition -- 4.2.3 - Classification of focal and nonfocal epileptic EEG signals -- 4.2.4 - Migraine detection -- 4.3 - EMG signal analysis -- 4.3.1 - Diagnosis of neuromuscular disorders -- 4.3.2 - EMG signals in prosthesis control -- 4.3.3 - EMG signals in rehabilitation robotics -- 4.4 - ECG signal analysis -- 4.4.1 - Diagnosis of heart arrhythmia -- 4.5 - Human activity recognition -- 4.5.1 - Sensor-based human activity recognition -- 4.5.2 - Smartphone-based recognition of human activities -- 4.6 - Microarray gene expression data classification for cancer detection -- 4.7 - Breast cancer detection -- 4.8 - Classification of the cardiotocogram data for anticipation of fetal risks -- 4.9 - Diabetes detection -- 4.10 - Heart disease detection -- 4.11 - Diagnosis of chronic kidney disease (CKD) -- 4.12 - Summary -- References -- Chapter 5 - Other classification examples -- 5.1 - Intrusion detection -- 5.2 - Phishing website detection -- 5.3 - Spam e-mail detection -- 5.4 - Credit scoring
- 5.5 - Credit card fraud detection -- 5.6 - Handwritten digit recognition using CNN -- 5.7 - Fashion-MNIST image classification with CNN -- 5.8 - CIFAR image classification using CNN -- 5.9 - Text classification -- 5.10 - Summary -- References -- Chapter 6 - Regression examples -- 6.1 - Introduction -- 6.2 - Stock market price index return forecasting -- 6.3 - Inflation forecasting -- 6.4 - Electrical load forecasting -- 6.5 - Wind speed forecasting -- 6.6 - Tourism demand forecasting -- 6.7 - House prices prediction -- 6.8 - Bike usage prediction -- 6.9 - Summary -- References -- Chapter 7 - Clustering examples -- 7.1 - Introduction -- 7.2 - Clustering -- 7.2.1 - Evaluating the output of clustering methods -- 7.2.2 - Applications of cluster analysis -- 7.2.3 - Number of possible clusterings -- 7.2.4 - Types of clustering algorithms -- 7.3 - The k-means clustering algorithm -- 7.4 - The k-medoids clustering algorithm -- 7.5 - Hierarchical clustering -- 7.5.1 - Agglomerative clustering algorithm -- 7.5.2 - Divisive clustering algorithm -- 7.6 - The fuzzy c-means clustering algorithm -- 7.7 - Density-based clustering algorithms -- 7.7.1 - The DBSCAN algorithm -- 7.7.2 - OPTICS clustering algorithms -- 7.8 - Expectation maximization for Gaussian mixture model clustering -- 7.9 - Bayesian clustering -- 7.10 - Silhouette analysis -- 7.11 - Image segmentation with clustering -- 7.12 - Feature extraction with clustering -- 7.13 - Clustering for classification -- 7.14 - Summary -- References -- Index -- Back cover
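The book's supervised-learning chapters pair scikit-learn estimators such as random forests (3.5.14) with healthcare classification tasks like breast cancer detection (4.7). The following is a minimal sketch of that kind of workflow, not code from the book, using scikit-learn's bundled breast-cancer dataset and a held-out test split:

```python
# Minimal sketch (not from the book): random forest classification of the
# scikit-learn breast-cancer dataset, evaluated on a held-out test split
# with the confusion-matrix style of analysis described in Section 1.3.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Load features and binary labels (malignant vs. benign)
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit an ensemble of 100 decision trees and predict on unseen data
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, y_pred))
print("confusion matrix:\n", confusion_matrix(y_test, y_pred))
```

The same pattern (load data, split, fit, predict, score) underlies most of the classification and regression examples listed above; only the estimator and the evaluation metric change.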


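Chapter 7 covers unsupervised workflows such as k-means (7.3) scored with silhouette analysis (7.10). A minimal sketch of that kind of pipeline, again not code from the book, on a synthetic dataset:

```python
# Minimal sketch (not from the book): k-means clustering of synthetic blob
# data, with silhouette analysis to score cluster compactness/separation.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Three well-separated Gaussian blobs as a toy dataset
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=0)

# Fit k-means with k=3 and assign each point to a cluster
km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(X)

# Silhouette score near 1 means compact, well-separated clusters
print("silhouette:", silhouette_score(X, labels))
```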