Energy Efficiency and Robustness of Advanced Machine Learning Architectures: A Cross-Layer Approach

Machine Learning (ML) algorithms have achieved high accuracy, and ML-based applications are widely deployed across many systems and platforms. However, developing efficient ML-based systems requires addressing three problems: energy-efficiency, robustness, and techniques that typically focus on optimizing fo...


Detailed bibliography
Main authors: Marchisio, Alberto; Shafique, Muhammad
Format: E-book
Language: English
Published: Milton: CRC Press, 2025 (Taylor & Francis / CRC Press LLC)
Edition: 1
Series: Chapman & Hall/CRC Artificial Intelligence and Robotics Series
ISBN:1032870133, 9781032855509, 9781032870137, 1032855509, 1040165036, 9781040165065, 9781040165034, 9781003530459, 1003530451, 1040165060
Online access: Get full text
Contents:
  • Cover -- Half Title -- Series Page -- Title Page -- Copyright Page -- Contents -- Authors -- CHAPTER 1: Introduction -- 1.1. OPTIMIZATION OBJECTIVES FOR DNN MODELS AND ARCHITECTURES -- 1.1.1. Energy-Efficiency -- 1.1.2. Robustness -- 1.2. SUMMARY OF THE STATE-OF-THE-ART CHALLENGES AND RESEARCH GOALS -- 1.2.1. Limitations of the State-of-the-Art -- 1.2.2. Scientific Objectives and Goals -- 1.3. BOOK CONTRIBUTIONS -- 1.4. BOOK OUTLINE -- CHAPTER 2: Background and Related Work -- 2.1. DEEP NEURAL NETWORKS -- 2.1.1. Layers and Operations -- 2.1.2. Training and Inference -- 2.1.3. DNN Models -- 2.1.4. DNN Hardware Architectures -- 2.1.5. DNN Optimizations for Energy-Efficiency -- 2.2. CAPSULE NETWORKS -- 2.2.1. Traditional DNNs vs. CapsNets -- 2.2.2. CapsNet Models and Applications -- 2.2.3. Summary of Challenges for Capsule Networks -- 2.3. SPIKING NEURAL NETWORKS -- 2.3.1. Spiking Neuron Models -- 2.3.2. Spike Coding Techniques -- 2.3.3. SNN Learning Techniques -- 2.3.4. Neuromorphic Architectures -- 2.3.5. Event-Based Cameras -- 2.3.6. Example of Event-Based Datasets -- 2.3.7. Summary of Challenges for SNNs -- 2.4. VULNERABILITIES OF DL SYSTEMS -- 2.4.1. Privacy Threats -- 2.4.2. Fault Injection and Hardware Trojans -- 2.4.3. Reliability Threats -- 2.4.4. Adversarial Security Threats -- 2.4.5. Vulnerability Studies for CapsNets -- 2.4.6. Vulnerability Studies for SNNs -- 2.5. SUMMARY OF BACKGROUND AND RELATED WORK -- CHAPTER 3: Hardware and Software Optimizations for Capsule Networks -- 3.1. FASTRCAPS: AN INTEGRATED FRAMEWORK FOR FAST YET ACCURATE TRAINING OF CAPSNETS -- 3.1.1. System Overview -- 3.1.2. Overview of Learning Rate Policies -- 3.1.3. Analysis of Learning Rate Policies on CapsNets -- 3.1.4. Overview of FasTrCaps Framework -- 3.1.5. Evaluation of the FasTrCaps Framework -- 3.1.6. Summary
  • 3.2. CAPSACC: AN EFFICIENT HARDWARE ACCELERATOR FOR CAPSNETS -- 3.2.1. Motivational Analyses of CapsNets Complexity and Execution Time -- 3.2.2. CapsAcc Architecture Design -- 3.2.3. Dataflow Design -- 3.2.4. Synthesis Evaluation of the Complete CapsAcc Architecture -- 3.2.5. Summary -- 3.3. FEECA: A METHODOLOGY TO DESIGN A FAST, ENERGY-EFFICIENT CAPSNET ACCELERATOR -- 3.3.1. Overview of the FEECA Methodology -- 3.3.2. Optimization Problem -- 3.3.3. Search Algorithms: Brute-Force vs. Heuristic Search -- 3.3.4. Set of Internal Primitives -- 3.3.5. Estimation of the Parameters of the Accelerator -- 3.3.6. Evaluation of our FEECA Methodology -- 3.3.7. Summary -- 3.4. DESCNET: DEVELOPING EFFICIENT SCRATCHPAD MEMORIES FOR CAPSNET HARDWARE -- 3.4.1. Overview of DESCNet Methodology -- 3.4.2. Required Architectural Modification and Key Research Question -- 3.4.3. Resource Analysis of CapsNet Inference -- 3.4.4. DESCNet: Scratchpad Memory Design -- 3.4.5. Our Methodology for the DSE of Scratchpad Memories -- 3.4.6. Evaluation of the DESCNet Methodology -- 3.4.7. Summary -- 3.5. Q-CAPSNETS: A SPECIALIZED FRAMEWORK FOR QUANTIZING CAPSNETS -- 3.5.1. System Overview -- 3.5.2. Analysis of Area and Energy Consumption for Reduced Wordlength -- 3.5.3. Rounding Schemes -- 3.5.4. Q-CapsNets Framework -- 3.5.5. Evaluation of our Q-CapsNets Framework -- 3.5.6. Summary -- 3.6. RED-CANE: RESILIENCE ANALYSIS AND DESIGN OF CAPSNETS UNDER APPROXIMATIONS -- 3.6.1. System Overview -- 3.6.2. Modeling the Errors as Injected Noise -- 3.6.3. ReD-CaNe Methodology -- 3.6.4. Evaluation of the ReD-CaNe Methodology -- 3.6.5. Summary -- 3.7. APPROXIMATE SQUASH AND SOFTMAX DESIGNS -- 3.7.1. System Overview -- 3.7.2. Approximate Computing for DNNs Nonlinear Operations -- 3.7.3. Approximate Softmax Designs -- 3.7.4. Approximate Squash Designs
  • 3.7.5. Evaluation of the Approximate Softmax and Squash Designs -- 3.7.6. Summary -- 3.8. SUMMARY OF HARDWARE AND SOFTWARE OPTIMIZATIONS FOR CAPSULE NETWORK -- CHAPTER 4: Adversarial Security Threats for DNNs and CapsNets -- 4.1. ROBCAPS: EVALUATING THE ROBUSTNESS OF CAPSNETS AGAINST AFFINE TRANSFORMATIONS AND ADVERSARIAL ATTACKS -- 4.1.1. System Overview -- 4.1.2. RobCaps Methodology -- 4.1.3. Experimental Setup -- 4.1.4. Robustness Against Affine Transformations -- 4.1.5. Robustness Against Adversarial Attacks -- 4.1.6. Analyzing the Contribution of Dynamic Routing to the Robustness of the DeepCaps -- 4.1.7. Summary -- 4.2. CAPSATTACKS: A STUDY ON THE SECURITY VULNERABILITIES OF CAPSNETS AGAINST ADVERSARIAL ATTACKS -- 4.2.1. System Overview -- 4.2.2. Generation of Targeted Imperceptible and Robust Adversarial Examples -- 4.2.3. Evaluation of the CapsAttack Methodology -- 4.2.4. Summary -- 4.3. FAKEWEATHER: ADVERSARIAL ATTACKS FOR DNNS EMULATING WEATHER CONDITIONS ON THE CAMERA LENS OF AUTONOMOUS SYSTEMS -- 4.3.1. System Overview -- 4.3.2. fakeWeather Attacks Design -- 4.3.3. Evaluation of the fakeWeather Attacks -- 4.3.4. Summary -- 4.4. SUMMARY OF ADVERSARIAL SECURITY THREATS FOR DNNS AND CAPSNETS -- CHAPTER 5: Integration of Multiple Design Objectives into NAS Frameworks for CapsNets and DNNs -- 5.1. FLOW FOR DESIGNING INTEGRATED FRAMEWORKS WITH MULTIPLE DESIGN OBJECTIVES -- 5.2. NASCAPS: A FRAMEWORK FOR NEURAL ARCHITECTURE SEARCH FOR OPTIMIZING ACCURACY AND HARDWARE EFFICIENCY OF CONVOLUTIONAL CAPSNETS -- 5.2.1. System Overview -- 5.2.2. NASCaps Framework -- 5.2.3. Evaluation of the NASCaps Framework -- 5.2.4. Summary -- 5.3. ROHNAS: A NAS FRAMEWORK WITH CONJOINT OPTIMIZATION FOR HARDWARE EFFICIENCY AND ADVERSARIAL ROBUSTNESS OF CONVOLUTIONAL AND CAPSNETS -- 5.3.1. System Overview -- 5.3.2. RoHNAS Framework
  • 5.3.3. Evaluation of the RoHNAS Framework -- 5.3.4. Summary -- 5.4. SUMMARY OF INTEGRATION OF MULTIPLE DESIGN OBJECTIVES INTO NAS FRAMEWORKS FOR CAPSNETS AND DNNS -- CHAPTER 6: Efficient Optimizations for Spiking Neural Networks on Neuromorphic Hardware -- 6.1. OVERVIEW OF THE LOIHI NEUROMORPHIC PROCESSOR -- 6.1.1. Neuron Model -- 6.1.2. Chip Architecture -- 6.1.3. Tools to Support Loihi Developers -- 6.2. EFFICIENT SNN FOR RECOGNIZING GESTURES ON LOIHI -- 6.2.1. System Overview -- 6.2.2. DNN-to-SNN Conversion -- 6.2.3. Pre-Processing Method for the DvsGesture Dataset -- 6.2.4. Evaluation of the Accuracy Results -- 6.2.5. Summary -- 6.3. CARSNN: AN EFFICIENT SNN FOR EVENT-BASED AUTONOMOUS CARS ON THE LOIHI NEUROMORPHIC PROCESSOR -- 6.3.1. System Overview -- 6.3.2. Problem Analysis and General Design Decisions -- 6.3.3. CarSNN Methodology -- 6.3.4. Evaluation of our CarSNN Methodology -- 6.3.5. Summary -- 6.4. LANESNNS: SPIKING NEURAL NETWORKS FOR LANE DETECTION ON THE LOIHI NEUROMORPHIC PROCESSOR -- 6.4.1. System Overview -- 6.4.2. Problem Analysis and General Design Decisions -- 6.4.3. LaneSNNs Design -- 6.4.4. Evaluation of LaneSNNs -- 6.4.5. Summary -- 6.5. SUMMARY OF EFFICIENT OPTIMIZATIONS FOR SPIKING NEURAL NETWORKS ON NEUROMORPHIC HARDWARE -- CHAPTER 7: Security Threats for SNNs on Discrete and Event-Based Data -- 7.1. SECURITY EVALUATION OF SNNS VS. DNNS -- 7.1.1. System Overview -- 7.1.2. Analysis: Applying Random Noise to SDBNs -- 7.1.3. Our Novel Methodology to Generate Imperceptible and Robust Adversarial Examples -- 7.1.4. Evaluation of our Attack Methodology -- 7.1.5. Summary -- 7.2. NEUROATTACK: EXTERNALLY TRIGGERED BIT-FLIPS FOR SNNS -- 7.2.1. System Overview -- 7.2.2. Bit-Flip Resilience Analysis of SNNs -- 7.2.3. NeuroAttack Methodology -- 7.2.4. Evaluation of the NeuroAttack Methodology -- 7.2.5. Summary
  • 7.3. ROBUST SNN METHODOLOGY THROUGH INHERENT STRUCTURAL PARAMETERS -- 7.3.1. System Overview -- 7.3.2. Case Study Analysis: Comparison DNNs vs. SNNs with the same Architectural Model -- 7.3.3. Threat Model -- 7.3.4. Robustness Exploration Methodology -- 7.3.5. Evaluation of the SNNs' Robustness -- 7.3.6. Summary -- 7.4. R-SNN: A METHODOLOGY FOR ROBUSTIFYING SNNS THROUGH NOISE FILTERS FOR DVS -- 7.4.1. System Overview -- 7.4.2. Case Study Analysis: SNN Robustness against Random Noise -- 7.4.3. R-SNN Methodology -- 7.4.4. Evaluation of the R-SNN Methodology -- 7.4.5. Summary -- 7.5. DVS-ATTACKS: A SET OF ADVERSARIAL ATTACKS ON EVENT-BASED SNNS -- 7.5.1. System Overview -- 7.5.2. Case Study Analysis: SNN Robustness against Random Noise -- 7.5.3. Noise Filters for Dynamic Vision Sensors -- 7.5.4. Threat Model -- 7.5.5. DVS-Attacks Methodologies -- 7.5.6. Evaluation of the DVS-Attacks -- 7.5.7. Summary -- 7.6. SUMMARY OF SECURITY THREATS FOR SNNS -- CHAPTER 8: Conclusion and Outlook -- 8.1. BOOK SUMMARY -- 8.2. ROLE OF THE PROPOSED TECHNIQUES IN THE EVOLVING FIELD OF ML -- 8.3. FUTURE WORKS -- Bibliography -- Index