High performance computing : modern systems and practices
High Performance Computing: Modern Systems and Practices is a fully comprehensive and easily accessible treatment of high performance computing, covering fundamental concepts and essential knowledge while providing key skills training. Because an understanding of HPC is central to achieving advance...
Saved in:
| Main authors: | Sterling, Thomas; Anderson, Matthew; Brodowicz, Maciej |
|---|---|
| Medium: | E-book; Book |
| Language: | English |
| Published: | Cambridge, Mass.: Elsevier, Morgan Kaufmann, 2018 (Elsevier Science & Technology / Morgan Kaufmann) |
| Edition: | 1 |
| Subjects: | |
| ISBN: | 012420158X, 9780124201583 |
| Online access: | Get full text |
Contents:
- Front Cover -- High Performance Computing -- High Performance Computing: Modern Systems and Practices -- Copyright -- Dedication -- Contents -- Foreword -- Preface -- THE PURPOSE OF THIS TEXTBOOK -- ORGANIZATION OF THIS BOOK -- I INTRODUCTORY AND BASIC IDEAS (CHAPTERS 1 AND 4) -- II THROUGHPUT COMPUTING FOR JOB-STREAM PARALLELISM (CHAPTERS 5 AND 11) -- III SHARED-MEMORY MULTITHREADED COMPUTING (CHAPTERS 6 AND 7) -- IV MESSAGE-PASSING COMPUTING (CHAPTER 8) -- V ACCELERATING GPU COMPUTING (CHAPTERS 15 AND 16) -- VI BUILDING SIGNIFICANT PROGRAMS (CHAPTERS 9, 10, AND 12-14) -- VII WORKING WITH THE REAL SYSTEM (CHAPTERS 11 AND 17-23) -- VIII NEXT STEPS -- WHO CAN BENEFIT FROM THIS TEXTBOOK? -- HOW TO USE THIS TEXTBOOK -- Acknowledgments -- DEDICATION TO PAUL MESSINA, WRITTEN BY THOMAS STERLING -- 1 - INTRODUCTION -- 1.1 HIGH PERFORMANCE COMPUTING DISCIPLINES -- 1.1.1 DEFINITION -- 1.1.2 APPLICATION PROGRAMS -- 1.1.3 PERFORMANCE AND METRICS -- 1.1.4 HIGH PERFORMANCE COMPUTING SYSTEMS -- 1.1.5 SUPERCOMPUTING PROBLEMS -- 1.1.6 APPLICATION PROGRAMMING -- 1.2 IMPACT OF SUPERCOMPUTING ON SCIENCE, SOCIETY, AND SECURITY -- 1.2.1 CATALYZING FRAUD DETECTION AND MARKET DATA ANALYTICS -- 1.2.2 DISCOVERING, MANAGING, AND DISTRIBUTING OIL AND GAS -- 1.2.3 ACCELERATING INNOVATION IN MANUFACTURING -- 1.2.4 PERSONALIZED MEDICINE AND DRUG DISCOVERY -- 1.2.5 PREDICTING NATURAL DISASTERS AND UNDERSTANDING CLIMATE CHANGE -- 1.3 ANATOMY OF A SUPERCOMPUTER -- 1.4 COMPUTER PERFORMANCE -- 1.4.1 PERFORMANCE -- 1.4.2 PEAK PERFORMANCE -- 1.4.3 SUSTAINED PERFORMANCE -- 1.4.4 SCALING -- 1.4.5 PERFORMANCE DEGRADATION -- 1.4.6 PERFORMANCE IMPROVEMENT -- 1.5 A BRIEF HISTORY OF SUPERCOMPUTING -- 1.5.1 EPOCH I-AUTOMATED CALCULATORS THROUGH MECHANICAL TECHNOLOGIES
- 1.5.2 EPOCH II-VON NEUMANN ARCHITECTURE IN VACUUM TUBES -- 1.5.3 EPOCH III-INSTRUCTION-LEVEL PARALLELISM -- 1.5.4 EPOCH IV-VECTOR PROCESSING AND INTEGRATION -- 1.5.5 EPOCH V-SINGLE-INSTRUCTION MULTIPLE DATA ARRAY -- 1.5.6 EPOCH VI-COMMUNICATING SEQUENTIAL PROCESSORS AND VERY LARGE SCALE INTEGRATION -- 1.5.7 EPOCH VII-MULTICORE PETAFLOPS -- 1.5.8 NEODIGITAL AGE AND BEYOND MOORE'S LAW -- 1.6 THIS TEXTBOOK AS A GUIDE AND TOOL FOR THE STUDENT -- 1.7 SUMMARY AND OUTCOMES OF CHAPTER 1 -- 1.8 QUESTIONS AND PROBLEMS -- REFERENCES -- 2 - HPC ARCHITECTURE 1: SYSTEMS AND TECHNOLOGIES -- 2.1 INTRODUCTION -- 2.2 KEY PROPERTIES OF HPC ARCHITECTURE -- 2.2.1 SPEED -- 2.2.2 PARALLELISM -- 2.2.3 EFFICIENCY -- 2.2.4 POWER -- 2.2.5 RELIABILITY -- 2.2.6 PROGRAMMABILITY -- 2.3 PARALLEL ARCHITECTURE FAMILIES-FLYNN'S TAXONOMY -- 2.4 ENABLING TECHNOLOGY -- 2.4.1 TECHNOLOGY EPOCHS -- 2.4.2 ROLES OF TECHNOLOGIES -- 2.4.3 DIGITAL LOGIC -- 2.4.4 MEMORY TECHNOLOGIES -- 2.4.4.1 Early Memory Devices -- 2.4.4.2 Modern Memory Technologies -- 2.5 VON NEUMANN SEQUENTIAL PROCESSORS -- 2.6 VECTOR AND PIPELINING -- 2.6.1 PIPELINE PARALLELISM -- 2.6.2 VECTOR PROCESSING -- 2.7 SINGLE-INSTRUCTION, MULTIPLE DATA ARRAY -- 2.7.1 SINGLE-INSTRUCTION, MULTIPLE DATA ARCHITECTURE -- 2.7.2 AMDAHL'S LAW -- 2.8 MULTIPROCESSORS -- 2.8.1 SHARED-MEMORY MULTIPROCESSORS -- 2.8.2 MASSIVELY PARALLEL PROCESSORS -- 2.8.3 COMMODITY CLUSTERS -- 2.9 HETEROGENEOUS COMPUTER STRUCTURES -- 2.10 SUMMARY AND OUTCOMES OF CHAPTER 2 -- 2.11 QUESTIONS AND PROBLEMS -- REFERENCES -- 3 - COMMODITY CLUSTERS -- 3.1 INTRODUCTION -- 3.1.1 DEFINITION OF "COMMODITY CLUSTER" -- 3.1.2 MOTIVATION AND JUSTIFICATION FOR CLUSTERS -- 3.1.3 CLUSTER ELEMENTS -- 3.1.4 IMPACT ON TOP 500 LIST -- 3.1.5 BRIEF HISTORY -- 3.1.6 CHAPTER GUIDE -- 3.2 BEOWULF CLUSTER PROJECT -- 3.3 HARDWARE ARCHITECTURE -- 3.3.1 THE NODE -- 3.3.2 SYSTEM AREA NETWORKS
- 3.3.3 SECONDARY STORAGE -- 3.3.4 COMMERCIAL SYSTEMS SUMMARY -- 3.4 PROGRAMMING INTERFACES -- 3.4.1 HIGH PERFORMANCE COMPUTING PROGRAMMING LANGUAGES -- 3.4.2 PARALLEL PROGRAMMING MODALITIES -- 3.5 SOFTWARE ENVIRONMENT -- 3.5.1 OPERATING SYSTEMS -- 3.5.2 RESOURCE MANAGEMENT -- 3.5.3 DEBUGGER -- 3.5.4 PERFORMANCE PROFILING -- 3.5.5 VISUALIZATION -- 3.6 BASIC METHODS OF USE -- 3.6.1 LOGGING ON -- 3.6.2 USER SPACE AND DIRECTORY SYSTEM -- 3.6.3 PACKAGE CONFIGURATION AND BUILDING -- 3.6.4 COMPILERS AND COMPILING -- 3.6.5 RUNNING APPLICATIONS -- 3.7 SUMMARY AND OUTCOMES OF CHAPTER 3 -- 3.8 QUESTIONS AND EXERCISES -- REFERENCES -- 4 - BENCHMARKING -- 4.1 INTRODUCTION -- 4.2 KEY PROPERTIES OF AN HPC BENCHMARK -- 4.3 STANDARD HPC COMMUNITY BENCHMARKS -- 4.4 HIGHLY PARALLEL COMPUTING LINPACK -- 4.5 HPC CHALLENGE BENCHMARK SUITE -- 4.6 HIGH PERFORMANCE CONJUGATE GRADIENTS -- 4.7 NAS PARALLEL BENCHMARKS -- 4.8 GRAPH500 -- 4.9 MINIAPPLICATIONS AS BENCHMARKS -- 4.10 SUMMARY AND OUTCOMES OF CHAPTER 4 -- 4.11 EXERCISES -- REFERENCES -- 5 - THE ESSENTIAL RESOURCE MANAGEMENT -- 5.1 MANAGING RESOURCES -- 5.2 THE ESSENTIAL SLURM -- 5.2.1 ARCHITECTURE OVERVIEW -- 5.2.2 WORKLOAD ORGANIZATION -- 5.2.3 SLURM SCHEDULING -- 5.2.3.1 Gang Scheduling -- 5.2.3.2 Preemption -- 5.2.3.3 Generic Resources -- 5.2.3.4 Trackable Resources -- 5.2.3.5 Elastic Computing -- 5.2.3.6 High-Throughput Computing -- 5.2.4 SUMMARY OF COMMANDS -- 5.2.4.1 srun -- 5.2.4.2 salloc -- 5.2.4.3 sbatch -- 5.2.4.4 squeue -- 5.2.4.5 scancel -- 5.2.4.6 sacct -- 5.2.4.7 sinfo -- 5.2.5 SLURM JOB SCRIPTING -- 5.2.5.1 Script Components -- 5.2.5.2 MPI Scripts -- 5.2.5.3 OpenMP Scripts -- 5.2.5.4 Concurrent Applications -- 5.2.5.5 Environment Variables -- 5.2.6 SLURM CHEAT SHEET -- 5.3 THE ESSENTIAL PORTABLE BATCH SYSTEM -- 5.3.1 PORTABLE BATCH SYSTEM OVERVIEW -- 5.3.2 PORTABLE BATCH SYSTEM ARCHITECTURE
- 5.3.3 SUMMARY OF PBS COMMANDS -- 5.3.3.1 qsub -- 5.3.3.2 qdel -- 5.3.3.3 qstat -- 5.3.3.3.1 Job Status Query -- 5.3.3.3.2 Queue Status Query -- 5.3.3.3.3 Server Status Query -- 5.3.3.4 tracejob -- 5.3.3.5 pbsnodes -- 5.3.4 PBS JOB SCRIPTING -- 5.3.4.1 OpenMP Jobs -- 5.3.4.2 MPI Jobs -- 5.3.4.3 Environment Variables of Interest -- 5.3.5 PBS CHEAT SHEET -- 5.4 SUMMARY AND OUTCOMES OF CHAPTER 5 -- 5.5 QUESTIONS AND PROBLEMS -- REFERENCES -- 6 - SYMMETRIC MULTIPROCESSOR ARCHITECTURE -- 6.1 INTRODUCTION -- 6.2 ARCHITECTURE OVERVIEW -- 6.3 AMDAHL'S LAW PLUS -- 6.4 PROCESSOR CORE ARCHITECTURE -- 6.4.1 EXECUTION PIPELINE -- 6.4.2 INSTRUCTION-LEVEL PARALLELISM -- 6.4.3 BRANCH PREDICTION -- 6.4.4 FORWARDING -- 6.4.5 RESERVATION STATIONS -- 6.4.6 MULTITHREADING -- 6.5 MEMORY HIERARCHY -- 6.5.1 DATA REUSE AND LOCALITY -- 6.5.2 MEMORY HIERARCHY -- 6.5.3 MEMORY SYSTEM PERFORMANCE -- 6.6 PCI BUS -- 6.7 EXTERNAL I/O INTERFACES -- 6.7.1 NETWORK INTERFACE CONTROLLERS -- 6.7.1.1 Ethernet -- 6.7.1.2 InfiniBand -- 6.7.2 SERIAL ADVANCED TECHNOLOGY ATTACHMENT -- 6.7.3 JTAG -- 6.7.4 UNIVERSAL SERIAL BUS -- 6.8 SUMMARY AND OUTCOMES OF CHAPTER 6 -- 6.9 QUESTIONS AND EXERCISES -- REFERENCES -- 7 - THE ESSENTIAL OPENMP -- 7.1 INTRODUCTION -- 7.2 OVERVIEW OF OPENMP PROGRAMMING MODEL -- 7.2.1 THREAD PARALLELISM -- 7.2.2 THREAD VARIABLES -- 7.2.3 RUNTIME LIBRARY AND ENVIRONMENT VARIABLES -- 7.2.3.1 Environment Variables -- 7.2.3.2 Runtime Library Routines -- 7.2.3.3 Directives -- 7.3 PARALLEL THREADS AND LOOPS -- 7.3.1 PARALLEL THREADS -- 7.3.2 PRIVATE -- 7.3.3 PARALLEL "FOR" -- 7.3.4 SECTIONS -- 7.4 SYNCHRONIZATION -- 7.4.1 CRITICAL SYNCHRONIZATION DIRECTIVE -- 7.4.2 THE MASTER DIRECTIVE -- 7.4.3 THE BARRIER DIRECTIVE -- 7.4.4 THE SINGLE DIRECTIVE -- 7.5 REDUCTION -- 7.6 SUMMARY AND OUTCOMES OF CHAPTER 7 -- 7.7 QUESTIONS AND PROBLEMS -- REFERENCE -- 8 - THE ESSENTIAL MPI
- 8.1 INTRODUCTION -- 8.2 MESSAGE-PASSING INTERFACE STANDARDS -- 8.3 MESSAGE-PASSING INTERFACE BASICS -- 8.3.1 MPI.H -- 8.3.2 MPI_INIT -- 8.3.3 MPI_FINALIZE -- 8.3.4 MESSAGE-PASSING INTERFACE EXAMPLE-HELLO WORLD -- 8.4 COMMUNICATORS -- 8.4.1 SIZE -- 8.4.2 RANK -- 8.4.3 EXAMPLE -- 8.5 POINT-TO-POINT MESSAGES -- 8.5.1 MPI SEND -- 8.5.2 MESSAGE-PASSING INTERFACE DATA TYPES -- 8.5.3 MPI RECV -- 8.5.4 EXAMPLE -- 8.6 SYNCHRONIZATION COLLECTIVES -- 8.6.1 OVERVIEW OF COLLECTIVE CALLS -- 8.6.2 BARRIER SYNCHRONIZATION -- 8.6.3 EXAMPLE -- 8.7 COMMUNICATION COLLECTIVES -- 8.7.1 COLLECTIVE DATA MOVEMENT -- 8.7.2 BROADCAST -- 8.7.3 SCATTER -- 8.7.4 GATHER -- 8.7.5 ALLGATHER -- 8.7.6 REDUCTION OPERATIONS -- 8.7.7 ALLTOALL -- 8.8 NONBLOCKING POINT-TO-POINT COMMUNICATION -- 8.9 USER-DEFINED DATA TYPES -- 8.10 SUMMARY AND OUTCOMES OF CHAPTER 8 -- 8.11 EXERCISES -- REFERENCES -- 9 - PARALLEL ALGORITHMS -- 9.1 INTRODUCTION -- 9.2 FORK-JOIN -- 9.3 DIVIDE AND CONQUER -- 9.4 MANAGER-WORKER -- 9.5 EMBARRASSINGLY PARALLEL -- 9.6 HALO EXCHANGE -- 9.6.1 THE ADVECTION EQUATION USING FINITE DIFFERENCE -- 9.6.2 SPARSE MATRIX VECTOR MULTIPLICATION -- 9.7 PERMUTATION: CANNON'S ALGORITHM -- 9.8 TASK DATAFLOW: BREADTH FIRST SEARCH -- 9.9 SUMMARY AND OUTCOMES OF CHAPTER 9 -- 9.10 EXERCISES -- REFERENCES -- 10 - LIBRARIES -- 10.1 INTRODUCTION -- 10.2 LINEAR ALGEBRA -- 10.2.1 BASIC LINEAR ALGEBRA SUBPROGRAMS -- 10.2.2 LINEAR ALGEBRA PACKAGE -- 10.2.3 SCALABLE LINEAR ALGEBRA PACKAGE -- 10.2.4 GNU SCIENTIFIC LIBRARY -- 10.2.5 SUPERNODAL LU -- 10.2.6 PORTABLE EXTENSIBLE TOOLKIT FOR SCIENTIFIC COMPUTATION -- 10.2.7 SCALABLE LIBRARY FOR EIGENVALUE PROBLEM COMPUTATIONS -- 10.2.8 EIGENVALUE SOLVERS FOR PETAFLOP-APPLICATIONS -- 10.2.9 HYPRE: SCALABLE LINEAR SOLVERS AND MULTIGRID METHODS -- 10.2.10 DOMAIN-SPECIFIC LANGUAGES FOR LINEAR ALGEBRA -- 10.3 PARTIAL DIFFERENTIAL EQUATIONS
- 10.4 GRAPH ALGORITHMS
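
As a taste of the technical material catalogued above, a few illustrative sketches follow; none are the book's own listings. Amdahl's law, covered in sections 2.7.2 and 6.3, bounds the speedup achievable from parallelism. In the standard formulation (the notation here is generic, not necessarily the book's), the speedup $S$ on $p$ processors of a workload whose fraction $f$ is parallelizable is

$$ S(p) = \frac{1}{(1 - f) + \frac{f}{p}} $$

For example, $f = 0.9$ and $p = 16$ give $S \approx 6.4$, far below the 16x peak, and even as $p \to \infty$ the speedup is capped at $1/(1-f) = 10$.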
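Chapter 7 covers OpenMP constructs such as the parallel "for" (7.3.3) and reduction (7.5). A minimal C sketch in that spirit, assuming a compiler with OpenMP support (e.g., `gcc -fopenmp`):

```c
#include <stdio.h>
#include <omp.h>

/* Illustrative sketch of "parallel for" with a reduction clause:
   sum the integers 0..N-1 across all available threads. */
int main(void) {
    const int N = 1000000;
    double sum = 0.0;

    /* Each thread sums a share of the iterations; the reduction
       clause combines the per-thread partial sums at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        sum += i;
    }

    printf("sum = %.0f (threads available: %d)\n",
           sum, omp_get_max_threads());
    return 0;
}
```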
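Section 8.3.4's hello-world example combines the pieces named in 8.3 and 8.4: mpi.h, MPI_Init, MPI_Finalize, and the communicator size and rank queries. A minimal sketch along those lines, assuming an MPI implementation such as MPICH or Open MPI (compile with `mpicc`, launch with `mpirun -np 4 ./hello`):

```c
#include <stdio.h>
#include <mpi.h>

/* Illustrative MPI hello world: every rank reports its identity. */
int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                  /* 8.3.2 MPI_Init */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* 8.4.2 rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* 8.4.1 size */
    printf("Hello world from rank %d of %d\n", rank, size);
    MPI_Finalize();                          /* 8.3.3 MPI_Finalize */
    return 0;
}
```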
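Similarly, the point-to-point material in section 8.5 (MPI_Send, MPI_Recv, MPI data types) can be illustrated with a two-rank exchange. The payload and tag below are arbitrary choices for the sketch, and the program assumes it is launched with at least two ranks:

```c
#include <stdio.h>
#include <mpi.h>

/* Illustrative point-to-point message: rank 0 sends one int to rank 1. */
int main(int argc, char *argv[]) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;   /* arbitrary payload; tag 0 below is also arbitrary */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
```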

