Professional CUDA® C programming

Professional CUDA C Programming provides down-to-earth coverage of the complex topic of parallel computing, a topic increasingly essential in everyday computing. This entry-level programming book for professionals turns complex subjects into easy-to-comprehend concepts and easy-to-follow steps.

Saved in:
Bibliographic Details
Main Authors: Cheng, John; Grossman, Max; McKercher, Ty
Format: eBook, Book
Language: English
Published: Hoboken: Wiley, 2014
Wrox
John Wiley & Sons, Incorporated
Wiley-Blackwell
Wrox, John Wiley & Sons, Inc
Edition: 1
Series: Wrox programmer to programmer
Subjects:
ISBN:9781118739273, 9781118739327, 1118739272, 9781118739310, 1118739329, 1118739310
Online Access: Full text
Abstract Professional CUDA C Programming provides down-to-earth coverage of the complex topic of parallel computing, a topic increasingly essential in everyday computing. This entry-level programming book for professionals turns complex subjects into easy-to-comprehend concepts and easy-to-follow steps.
AbstractList Break into the powerful world of parallel GPU programming with this down-to-earth, practical guide. Designed for professionals across multiple industrial sectors, Professional CUDA C Programming presents CUDA -- a parallel computing platform and programming model designed to ease the development of GPU programming -- fundamentals in an easy-to-follow format, and teaches readers how to think in parallel and implement parallel algorithms on GPUs. Each chapter covers a specific topic, and includes workable examples that demonstrate the development process, allowing readers to explore both the "hard" and "soft" aspects of GPU programming. Computing architectures are experiencing a fundamental shift toward scalable parallel computing motivated by application requirements in industry and science. This book demonstrates the challenges of efficiently utilizing compute resources at peak performance, presents modern techniques for tackling these challenges, and increases accessibility for professionals who are not necessarily parallel programming experts. The CUDA programming model and tools empower developers to write high-performance applications on a scalable, parallel computing platform: the GPU. However, CUDA itself can be difficult to learn without extensive programming experience. Recognized CUDA authorities John Cheng, Max Grossman, and Ty McKercher guide readers through essential GPU programming skills and best practices in Professional CUDA C Programming, including: * CUDA Programming Model * GPU Execution Model * GPU Memory Model * Streams, Events, and Concurrency * Multi-GPU Programming * CUDA Domain-Specific Libraries * Profiling and Performance Tuning The book makes complex CUDA concepts easy to understand for anyone with knowledge of basic software development, with exercises designed to be both readable and high-performance. For the professional seeking an entrance to parallel computing and the high-performance computing community, Professional CUDA C Programming is an invaluable resource, with the most current information available on the market.
This book presents CUDA, a parallel computing platform and programming model designed to ease the development of GPU programming. It demonstrates the challenges of efficiently utilizing compute resources at peak performance, presents modern techniques for tackling these challenges, and increases accessibility for professionals who are not necessarily parallel programming experts. The CUDA programming model and tools empower developers to write high-performance applications on a scalable, parallel computing platform: the GPU. Topics include: CUDA programming model; GPU execution model; GPU memory model; streams, events, and concurrency; multi-GPU programming; CUDA domain-specific libraries; profiling and performance tuning.
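To make the "CUDA Programming Model" topic named in the abstract concrete, here is a minimal illustrative sketch (not taken from the book; the file name hello.cu, the kernel name, and the launch dimensions are arbitrary choices for this example): a __global__ function is compiled for the GPU and launched from host code with the <<<grid, block>>> execution-configuration syntax.

#include <cstdio>
#include <cuda_runtime.h>

// Kernel: runs on the GPU; each thread prints its global thread index.
__global__ void helloFromGPU(void)
{
    printf("Hello from GPU thread %d\n", blockIdx.x * blockDim.x + threadIdx.x);
}

int main(void)
{
    helloFromGPU<<<1, 8>>>();    // launch one block of 8 threads
    cudaDeviceSynchronize();     // wait for the kernel so its output is flushed
    return 0;
}

Compiled with, for example, nvcc hello.cu -o hello, this prints one line per GPU thread.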
Author McKercher, Ty
Cheng, John
Grossman, Max
Author_xml – sequence: 1
  fullname: Cheng, John
– sequence: 2
  fullname: Grossman, Max
– sequence: 3
  fullname: McKercher, Ty
BackLink https://cir.nii.ac.jp/crid/1130000795643622272 (View record in CiNii)
ContentType eBook
Book
DEWEY 005.275
DatabaseName Wiley
CiNii Complete
Perlego
O'Reilly Online Learning: Corporate Edition
O'Reilly Online Learning: Academic/Public Library Edition
DatabaseTitleList
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISBN 1118739272
9781118739273
9781118739327
1118739329
9781118739310
1118739310
Edition 1
Editor Zhang, Wei
Zhao, Chao
Editor_xml – sequence: 1
  fullname: Zhang, Wei
– sequence: 2
  fullname: Zhao, Chao
ExternalDocumentID bks00063826
9781118739310
9781118739273
EBC1776323
2766495
BB17311505
WILEYB0006937
Genre Electronic books
ISBN 9781118739273
9781118739327
1118739272
9781118739310
1118739329
1118739310
IngestDate Thu Oct 05 03:29:46 EDT 2023
Fri Mar 28 10:36:51 EDT 2025
Fri Nov 08 01:59:44 EST 2024
Fri Dec 05 22:00:08 EST 2025
Wed Nov 26 06:03:49 EST 2025
Tue Dec 02 18:05:52 EST 2025
Thu Jun 26 21:04:46 EDT 2025
Tue Oct 07 10:44:04 EDT 2025
IsPeerReviewed false
IsScholarly false
LCCN 2014937184
LCCallNum QA76.9.A73
LCCallNum_Ident QA76.9.A73
Language English
LinkModel OpenURL
Notes Includes bibliographical references (p. [477]-480) and index
OCLC 890072056
PQID EBC1776323
PageCount 527
ParticipantIDs skillsoft_books24x7_bks00063826
askewsholts_vlebooks_9781118739310
askewsholts_vlebooks_9781118739273
safari_books_v2_9781118739310
proquest_ebookcentral_EBC1776323
perlego_books_2766495
nii_cinii_1130000795643622272
igpublishing_primary_WILEYB0006937
ProviderPackageCode J-X
PublicationCentury 2000
PublicationDate 2014.
c2014
2014
2014-09-09T00:00:00
2014-08-28
2014-09-08
[2014]
©2014
PublicationDateYYYYMMDD 2014-01-01
2014-09-09
2014-08-28
2014-09-08
PublicationDate_xml – year: 2014
  text: 2014
PublicationDecade 2010
PublicationPlace Hoboken
PublicationPlace_xml – name: Hoboken
– name: Indianapolis
– name: Newark
– name: Indianapolis, Indiana
PublicationSeriesTitle Wrox programmer to programmer
PublicationYear 2014
Publisher Wiley
Wrox
John Wiley & Sons, Incorporated
Wiley-Blackwell
Wrox, John Wiley & Sons, Inc
Publisher_xml – name: Wiley
– name: Wrox
– name: John Wiley & Sons, Incorporated
– name: Wiley-Blackwell
– name: Wrox, John Wiley & Sons, Inc
SSID ssj0001414088
Score 1.9233336
Snippet Professional CUDA C Programming provides down-to-earth coverage of the complex topic of parallel computing, a topic increasingly essential in everyday...
Break into the powerful world of parallel GPU programming with this down-to-earth, practical guide Designed for professionals across multiple industrial...
This book presents CUDA, a parallel computing platform and programming model designed to ease the development of GPU programming. It demonstrates the...
SourceID skillsoft
askewsholts
safari
proquest
perlego
nii
igpublishing
SourceType Aggregation Database
Publisher
SubjectTerms Application software
Application software -- Development
COMPUTERS
CUDA (Computer architecture)
Graphics processing units
Parallel
Parallel processing (Electronic computers)
Parallel programming (Computer science)
Programming
SubjectTermsDisplay COMPUTERS
CUDA (Computer architecture)
Electronic books.
Parallel
Parallel programming (Computer science)
Programming
TableOfContents Professional CUDA® C programming -- Credits -- About the Authors -- About the Technical Editors -- Acknowledgments -- Contents -- Chapter 1: Heterogeneous Parallel Computing with CUDA -- Chapter 2: CUDA Programming Model -- Chapter 3: CUDA Execution Model -- Chapter 4: Global Memory -- Chapter 5: Shared Memory and Constant Memory -- Chapter 6: Streams and Concurrency -- Chapter 7: Tuning Instruction-Level Primitives -- Chapter 8: GPU-Accelerated CUDA Libraries and OpenACC -- Chapter 9: Multi-GPU Programming -- Chapter 10: Implementation Considerations -- Appendix: Suggested Readings -- Index.
Cover -- Title Page -- Copyright -- Contents -- Chapter 1 Heterogeneous Parallel Computing with CUDA -- Parallel Computing -- Sequential and Parallel Programming -- Parallelism -- Computer Architecture -- Heterogeneous Computing -- Heterogeneous Architecture -- Paradigm of Heterogeneous Computing -- CUDA: A Platform for Heterogeneous Computing -- Hello World from GPU -- Is CUDA C Programming Difficult? -- Summary -- Chapter 2 CUDA Programming Model -- Introducing the CUDA Programming Model -- CUDA Programming Structure -- Managing Memory -- Organizing Threads -- Launching a CUDA Kernel -- Writing Your Kernel -- Verifying Your Kernel -- Handling Errors -- Compiling and Executing -- Timing Your Kernel -- Timing with CPU Timer -- Timing with nvprof -- Organizing Parallel Threads -- Indexing Matrices with Blocks and Threads -- Summing Matrices with a 2D Grid and 2D Blocks -- Summing Matrices with a 1D Grid and 1D Blocks -- Summing Matrices with a 2D Grid and 1D Blocks -- Managing Devices -- Using the Runtime API to Query GPU Information -- Determining the Best GPU -- Using nvidia-smi to Query GPU Information -- Setting Devices at Runtime -- Summary -- Chapter 3 CUDA Execution Model -- Introducing the CUDA Execution Model -- GPU Architecture Overview -- The Fermi Architecture -- The Kepler Architecture -- Profile-Driven Optimization -- Understanding the Nature of Warp Execution -- Warps and Thread Blocks -- Warp Divergence -- Resource Partitioning -- Latency Hiding -- Occupancy -- Synchronization -- Scalability -- Exposing Parallelism -- Checking Active Warps with nvprof -- Checking Memory Operations with nvprof -- Exposing More Parallelism -- Avoiding Branch Divergence -- The Parallel Reduction Problem -- Divergence in Parallel Reduction -- Improving Divergence in Parallel Reduction -- Reducing with Interleaved Pairs -- Unrolling Loops
Reducing with Unrolling -- Reducing with Unrolled Warps -- Reducing with Complete Unrolling -- Reducing with Template Functions -- Dynamic Parallelism -- Nested Execution -- Nested Hello World on the GPU -- Nested Reduction -- Summary -- Chapter 4 Global Memory -- Introducing the CUDA Memory Model -- Benefits of a Memory Hierarchy -- CUDA Memory Model -- Memory Management -- Memory Allocation and Deallocation -- Memory Transfer -- Pinned Memory -- Zero-Copy Memory -- Unified Virtual Addressing -- Unified Memory -- Memory Access Patterns -- Aligned and Coalesced Access -- Global Memory Reads -- Global Memory Writes -- Array of Structures versus Structure of Arrays -- Performance Tuning -- What Bandwidth Can a Kernel Achieve? -- Memory Bandwidth -- Matrix Transpose Problem -- Matrix Addition with Unified Memory -- Summary -- Chapter 5 Shared Memory and Constant Memory -- Introducing CUDA Shared Memory -- Shared Memory -- Shared Memory Allocation -- Shared Memory Banks and Access Mode -- Configuring the Amount of Shared Memory -- Synchronization -- Checking the Data Layout of Shared Memory -- Square Shared Memory -- Rectangular Shared Memory -- Reducing Global Memory Access -- Parallel Reduction with Shared Memory -- Parallel Reduction with Unrolling -- Parallel Reduction with Dynamic Shared Memory -- Effective Bandwidth -- Coalescing Global Memory Accesses -- Baseline Transpose Kernel -- Matrix Transpose with Shared Memory -- Matrix Transpose with Padded Shared Memory -- Matrix Transpose with Unrolling -- Exposing More Parallelism -- Constant Memory -- Implementing a 1D Stencil with Constant Memory -- Comparing with the Read-Only Cache -- The Warp Shuffle Instruction -- Variants of the Warp Shuffle Instruction -- Sharing Data within a Warp -- Parallel Reduction Using the Warp Shuffle Instruction -- Summary -- Chapter 6 Streams and Concurrency
Introducing Streams and Events -- CUDA Streams -- Stream Scheduling -- Stream Priorities -- CUDA Events -- Stream Synchronization -- Concurrent Kernel Execution -- Concurrent Kernels in Non-NULL Streams -- False Dependencies on Fermi GPUs -- Dispatching Operations with OpenMP -- Adjusting Stream Behavior Using Environment Variables -- Concurrency-Limiting GPU Resources -- Blocking Behavior of the Default Stream -- Creating Inter-Stream Dependencies -- Overlapping Kernel Execution and Data Transfer -- Overlap Using Depth-First Scheduling -- Overlap Using Breadth-First Scheduling -- Overlapping GPU and CPU Execution -- Stream Callbacks -- Summary -- Chapter 7 Tuning Instruction-Level Primitives -- Introducing CUDA Instructions -- Floating-Point Instructions -- Intrinsic and Standard Functions -- Atomic Instructions -- Optimizing Instructions for Your Application -- Single-Precision vs. Double-Precision -- Standard vs. Intrinsic Functions -- Understanding Atomic Instructions -- Bringing It All Together -- Summary -- Chapter 8 GPU-Accelerated CUDA Libraries and OpenACC -- Introducing the CUDA Libraries -- Supported Domains for CUDA Libraries -- A Common Library Workflow -- The CUSPARSE Library -- cuSPARSE Data Storage Formats -- Formatting Conversion with cuSPARSE -- Demonstrating cuSPARSE -- Important Topics in cuSPARSE Development -- cuSPARSE Summary -- The cuBLAS Library -- Managing cuBLAS Data -- Demonstrating cuBLAS -- Important Topics in cuBLAS Development -- cuBLAS Summary -- The cuFFT Library -- Using the cuFFT API -- Demonstrating cuFFT -- cuFFT Summary -- The cuRAND Library -- Choosing Pseudo- or Quasi- Random Numbers -- Overview of the cuRAND Library -- Demonstrating cuRAND -- Important Topics in cuRAND Development -- CUDA Library Features Introduced in CUDA 6 -- Drop-In CUDA Libraries -- Multi-GPU Libraries
A Survey of CUDA Library Performance -- cuSPARSE versus MKL -- cuBLAS versus MKL BLAS -- cuFFT versus FFTW versus MKL -- CUDA Library Performance Summary -- Using OpenACC -- Using OpenACC Compute Directives -- Using OpenACC Data Directives -- The OpenACC Runtime API -- Combining OpenACC and the CUDA Libraries -- Summary of OpenACC -- Summary -- Chapter 9 Multi-GPU Programming -- Moving to Multiple GPUs -- Executing on Multiple GPUs -- Peer-to-Peer Communication -- Synchronizing across Multi-GPUs -- Subdividing Computation across Multiple GPUs -- Allocating Memory on Multiple Devices -- Distributing Work from a Single Host Thread -- Compiling and Executing -- Peer-to-Peer Communication on Multiple GPUs -- Enabling Peer-to-Peer Access -- Peer-to-Peer Memory Copy -- Peer-to-Peer Memory Access with Unified Virtual Addressing -- Finite Difference on Multi-GPU -- Stencil Calculation for 2D Wave Equation -- Typical Patterns for Multi-GPU Programs -- 2D Stencil Computation with Multiple GPUs -- Overlapping Computation and Communication -- Compiling and Executing -- Scaling Applications across GPU Clusters -- CPU-to-CPU Data Transfer -- GPU-to-GPU Data Transfer Using Traditional MPI -- GPU-to-GPU Data Transfer with CUDA-aware MPI -- Intra-Node GPU-to-GPU Data Transfer with CUDA-Aware MPI -- Adjusting Message Chunk Size -- GPU to GPU Data Transfer with GPUDirect RDMA -- Summary -- Chapter 10 Implementation Considerations -- The CUDA C Development Process -- APOD Development Cycle -- Optimization Opportunities -- CUDA Code Compilation -- CUDA Error Handling -- Profile-Driven Optimization -- Finding Optimization Opportunities Using nvprof -- Guiding Optimization Using nvvp -- NVIDIA Tools Extension -- CUDA Debugging -- Kernel Debugging -- Memory Debugging -- Debugging Summary -- A Case Study in Porting C Programs to CUDA C -- Assessing crypt
Parallelizing crypt -- Optimizing crypt -- Deploying Crypt -- Summary of Porting crypt -- Summary -- Appendix: Suggested Readings -- Index -- Advertisement -- EULA
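The chapter list above names entries such as "Indexing Matrices with Blocks and Threads" and "Summing Matrices with a 2D Grid and 2D Blocks". As an illustration of that kind of material, the following is a self-contained sketch (not a listing from the book; the kernel name, matrix size, and block dimensions are arbitrary): each GPU thread computes one element of an element-wise matrix sum.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative kernel: each thread adds one element of two nx-by-ny matrices
// stored in row-major order.
__global__ void sumMatrix2D(const float *A, const float *B, float *C, int nx, int ny)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;   // column index
    int iy = blockIdx.y * blockDim.y + threadIdx.y;   // row index
    if (ix < nx && iy < ny) {
        int idx = iy * nx + ix;                       // linear offset
        C[idx] = A[idx] + B[idx];
    }
}

int main(void)
{
    const int nx = 1024, ny = 1024;
    size_t bytes = (size_t)nx * ny * sizeof(float);

    // Allocate and initialize host matrices.
    float *h_A = (float *)malloc(bytes);
    float *h_B = (float *)malloc(bytes);
    float *h_C = (float *)malloc(bytes);
    for (int i = 0; i < nx * ny; i++) { h_A[i] = 1.0f; h_B[i] = 2.0f; }

    // Allocate device memory and copy the inputs to the GPU.
    float *d_A, *d_B, *d_C;
    cudaMalloc((void **)&d_A, bytes);
    cudaMalloc((void **)&d_B, bytes);
    cudaMalloc((void **)&d_C, bytes);
    cudaMemcpy(d_A, h_A, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, bytes, cudaMemcpyHostToDevice);

    // A 2D grid of 2D blocks covers the whole matrix.
    dim3 block(32, 32);
    dim3 grid((nx + block.x - 1) / block.x, (ny + block.y - 1) / block.y);
    sumMatrix2D<<<grid, block>>>(d_A, d_B, d_C, nx, ny);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check one element.
    cudaMemcpy(h_C, d_C, bytes, cudaMemcpyDeviceToHost);
    printf("C[0] = %f (expected 3.0)\n", h_C[0]);

    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
    free(h_A); free(h_B); free(h_C);
    return 0;
}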
Title Professional CUDA® C programming
URI http://portal.igpublish.com/iglibrary/search/WILEYB0006937.html
https://cir.nii.ac.jp/crid/1130000795643622272
https://www.perlego.com/book/2766495/professional-cuda-c-programming-pdf
https://ebookcentral.proquest.com/lib/[SITE_ID]/detail.action?docID=1776323
https://learning.oreilly.com/library/view/~/9781118739310/?ar
https://www.vlebooks.com/vleweb/product/openreader?id=none&isbn=9781118739273&uid=none
https://www.vlebooks.com/vleweb/product/openreader?id=none&isbn=9781118739310
http://www.books24x7.com/marc.asp?bookid=63826
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
linkProvider ProQuest Ebooks