Programming Massively Parallel Processors: A Hands-on Approach
D**V
Five Stars
Bought for a programmer by trade so guessing it's good
A**R
Perfect CUDA Companion
If anyone has any interest in CUDA or GPU/parallel programming in general, this book is a must. The sub-title of the book is "A Hands-on Approach", and I didn't get until a third of the way through the book that that's exactly what it is. The pairing of Kirk, an NVIDIA Fellow, outgoing NVIDIA Chief Scientist, generally world-weary technologist and all-round 'hardware guru', with Hwu, a well-regarded educator and researcher at the University of Illinois, provides a practical but in-depth look at not just the pure 'programming' needed to deal with massively parallel processing. Instead it assumes the reader can work out, for instance, how to do matrix multiplication the 'basic' way from the NVIDIA CUDA APIs, and looks at how to take advantage of the hardware to give sometimes incredible speed increases. The only thing that would have made it better is an easier way to get the code samples online, instead of having to manually type in the code.
E**.
A fine book
Simply put, this book is an insight into the various optimization techniques required to really make use of the computing power available in an NVIDIA GPU. This is done first with a matrix multiplication example; the first draft is improved in several steps, each of them presenting a different CUDA concept or optimization technique. Two case studies (Advanced MRI Reconstruction, and Molecular Visualization and Analysis) introduce even more ways to improve an application written for CUDA. The examples provided in the book are very clear and easily applicable to real-life situations.
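The "first draft" of the matrix multiplication example that several reviews mention typically looks something like the following. This is a hedged sketch, not the book's exact listing: each thread computes one element of the product, and every operand is fetched from slow global memory, which is exactly the bottleneck the later optimization steps attack.

```cuda
// Naive CUDA matrix multiply: one thread per output element P[row][col].
// All reads of M and N go straight to global memory, so each input element
// is re-fetched Width times across the grid.
__global__ void MatrixMulNaive(const float *M, const float *N, float *P,
                               int Width) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < Width && col < Width) {
        float Pvalue = 0.0f;
        for (int k = 0; k < Width; ++k)
            Pvalue += M[row * Width + k] * N[k * Width + col];
        P[row * Width + col] = Pvalue;
    }
}
```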
F**S
Excellent Book
This book is really very good. It covers a very difficult topic and introduces what for many readers will be a completely new programming environment, thoroughly and clearly. It's also great fun to read and worth a look for the professional and the curious alike. Well recommended.
R**O
A good yet disappointing book
* The Good

The authors, who are clearly involved directly in GPU computing and not simply tech writers, explain CUDA programming with keen insight into the actual hardware mechanisms, and go into the details of the various (mostly memory) performance bottlenecks and ways to work around them. Code efficiency, scheduling choices and optimizations are all presented from an enlightening hardware-wise perspective, which the reader can readily adapt to her/his own scenarios: on this topic the book succeeds in making easier a subject that would be far harder to learn from the reference manuals provided by NVIDIA. Two application case studies in the middle of the book, taken directly from the authors' field experience, give a taste of what coding and porting to CUDA actually look like in real-life applications, with increasing levels of optimization along with detailed benchmarking.

* The Bad

The first negative aspect, which strikes you on opening the book, is the typographical sloppiness: the example code looks too tiny and dim, and the images look like cut-and-paste from the slides the authors admittedly started this book from. Nothing is unreadable, but for £31 I expected more care. The content accuracy doesn't fare much better: along with some errors (not many; you may refer to the book's homepage for the list) there are some "pearls", like the number of points assigned to the authors' students for solving the questions at the end of one of the chapters, which make you wonder whether the book was actually proof-read before being sent to print. An honorable mention also goes to the very interesting questions the authors added in the "Exercises" section at the end of each chapter, which could have contributed much to the reader's understanding, but whose answers are nowhere to be found... In two words: amateur hour.

It isn't, however, in the presentation that this title disappoints the most: regardless of the (over-hyped) claims of "completeness and awesomeness" on the covers, the content is unacceptably outdated for a text published in 2010. There is no mention of native multiple-GPU support or stream processing (made worse by a section with a custom solution to exploit such a configuration, now outdated and misleading), nothing on texture memory, and the Fermi architecture is treated as a "future outlook". The book is also over-ambitious: despite the effort to be both an introduction and a guide to parallel programming and thinking, and despite two slim chapters on floating-point numbers and OpenCL (which are good but leave more to be wanted), this title can't be considered more than a guide that helps you better understand the CUDA architecture, and a valuable introduction to optimization (which isn't a bad thing in itself).

* Conclusions

A book for people who are already familiar with CUDA computing and want to get serious about performance and optimization: although it will probably fall short of becoming a classic, it is still a good introduction to the hardware and an interesting compendium of optimization techniques. Others might want to wait for a second edition.
S**D
Five Stars
Nice
A**O
Excellent
A perfect reference book for anyone who wants to approach parallel programming: complete and exhaustive. A classic that still holds up (for now).
S**Y
great way to learn cuda
One of the problems with many parallel programming books is that they take too general an approach, which can leave the reader to figure out how to implement the ideas using the library of his/her choosing. There's certainly a place for such a book in the world, but not if you want to get up and running quickly.

Programming Massively Parallel Processors presents parallel programming from the perspective of someone programming an NVIDIA GPU using their CUDA platform. From matrix multiplication (the "hello world" of the parallel computing world) to fine-tuned optimization, this book walks the reader step by step through not only how to do it, but how to think about it for any arbitrary problem.

The introduction mentions that this book does not require a background in computer architecture or C/C++ programming experience, and while that's largely true, I think it would be extremely helpful to come to a topic like this with at least some exposure in those areas.

Summary: this book is the best reference I've found for learning parallel programming "the CUDA way". Many of the concepts will carry over to other approaches (OpenMP, MPI, etc.), but this is by and large a CUDA book. Highly recommended.
D**R
Solid Introduction
The book grew out of several lectures/courses the authors gave on CUDA. The authors don't use the usual copy-and-paste method from the SDK documentation; instead they give the reader the classic advice RTFM and concentrate on the conceptual side. Using a matrix multiplication, they show step by step how to squeeze maximum performance out of a GPU. The direct translation of the problem into CUDA is very simple, but this naive approach is throttled by the latency and bandwidth of the graphics card's global memory. This is a classic problem in practically all massively parallel shared-memory techniques (with distributed memory, communication is the bottleneck instead). The authors show how various tricks reduce global memory accesses and make better use of local memory, and they go into detail about the speedup achievable as a result. The individual steps are pedagogically very well constructed; you get a good feel for the strengths and weaknesses of the GPU.

I have already built an HPC (high-performance computing) application with FPGAs. The FPGA community had hoped to grab a slice of this lucrative market. For purely numerical (floating-point) HPC applications, those plans died, in my view, with the arrival of CUDA. Even with CUDA you have to put in quite some brainpower to implement an algorithm effectively, but compared to the effort of an FPGA implementation that is still nothing. Price-wise, too, there are worlds between HPC FPGA cards and graphics cards. Nor do I know of any introduction to HPC computing with FPGAs comparable to this book. The matter was decided in the kids' bedrooms.

I have in mind porting a financial Monte Carlo simulation to CUDA. My problem, however, is that the simulation is fast enough even on a Pentium. I'll probably have to make the model more complex so I can play with CUDA with a clear conscience. It has never been so easy to write a massively parallel application. But it isn't too easy either.
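The "tricks to reduce global memory access and better exploit local memory" that this review describes are centred on shared-memory tiling. A hedged, illustrative sketch (not the book's verbatim listing, and assuming square matrices whose width is a multiple of TILE_WIDTH): each thread block stages two sub-matrices in on-chip shared memory, so each global value is loaded once per tile instead of once per thread.

```cuda
#define TILE_WIDTH 16

// Tiled CUDA matrix multiply: each block cooperatively loads one
// TILE_WIDTH x TILE_WIDTH tile of M and of N into fast shared memory,
// then all threads in the block compute partial dot products from the
// staged tiles. Global memory traffic drops by a factor of TILE_WIDTH
// compared with the naive kernel.
__global__ void MatrixMulTiled(const float *M, const float *N, float *P,
                               int Width) {
    __shared__ float Ms[TILE_WIDTH][TILE_WIDTH];
    __shared__ float Ns[TILE_WIDTH][TILE_WIDTH];

    int row = blockIdx.y * TILE_WIDTH + threadIdx.y;
    int col = blockIdx.x * TILE_WIDTH + threadIdx.x;
    float Pvalue = 0.0f;

    // March the tile pair across the shared dimension of M and N.
    for (int ph = 0; ph < Width / TILE_WIDTH; ++ph) {
        Ms[threadIdx.y][threadIdx.x] =
            M[row * Width + ph * TILE_WIDTH + threadIdx.x];
        Ns[threadIdx.y][threadIdx.x] =
            N[(ph * TILE_WIDTH + threadIdx.y) * Width + col];
        __syncthreads();  // wait until the whole tile is staged

        for (int k = 0; k < TILE_WIDTH; ++k)
            Pvalue += Ms[threadIdx.y][k] * Ns[k][threadIdx.x];
        __syncthreads();  // wait before overwriting the tiles
    }
    P[row * Width + col] = Pvalue;
}
```

The two `__syncthreads()` barriers are essential: the first prevents any thread from reading a tile before its neighbours have finished writing it, and the second prevents the next iteration from overwriting data still being read.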
S**V
a little odd but good enough for first pass
This book is a much better introduction to programming GPUs via CUDA than the CUDA manual or some presentation floating around the web. It is a little odd in coverage and language; you can tell it is written by two people with different command of English as well as different passions. One co-author seems to be trying very hard to be colorful, looking for idiot-proof analogies, but is prone to repetition. The other co-author sometimes sounds like a dry marketing droid. There are some mistakes in the code in the book, but not too many, since the authors don't dwell too long on code listings. In terms of coverage, I wish they'd covered texture memory, profiling tools, examples beyond simple matrix multiplication, and advice on computational thinking for code with random access patterns. Chapters 6, 8, 9, and 10 are worth reading several times, as they are full of practical tricks for trading one performance limiter for another in the quest for higher performance.