Junichiro Makino - For High Performance Computing, Deep Neural Networks and Data Science
- Book: For High Performance Computing, Deep Neural Networks and Data Science
- Author: Junichiro Makino
- Publisher: Springer International Publishing
- Year: 2021
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
"The future cannot be predicted, but futures can be invented." (Dennis Gabor)
In this book, I have tried to theorize what I learned from my experience of developing special- and general-purpose processors for scientific computing. I started my research career as a graduate student in astrophysics, studying the dynamical evolution of globular clusters. The research tool was the N-body simulation, and it was (and still is) important to make simulations faster so that we can handle larger numbers of stars. I used vector supercomputers such as the HITAC S-810, Fujitsu VP-400, NEC SX-2, Cray X-MP, CDC Cyber 205, and ETA-10, and also tried parallel computers such as the TMC CM-2 and PAX.

Around the end of my Ph.D. course, my supervisor, Daiichiro Sugimoto, started the GRAPE project to develop special-purpose computers for astrophysical N-body simulations, and I was deeply involved in the development of numerical algorithms, hardware, and software. The GRAPE project has been a great success: its hardware achieved 10–100 times better price performance and performance per watt than general-purpose computers of the same period, and it was used by many researchers. However, as semiconductor technology advanced into the deep-submicron range, the initial cost of developing ASICs became too high for special-purpose processors. In fact, it became too high even for most general-purpose processors, which is clearly why first the development of parallel computers with custom processors, and then the development of almost all RISC processors, was terminated. Only the x86 processors from Intel and AMD survived. (Right now, we might be seeing a shift from x86 to Arm, though.) The x86 processors of the 2000s were not very efficient in their use of transistors or electricity. Nowadays we have processors with very different architectures, such as GPGPUs and the Google TPU, which are certainly more efficient than general-purpose x86 or Arm processors, at least for a limited range of applications. I was also involved in the development of a programmable SIMD processor, GRAPE-DR, in the 2000s, and more recently in a processor for deep learning, MN-Core, which was ranked #1 on the June 2020 and June 2021 Green500 lists.
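To make concrete what GRAPE-class hardware accelerates, here is a minimal sketch of the direct-summation N-body kernel. This is the textbook O(N^2) algorithm, not the actual GRAPE pipeline; the function name, the softening parameter, and the G = 1 units are my assumptions for illustration.

```python
# Illustrative direct-summation gravitational N-body kernel: the O(N^2)
# pairwise loop that GRAPE-style accelerators speed up. This is a sketch,
# not the actual GRAPE pipeline; all names here are hypothetical.
import numpy as np

def accelerations(pos, mass, eps=1e-4):
    """Gravitational acceleration on each body, in G = 1 units.

    pos  -- (N, 3) array of positions
    mass -- (N,) array of masses
    eps  -- Plummer softening length, avoids singular forces at r = 0
    """
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        dr = pos - pos[i]                       # vectors from body i to every body j
        r2 = (dr * dr).sum(axis=1) + eps * eps  # softened squared distances
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                         # exclude self-interaction
        acc[i] = (mass[:, None] * inv_r3[:, None] * dr).sum(axis=0)
    return acc
```

Each of the N outer iterations touches all N bodies, so the arithmetic grows quadratically with the number of stars; this compute-dense, regular inner loop is why faster (and more specialized) hardware translates directly into larger simulations.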
In this book, I discuss how we can make efficient processors for high-performance computing. I realized that we did not have a widely accepted definition of the efficiency of a general-purpose computer architecture. Therefore, in the first three chapters of this book, I try to give one possible definition: the ratio between the minimum possible energy consumption and the actual energy consumption for a given application on a given semiconductor technology. In Chapter 4, I review general-purpose processors of the past and present from this viewpoint. In Chapter 5, I discuss how we can actually design processors with near-optimal efficiency, and in Chapter 6, how we can program such processors. I hope this book will give a new perspective to the field of high-performance processor design.
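In symbols, that definition can be written as follows (the notation is mine, paraphrasing the verbal definition above):

```latex
% Efficiency of processor P running application A on semiconductor technology T:
%   E_min    -- minimum possible energy to execute A on technology T
%   E_actual -- energy actually consumed by P running A
\eta(P, A, T) = \frac{E_{\min}(A, T)}{E_{\mathrm{actual}}(P, A, T)},
\qquad 0 < \eta \le 1 .
```

An ideal processor reaches eta = 1; loosely speaking, energy spent on instruction fetch, data movement, and other overheads beyond the computation itself pushes eta below 1.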
This book is the outcome of collaborations with many people in many projects throughout my research career. The following is an incomplete list of collaborators: Daiichiro Sugimoto, Toshikazu Ebisuzaki, Yoshiharu Chikada, Tomoyoshi Ito, Sachiko Okumura, Shigeru Ida, Toshiyuki Fukushige, Yoko Funato, Hiroshi Daisaka, and many others (GRAPE, GRAPE-DR, and related activities); Piet Hut, Steve McMillan, Simon Portegies Zwart, and many others (stellar dynamics and numerical methods); Kei Hiraki (GRAPE-DR and MN-Core); Ken Namura (GRAPE-6, GRAPE-DR, and MN-Core); Masaki Iwasawa, Ataru Tanikawa, Keigo Nitadori, Natsuki Hosono, Daisuke Namekata, and Kentaro Nomura (FDPS and related activities); Yutaka Ishikawa, Mitsuhisa Sato, Hirofumi Tomita, and many others (Fugaku development); Michiko Fujii, Takayuki Saito, Junko Kominami, and many others (stellar dynamics, galaxy formation, and planetary formation simulations on large-scale HPC platforms); Takayuki Muranushi and Youhei Ishihara (Formura DSL); many people from PFN (MN-Core); and many people from PEZY Computing and ExaScaler (PEZY-SC). I would like to thank all the people above. In addition, I'd like to thank Miyuki Tsubouchi, Yuko Wakamatsu, Yoshie Yamaguchi, Naoko Nakanishi, Yukiko Kimura, and Rika Ogawa for managing the projects I was involved in. I would also like to thank the folks at Springer for making this book a reality. Finally, I thank my family, and in particular my partner, Yoko, for her continuous support.
Abbreviations:

ASIC: Application-specific integrated circuit
B/F: Bytes per flop
BB: Broadcast block
BM: Broadcast memory
CISC: Complex instruction-set computer
CG: Conjugate gradient (method)
CNN: Convolutional neural network
CPE: Computing processing element of the Sunway SW26010
DCTL: Direct-coupled transistor logic
DEM: Distinct (or discrete) element method
DDM: Domain decomposition method
DDR: Double data rate (DRAM)
DMA: Direct memory access
DSL: Domain-specific language
EFGM: Element-free Galerkin method
FEM: Finite element method
FLOPS: Floating-point operations per second
FMA: Floating-point multiply and add
FMM: Fast multipole method
FPGA: Field-programmable gate array
FPU: Floating-point arithmetic unit
GaAs: Gallium arsenide
GPGPU: General-purpose computing on graphics processing units