[Book cover of Principles of High-Performance Processor Design]
Junichiro Makino
Principles of High-Performance Processor Design
For High Performance Computing, Deep Neural Networks and Data Science
1st ed. 2021
Junichiro Makino
Kobe University, Kobe, Hyogo, Japan
ISBN 978-3-030-76870-6 e-ISBN 978-3-030-76871-3
https://doi.org/10.1007/978-3-030-76871-3
© Springer Nature Switzerland AG 2021
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG

The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

The future cannot be predicted, but futures can be invented.
Dennis Gabor

Preface

In this book, I tried to theorize what I have learned from my experience of developing special- and general-purpose processors for scientific computing. I started my research career as a graduate student in astrophysics, studying the dynamical evolution of globular clusters. The research tool was the N-body simulation, and it was (and still is) important to make simulations faster so that we can handle a larger number of stars. I used vector supercomputers such as the Hitac S-810, Fujitsu VP-400, NEC SX-2, Cray X-MP, Cyber 205, and ETA-10, and also tried parallel computers such as the TMC CM-2 and PAX. Around the end of my Ph.D. course, my supervisor, Daiichiro Sugimoto, started the GRAPE project to develop special-purpose computers for astrophysical N-body simulations, and I was deeply involved in the development of numerical algorithms, hardware, and software. The GRAPE project was a great success: its hardware achieved 10-100 times better price- and watt-performance than general-purpose computers of the same period and was used by many researchers. However, as semiconductor technology advanced into the deep-submicron range, the initial cost of developing ASICs became too high for special-purpose processors. In fact, it became too high for most general-purpose processors as well, and that is clearly why the development of parallel computers with custom processors, and then of almost all RISC processors, was terminated. Only the x86 processors from Intel and AMD survived. (Right now, though, we might be seeing a shift from x86 to Arm.) The x86 processors of the 2000s were not very efficient in their use of transistors or electricity. Nowadays, we have processors with very different architectures, such as GPGPUs and the Google TPU, which are certainly more efficient than general-purpose x86 or Arm processors, at least for a limited range of applications. I was also involved in the development of a programmable SIMD processor, GRAPE-DR, in the 2000s, and more recently in a processor for deep learning, MN-Core, which was ranked #1 on the June 2020 and June 2021 Green500 lists.

In this book, I discuss how we can make efficient processors for high-performance computing. I realized that we did not have a widely accepted definition of the efficiency of a general-purpose computer architecture. Therefore, in the first three chapters of this book, I tried to give one possible definition: the ratio between the minimum possible energy consumption and the actual energy consumption for a given application using a given semiconductor technology. In Chapter 4, I give an overview of past and present general-purpose processors from this viewpoint. In Chapter 5, I discuss how we can actually design processors with near-optimal efficiencies, and in Chapter 6, how we can program such processors. I hope this book will give a new perspective to the field of high-performance processor design.
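To make this definition concrete, here is one way to write it down (the symbols below are my own shorthand for this summary, not notation taken from the book): for a given application A running on a given semiconductor technology T, the efficiency of an architecture is

\[ \eta(A, T) \;=\; \frac{E_{\min}(A, T)}{E_{\mathrm{actual}}(A, T)}, \qquad 0 < \eta \le 1, \]

where E_min is the minimum energy the application could in principle consume on that technology and E_actual is the energy the architecture actually consumes. An architecture with η close to 1 wastes almost no energy beyond what the application and the technology inherently require.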

This book is the outcome of collaborations with many people in many projects throughout my research career. The following is an incomplete list of collaborators: Daiichiro Sugimoto, Toshikazu Ebisuzaki, Yoshiharu Chikada, Tomoyoshi Ito, Sachiko Okumura, Shigeru Ida, Toshiyuki Fukushige, Yoko Funato, Hiroshi Daisaka, and many others (GRAPE, GRAPE-DR, and related activities); Piet Hut, Steve McMillan, Simon Portegies Zwart, and many others (stellar dynamics and numerical methods); Kei Hiraki (GRAPE-DR and MN-Core); Ken Namura (GRAPE-6, GRAPE-DR, and MN-Core); Masaki Iwasawa, Ataru Tanikawa, Keigo Nitadori, Natsuki Hosono, Daisuke Namekata, and Kentaro Nomura (FDPS and related activities); Yutaka Ishikawa, Mitsuhisa Sato, Hirofumi Tomita, and many others (Fugaku development); Michiko Fujii, Takayuki Saito, Junko Kominami, and many others (stellar dynamics, galaxy formation, and planetary formation simulation on large-scale HPC platforms); Takayuki Muranushi and Youhei Ishihara (Formura DSL); many people from PFN (MN-Core); and many people from PEZY Computing and ExaScaler (PEZY-SC). I would like to thank all the people above. In addition, I'd like to thank Miyuki Tsubouchi, Yuko Wakamatsu, Yoshie Yamaguchi, Naoko Nakanishi, Yukiko Kimura, and Rika Ogawa for managing the projects I was involved in. I would also like to thank the folks at Springer for making this book a reality. Finally, I thank my family, and in particular my partner, Yoko, for her continuous support.

Junichiro Makino
Kobe, Japan
Acronyms
ASIC: Application-specific integrated circuit
B/F: Bytes per flop
BB: Broadcast block
BM: Broadcast memory
CISC: Complex instruction-set computer
CG: Conjugate gradient (method)
CNN: Convolutional neural network
CPE: Computing processing element of the Sunway SW26010
DCTL: Direct-coupled transistor logic
DEM: Distinct (or discrete) element method
DDM: Domain decomposition method
DDR: Double data rate (DRAM)
DMA: Direct memory access
DSL: Domain-specific language
EFGM: Element-free Galerkin method
FEM: Finite element method
FLOPS: Floating-point operations per second
FMA: Floating-point multiply and add
FMM: Fast multipole method
FPGA: Field-programmable gate array
FPU: Floating-point arithmetic unit
GaAs: Gallium arsenide
GPGPU: General-purpose computing on graphics processing units
