
Jiawei Jiang - Distributed Machine Learning and Gradient Optimization

Here you can read Jiawei Jiang's Distributed Machine Learning and Gradient Optimization online for free (the full text of the book), download it as PDF or EPUB, and find its description, cover, and reviews. Publisher: Springer Singapore; genre: Computer. The description of the work and its preface are available below. LitArk.com, a literature library created for fans of good reading, offers a wide selection of genres:

Romance novel Science fiction Adventure Detective Science History Home and family Prose Art Politics Computer Non-fiction Religion Business Children Humor

Choose a favorite category and find books that are really worth reading. Enjoy immersing yourself in the world of imagination, feel the emotions of the characters, or learn something new and make a fascinating discovery.

Jiawei Jiang - Distributed Machine Learning and Gradient Optimization
  • Book: Distributed Machine Learning and Gradient Optimization
  • Author: Jiawei Jiang
  • Publisher: Springer Singapore
  • Genre: Computer
  • Rating: 3 / 5

Distributed Machine Learning and Gradient Optimization: summary, description and annotation

Below you can read the annotation, description, summary, or preface (depending on what the author of "Distributed Machine Learning and Gradient Optimization" provided). If you haven't found the information you need about the book, ask in the comments and we will try to find it.

Jiawei Jiang: author's other books


Who wrote Distributed Machine Learning and Gradient Optimization? Find the author's full name and a list of all the author's works, organized by series.

Distributed Machine Learning and Gradient Optimization: read the complete book (full text) online for free

Below is the text of the book, divided into pages. The system saves the place of the last page read, so you can conveniently read "Distributed Machine Learning and Gradient Optimization" online for free without having to search for where you left off each time. Set a bookmark, and you can return to the page where you finished reading at any time.

Book cover of Distributed Machine Learning and Gradient Optimization
Big Data Management
Editor-in-Chief
Xiaofeng Meng
School of Information, Renmin University of China, Beijing, Beijing, China
Editorial Board
Daniel Dajun Zeng
University of Arizona, Tucson, AZ, USA
Hai Jin
School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, China
Haixun Wang
Facebook Research, USA
Huan Liu
Arizona State University, Tempe, AZ, USA
X. Sean Wang
Fudan University, Shanghai, China
Weiyi Meng
Binghamton University, Binghamton, NY, USA
Advisory Editors
Jiawei Han
Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL, USA
Masaru Kitsuregawa
National Institute of Informatics, University of Tokyo, Chiyoda, Tokyo, Japan
Philip S. Yu
University of Illinois at Chicago, Chicago, IL, USA
Tieniu Tan
Chinese Academy of Sciences, Beijing, China
Wen Gao
Room 2615, Science Buildings, Peking University, Beijing, China

The big data paradigm presents a number of challenges for university curricula on big data or data science related topics. On the one hand, new research, tools, and technologies are currently being developed to harness the increasingly large quantities of data being generated within our society. On the other hand, big data curricula at universities are still based on the computer science knowledge systems established in the 1960s and 70s. The gap between theory and application is widening, and as a result current education programs cannot meet the industry's demand for big data talent.

This series aims to refresh and complement the theory and knowledge framework for data management and analytics, reflect the latest research and applications in big data, and highlight key computational tools and techniques currently in development. Its goal is to publish a broad range of textbooks, research monographs, and edited volumes that will:
  • Present a systematic and comprehensive knowledge structure for big data and data science research and education

  • Supply lectures on big data and data science education with timely and practical reference materials to be used in courses

  • Provide introductory and advanced instructional and reference material for students and professionals in computational science and big data

  • Familiarize researchers with the latest discoveries and resources they need to advance the field

  • Offer assistance to interdisciplinary researchers and practitioners seeking to learn more about big data

The scope of the series includes, but is not limited to, titles in the areas of database management, data mining, data analytics, search engines, data integration, NLP, knowledge graphs, information retrieval, social networks, etc. Other relevant topics will also be considered.

More information about this series at https://link.springer.com/bookseries/15869

Jiawei Jiang, Bin Cui and Ce Zhang
Distributed Machine Learning and Gradient Optimization
Logo of the publisher
Jiawei Jiang
ETH Zurich, Zürich, Switzerland
Bin Cui
School of Electronics Engineering and Computer Science, Peking University, Beijing, China
Ce Zhang
Department of Computer Science, ETH Zurich, Zurich, Switzerland
ISSN 2522-0179 e-ISSN 2522-0187
Big Data Management
ISBN 978-981-16-3419-2 e-ISBN 978-981-16-3420-8
https://doi.org/10.1007/978-981-16-3420-8
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.

The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

In recent years, with the rapid development of technologies in many industrial applications, such as online shopping, social networks, intelligent healthcare, and the Internet of Things (IoT), we have witnessed an explosive increase in generated data. To mine the knowledge buried in raw data, machine learning has become the de facto technique for data analytics in both academia and industry, especially for unstructured data that is beyond direct human understanding. Training machine learning models on a single machine has been well studied, whether in a sequential way or in a multicore manner. However, in the data-intensive applications listed above, many real datasets reach hundreds of terabytes or even petabytes, far exceeding the computation and storage capacity of a single physical machine. Traditional stand-alone machine learning training methods have accordingly encountered great challenges. To meet the trend of big data and overcome the bottleneck of a stand-alone system, many researchers and practitioners resort to training machine learning models over a set of distributed machines, yielding a new research area in academia: distributed machine learning.

Specifically, a class of supervised machine learning models is often trained with iterative gradient-based optimization algorithms, such as stochastic gradient descent (SGD), owing to their convergence guarantees and ease of parallelization. These models include linear regression, logistic regression, support vector machines, gradient boosting decision trees, neural networks, and so on. For these models, the key to distributed training is to execute the gradient optimization algorithm in parallel rather than in its original sequential form. Although there is a rich literature on gradient optimization algorithms for distributed machine learning, there is a lack of work that presents a comprehensive overview of the concepts, principles, basic building blocks, methodology, taxonomy, and recent progress.
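
To make the parallelization idea concrete, the sketch below shows synchronous data-parallel SGD, the basic pattern the preface alludes to: the training data is partitioned across workers, each worker computes a stochastic gradient on a mini-batch from its own shard, the gradients are averaged (an AllReduce or parameter-server aggregation in a real system), and the averaged gradient drives a single update of the shared model. This is a minimal illustration in plain Python with NumPy; the function names, shard layout, and hyperparameters are assumptions for the example, not code from the book.

import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, X_shard, y_shard, batch_size=32):
    # Stochastic gradient of the least-squares loss on one worker's shard:
    # sample a mini-batch (Xb, yb) and return the gradient of
    # (1/2b) * ||Xb w - yb||^2 with respect to w.
    idx = rng.choice(len(y_shard), size=batch_size, replace=False)
    Xb, yb = X_shard[idx], y_shard[idx]
    return Xb.T @ (Xb @ w - yb) / batch_size

def parallel_sgd_step(w, shards, lr=0.1):
    # One synchronous step: every worker computes a local stochastic
    # gradient (concurrently on real hardware; a plain loop here), the
    # gradients are averaged, and one update is applied to the shared model.
    grads = [local_gradient(w, X, y) for X, y in shards]
    return w - lr * np.mean(grads, axis=0)

# Toy usage: linear regression on synthetic data split across 4 workers.
X = rng.normal(size=(400, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.01 * rng.normal(size=400)

shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
w = np.zeros(5)
for _ in range(300):
    w = parallel_sgd_step(w, shards)
print(np.linalg.norm(w - true_w))  # error should be close to zero

The synchronous average is what real systems implement with AllReduce or a parameter server; asynchronous variants drop the barrier between steps at the cost of applying stale gradients, a trade-off commonly studied in this literature.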


Similar books «Distributed Machine Learning and Gradient Optimization»

Take a look at books similar to Distributed Machine Learning and Gradient Optimization. We have selected literature similar in title and subject in the hope of giving readers more options for finding new, interesting works they have not yet read.


Reviews about «Distributed Machine Learning and Gradient Optimization»

Discussion and reviews of Distributed Machine Learning and Gradient Optimization, along with readers' own opinions. Leave a comment and write what you think about the work and its meaning. Specify exactly what you liked and what you didn't, and why you think so.