Murty M N - Representation in Machine Learning


  • Book:
    Representation in Machine Learning
  • Author:
    M. N. Murty and M. Avinash
  • Publisher:
    Springer
  • Genre:
    Computer Science
  • Year:
    2023
  • City:
    Singapore

Representation in Machine Learning: description


This book provides a concise but comprehensive guide to representation, which forms the core of Machine Learning (ML). State-of-the-art practical applications involve a number of challenges for the analysis of high-dimensional data. Unfortunately, many popular ML algorithms fail to perform, in both theory and practice, when they are confronted with the huge size of the underlying data. Solutions to this problem are aptly covered in the book. In addition, the book covers a wide range of representation techniques that are important for academics and ML practitioners alike, such as Locality Sensitive Hashing (LSH), Distance Metrics and Fractional Norms, Principal Components (PCs), Random Projections, and Autoencoders. Several experimental results are provided in the book to demonstrate the effectiveness of the discussed techniques.

Book cover of Representation in Machine Learning
SpringerBriefs in Computer Science
Series Editors
Stan Zdonik
Brown University, Providence, RI, USA
Shashi Shekhar
University of Minnesota, Minneapolis, MN, USA
Xindong Wu
University of Vermont, Burlington, VT, USA
Lakhmi C. Jain
University of South Australia, Adelaide, SA, Australia
David Padua
University of Illinois Urbana-Champaign, Urbana, IL, USA
Xuemin Sherman Shen
University of Waterloo, Waterloo, ON, Canada
Borko Furht
Florida Atlantic University, Boca Raton, FL, USA
V. S. Subrahmanian
University of Maryland, College Park, MD, USA
Martial Hebert
Carnegie Mellon University, Pittsburgh, PA, USA
Katsushi Ikeuchi
University of Tokyo, Tokyo, Japan
Bruno Siciliano
Università di Napoli Federico II, Napoli, Italy
Sushil Jajodia
George Mason University, Fairfax, VA, USA
Newton Lee
Institute for Education, Research and Scholarships, Los Angeles, CA, USA

SpringerBriefs present concise summaries of cutting-edge research and practical applications across a wide spectrum of fields. Featuring compact volumes of 50 to 125 pages, the series covers a range of content from professional to academic.

Typical topics might include:

  • A timely report of state-of-the-art analytical techniques

  • A bridge between new research results, as published in journal articles, and a contextual literature review

  • A snapshot of a hot or emerging topic

  • An in-depth case study or clinical example

  • A presentation of core concepts that students must understand in order to make independent contributions

Briefs allow authors to present their ideas and readers to absorb them with minimal time investment. Briefs will be published as part of Springer's eBook collection, with millions of users worldwide. In addition, Briefs will be available for individual print and electronic purchase. Briefs are characterized by fast, global electronic dissemination, standard publishing contracts, easy-to-use manuscript preparation and formatting guidelines, and expedited production schedules. We aim for publication 8–12 weeks after acceptance. Both solicited and unsolicited manuscripts are considered for publication in this series.

**Indexing: This series is indexed in Scopus, Ei-Compendex, and zbMATH**

M. N. Murty and M. Avinash
Representation in Machine Learning
M. N. Murty
Department of CS and Automation, Indian Institute of Science Bangalore, Bangalore, India
M. Avinash
Indian Institute of Technology Madras, Chennai, India
ISSN 2191-5768 e-ISSN 2191-5776
SpringerBriefs in Computer Science
ISBN 978-981-19-7907-1 e-ISBN 978-981-19-7908-8
https://doi.org/10.1007/978-981-19-7908-8
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.

The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface
Overview

This book deals with the most important issue of representation in machine learning (ML). While learning class/cluster abstractions from the data using a machine, it is important to represent the data in a form suitable for effective and efficient machine learning. In this book, we propose to cover a wide variety of representation techniques that are important in both theory and practice.

In practical applications of current interest, the data is typically high-dimensional. These applications include image classification, information retrieval, problem solving in AI, biological and chemical structure analysis, and social network analysis. A major problem with such high-dimensional data analysis is that most of the popular tools, like the k-nearest-neighbor classifier, the decision-tree classifier, and several clustering algorithms that depend on inter-pattern distance computations, fail to work well. So, representing the data in a lower-dimensional space becomes inevitable.
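
The effect can be seen with a few lines of NumPy. The snippet below is a toy illustration of the distance-concentration phenomenon (our own sketch, not an experiment from the book): as the dimensionality grows, the nearest and the farthest neighbors of a query point end up at almost the same distance, which is why distance-based tools degrade.

```python
# Toy sketch (not from the book): the relative contrast between the farthest
# and nearest neighbor of a query point shrinks as dimensionality grows.
import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(dim, n_points=500):
    X = rng.random((n_points, dim))           # points drawn uniformly from [0, 1]^dim
    d = np.linalg.norm(X[1:] - X[0], axis=1)  # distances from one query point to the rest
    return (d.max() - d.min()) / d.min()      # relative spread of those distances

for dim in (2, 10, 100, 1000):
    print(f"dim={dim:5d}  relative contrast = {relative_contrast(dim):.3f}")
# The printed contrast shrinks steadily with dim, so "nearest" loses its meaning.
```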

Popularly used dimensionality reduction techniques may be categorized as follows (a short code sketch contrasting the two categories appears after this list):
  1. Feature selection schemes: Here an appropriate subset of the given feature set is identified and used in learning.

  2. Feature extraction schemes: Here linear or nonlinear combinations of the given features are used in learning.
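
To make the distinction concrete, the short sketch below contrasts the two categories; the use of scikit-learn and the wine dataset are our own illustrative choices, not the book's.

```python
# Illustrative sketch, assuming scikit-learn is installed; the dataset and
# estimators are our choices, not the book's.
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_wine(return_X_y=True)                    # 13 original chemical features

# Feature selection: keep the 5 original features scoring highest on an ANOVA F-test.
X_selected = SelectKBest(f_classif, k=5).fit_transform(X, y)

# Feature extraction: build 5 new features as linear combinations of all 13 (PCA).
X_extracted = PCA(n_components=5).fit_transform(X)

print(X.shape, X_selected.shape, X_extracted.shape)  # (178, 13) (178, 5) (178, 5)
```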

Some of the popular linear feature extractors are based on principal components, random projections, and nonnegative matrix factorization; we cover all of these techniques in the book. There are some misconceptions in the literature about representing the data using a subset of principal components. It is typically believed that the first few principal components are the right choice for classifying the data. We argue, and show practically in the book, that such a practice may not be correct.
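
The point can be illustrated with a toy construction of our own (not an experiment from the book): when the direction of largest variance is independent of the class label, the first principal component carries essentially no class information, while a later, low-variance component does.

```python
# Toy construction (ours, not the book's): the top principal component need not
# be the most discriminative one for classification.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                  # two class labels
f_noise = rng.normal(0.0, 10.0, n)         # high-variance feature, independent of the class
f_signal = y + rng.normal(0.0, 0.2, n)     # low-variance feature that separates the classes
X = np.column_stack([f_noise, f_signal])

Xc = X - X.mean(axis=0)                    # center the data
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                         # principal-component scores

for k in range(2):
    corr = np.corrcoef(scores[:, k], y)[0, 1]
    print(f"PC{k + 1}: |correlation with class label| = {abs(corr):.2f}")
# PC1 (largest variance) is nearly uncorrelated with the label;
# PC2 carries almost all of the class information.
```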

It is argued in the literature that deep learning tools are the ideal choices for nonlinear feature extraction and that they can learn the representations automatically. These tools include autoencoders and convolutional neural networks. We discuss these tools in the book. Further, we argue that it is difficult even for the deep learners to learn the representations automatically.
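
As a rough illustration of the autoencoder idea mentioned above, here is a minimal sketch assuming PyTorch is available; the layer sizes, data, and training loop are illustrative choices of ours, not the book's configuration.

```python
# Minimal autoencoder sketch (assumes PyTorch); the encoder output is the
# learned low-dimensional, nonlinear representation of the input.
import torch
from torch import nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=64, code_dim=8):
        super().__init__()
        # Encoder compresses the input into a short code (the representation).
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, code_dim))
        # Decoder tries to reconstruct the original input from that code.
        self.decoder = nn.Sequential(nn.Linear(code_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 64)                    # stand-in data; replace with real feature vectors

for _ in range(200):                       # minimize the reconstruction error
    reconstruction, code = model(x)
    loss = nn.functional.mse_loss(reconstruction, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# After training, `code` (the encoder output) serves as the learned representation.
```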

We present experimental results on some benchmark data sets to illustrate various ideas.

Audience

The coverage is meant for both students and teachers, and it helps practitioners implement ML algorithms. It is intended for senior undergraduate and graduate students and for researchers working in machine learning, data mining, and pattern recognition. The material is presented so that it is accessible to a wide variety of readers with some basic exposure to undergraduate-level mathematics. The presentation is intentionally kept simple so that the reader feels comfortable.

Organization