
Strengthening Deep Neural Networks: Making AI Less Susceptible to Adversarial Trickery
  • Author: Katy Warr
  • Publisher: O'Reilly Media
  • Year: 2019
  • City: Sebastopol

Summary

As Deep Neural Networks (DNNs) become increasingly common in real-world applications, the potential to fool them presents a new attack vector. In this book, author Katy Warr examines the security implications of how DNNs interpret audio and images very differently from humans. You'll learn about the motivations attackers have for exploiting flaws in DNN algorithms and how to assess the threat to systems incorporating neural network technology. Through practical code examples, this book shows you how DNNs can be fooled and demonstrates the ways they can be hardened against trickery.

  • Learn the basic principles of how DNNs "think" and why this differs from our human understanding of the world
  • Understand adversarial motivations for fooling DNNs and the threat posed to real-world systems
  • Explore approaches for making software systems that incorporate DNNs less susceptible to trickery
  • Peer into the future of artificial neural networks to learn how these algorithms may evolve to become more robust

Strengthening Deep Neural Networks

by Katy Warr

Copyright © 2019 Katy Warr. All rights reserved.

Printed in the United States of America.

Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.

  • Acquisitions Editor: Jonathan Hassell
  • Development Editor: Michele Cronin
  • Production Editor: Deborah Baker
  • Copy Editor: Sonia Saruba
  • Proofreader: Rachel Head
  • Indexer: WordCo Indexing Services
  • Interior Designer: David Futato
  • Cover Designer: Karen Montgomery
  • Illustrator: Rebecca Demarest
  • July 2019: First Edition
Revision History for the First Edition
  • 2019-07-02: First Release

See http://oreilly.com/catalog/errata.csp?isbn=9781492044956 for release details.

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Strengthening Deep Neural Networks, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.

While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.

978-1-492-04495-6

[GP]

Preface

Artificial intelligence (AI) is prevalent in our lives. Every day, machines make sense of complex data: surveillance systems perform facial recognition, digital assistants comprehend spoken language, and autonomous vehicles and robots are able to navigate the messy and unconstrained physical world. AI not only competes with human capabilities in areas such as image, audio, and text processing, but often exceeds human accuracy and speed.

While we celebrate advancements in AI, deep neural networks (DNNs), the algorithms intrinsic to much of AI, have recently been shown to be at risk from attack through seemingly benign inputs. It is possible to fool DNNs by making subtle alterations to input data that often either remain undetected or are overlooked if presented to a human. For example, alterations to images that are so small as to remain unnoticed by humans can cause DNNs to misinterpret the image content. As many AI systems take their input from external sources (voice recognition devices or social media uploads, for example), this ability to be tricked by adversarial input opens up a new, often intriguing, security threat. This book is about this threat, what it tells us about DNNs, and how we can subsequently make AI more resilient to attack.
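To make the idea concrete, here is a minimal, self-contained sketch (illustrative only, and not taken from the book's code repository): a toy linear classifier whose decision is flipped by nudging every input feature slightly in the gradient direction, the same principle that underlies gradient-based adversarial attacks on real DNNs.

```python
# Minimal sketch: flip a toy linear classifier's decision with a small,
# gradient-aligned perturbation. Synthetic data; requires only NumPy.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
w = rng.normal(size=n)   # weights of a toy linear classifier
x = rng.normal(size=n)   # a legitimate input, e.g. a flattened image

def predict(v: np.ndarray) -> int:
    """Classify as 1 if the score is positive, otherwise 0."""
    return int(v @ w > 0)

score = x @ w
# The gradient of the score with respect to the input is simply w, so the
# most efficient small step toward the decision boundary follows sign(w).
direction = -np.sign(w) if score > 0 else np.sign(w)
epsilon = abs(score) / np.abs(w).sum() * 1.01  # just enough to cross the boundary
x_adv = x + epsilon * direction

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))  # flipped
print("per-feature change:    ", epsilon)         # small relative to x's scale
```

A real DNN is not linear, but the recipe is the same: compute the gradient of the model's output (or loss) with respect to the input, then take a step that is imperceptible per feature yet large enough in aggregate to change the classification.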

Considering real-world scenarios where AI is used in our daily lives to process image, audio, and video data, this book examines the motivations, feasibility, and risks posed by adversarial input. It provides both intuitive and mathematical explanations for the topic and explores how intelligent systems can be made more robust against adversarial input.

Understanding how to fool AI also provides us with insights into the often opaque deep learning algorithms, and discrepancies between how these algorithms and the human brain process sensory input. This book considers these differences and how artificial learning may move closer to its biological equivalent in the future.

Who Should Read This Book

The target audiences of this book are:

  • Data scientists developing DNNs. You will gain a greater understanding of how to create DNNs that are more robust against adversarial input.

  • Solution and security architects incorporating deep learning into operational pipelines that take image, audio, or video data from untrusted sources. After reading this book, you will understand the risks of adversarial input to your organization's information assurance and potential risk mitigation strategies.

  • Anyone interested in the differences between artificial and biological perception. If you fall into this category, this book will provide you with an introduction to deep learning and explanations as to why algorithms that appear to accurately mimic human perception can get it very wrong. You'll also get an insight into where and how AI is being used in our society and how artificial learning may become better at mimicking biological intelligence in the future.

This book is written to be accessible to people from all knowledge backgrounds, while retaining the detail that some readers may be interested in. The content spans AI, human perception of audio and image, and information assurance. It is deliberately cross-disciplinary to capture different perspectives of this fascinating and fast-developing field.

To read this book, you don't need prior knowledge of DNNs. All you need to know is in the introductory chapter on DNNs (Chapter 3). Conversely, if you are a data scientist already familiar with deep learning methods, you may wish to skip that chapter.

The explanations are presented to be accessible to both mathematicians and non-mathematicians. Optional mathematics is included for those who are interested in seeing the formulae that underpin some of the ideas behind deep learning and adversarial input. Just in case you have forgotten your high school mathematics and require a refresher, key notations are included in the appendix.

The code samples are also optional and provided for those software engineers or data scientists who like to put theoretical knowledge into practice. The code is written in Python, using Jupyter notebooks. Code snippets that are important to the narrative are included in the book, but all the code is located in an associated GitHub repository. Full details on how to run the code are also included in the repository.

This is not a book about security surrounding the broader topic of machine learning; its focus is specifically DNN technologies for image and audio processing, and the mechanisms by which they may be fooled without misleading humans.

How This Book Is Organized

This book is split into four parts:

Part I, An Introduction to Fooling AI

This group of chapters provides an introduction to adversarial input and attack motivations and explains the fundamental concepts of deep learning for processing image and audio data:

  • Chapter 1 begins by introducing adversarial AI and the broader topic of deep learning.

  • Chapter 2 considers potential motivations behind the generation of adversarial image, audio, and video.

  • Chapter 3 provides a short introduction to DNNs. Readers with an understanding of deep learning concepts may choose to skip this chapter.

  • Chapter 4 then gives a high-level overview of DNNs used in image, audio, and video processing, providing a foundation for understanding the concepts in the remainder of this book.

Part II, Generating Adversarial Input

Following the introductory chapters of Part I, these chapters explain in detail what adversarial input is and how it is created:

  • Chapter 5 provides a conceptual explanation of the ideas that underpin adversarial input.

  • Chapter 6 then goes into greater depth, explaining computational methods for generating adversarial input (one well-known method is sketched below).
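As a taste of what such a method looks like, the widely cited fast gradient sign method (FGSM) of Goodfellow et al. generates an adversarial example in a single step; this is the standard statement of FGSM, not a formula reproduced from this book:

```latex
x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\left( \nabla_x J(\theta, x, y) \right)
```

Here x is the original input, y its true label, J(θ, x, y) the model's loss for parameters θ, and ε a small constant bounding how much any single feature (a pixel, say) may change. A single step suffices because, locally, it moves every feature in the direction that increases the loss fastest.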
