Volume 865
Studies in Computational Intelligence
Series Editor
Janusz Kacprzyk
Polish Academy of Sciences, Warsaw, Poland
The series Studies in Computational Intelligence (SCI) publishes new developments and advances in the various areas of computational intelligence quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output.
The books of this series are submitted for indexing to Web of Science, EI-Compendex, DBLP, SCOPUS, Google Scholar and SpringerLink.
More information about this series at http://www.springer.com/series/7092
Editors
Witold Pedrycz and Shyi-Ming Chen
Deep Learning: Algorithms and Applications
Editors
Witold Pedrycz
Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada
Shyi-Ming Chen
Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
ISSN 1860-949X e-ISSN 1860-9503
Studies in Computational Intelligence
ISBN 978-3-030-31759-1 e-ISBN 978-3-030-31760-7
https://doi.org/10.1007/978-3-030-31760-7
© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Deep learning has entered a period of intensive and diverse application design, implementation, and deployment, now visible in numerous areas. Successful case studies follow from carefully crafted fundamental concepts that have transformed the way real-world problems are perceived, formalized, and solved in an increasingly machine-centered and automated fashion. Central to all of these pursuits are algorithms that realize the principles of deep learning efficiently. Algorithms align the methodology with the specifics of the practical problem at hand by addressing its computing requirements and by selecting and adjusting the overall algorithmic settings.
The volume is composed of 11 chapters and reflects the wealth of deep learning algorithms and their application studies in a plethora of areas including imaging, seismic tomography, power time series forecasting, smart grids, surveillance, security, health care, environmental engineering, and marine sciences.
We would like to express our thanks to the Series Editor, Prof. Janusz Kacprzyk. He has always been enthusiastic and highly supportive of this project. We are indebted to the professionals at Springer, whose team made the overall production process highly efficient and completed it in a timely manner.
Witold Pedrycz
Shyi-Ming Chen
Edmonton, Canada
Taipei, Taiwan
Contents
Mohit Goyal , Rajan Goyal , P. Venkatappa Reddy and Brejesh Lall
Emilio Rafael Balda , Arash Behboodi and Rudolf Mathar
Janosch Henze , Jens Schreiber and Bernhard Sick
Abdulaziz Almalaq and Jun Jason Zhang
Mauricio Araya-Polo , Amir Adler , Stuart Farris and Joseph Jennings
Jian-Gang Wang and Lu-Bing Zhou
Miguel Martin-Abadal , Ana Ruiz-Frau , Hilmar Hinz and Yolanda Gonzalez-Cid
Juha Niemi and Juha T. Tanttu
Swathi Jamjala Narayanan , Boominathan Perumal , Sangeetha Saman and Aditya Pratap Singh
Omar Costilla-Reyes , Ruben Vera-Rodriguez , Abdullah S. Alharthi , Syed U. Yunas and Krikor B. Ozanyan
Zhenghua Chen , Chaoyang Jiang , Mustafa K. Masood , Yeng Chai Soh , Min Wu and Xiaoli Li
Activation functions lie at the core of deep neural networks, allowing them to learn arbitrarily complex mappings. Without any activation function, a neural network will only be able to learn a linear relation between the input and the desired output. This chapter introduces the reader to why activation functions are useful and to their immense importance in making deep learning successful. It provides a detailed survey of several existing activation functions, covering their functional forms, original motivations, merits, and demerits. The chapter also discusses learnable activation functions and proposes a novel activation function, SLAF, whose shape is learned during the training of a neural network. A working model for SLAF is provided, and its performance is demonstrated experimentally on the XOR and MNIST classification tasks.
Mohit Goyal and Rajan Goyal contributed equally.
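As a concrete illustration of the idea summarized in this abstract, the sketch below shows a toy activation function whose shape is learned during training, written in PyTorch. It is a minimal illustration only and not the SLAF formulation of the chapter: the polynomial parameterization, layer sizes, optimizer, and training settings are assumptions made here for the example.

```python
# Minimal sketch (assumed parameterization, not the chapter's SLAF): an activation
# f(x) = sum_k a_k * x**k whose coefficients a_k are trained with the network.
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    def __init__(self, degree: int = 3):
        super().__init__()
        init = torch.zeros(degree + 1)
        init[1] = 1.0  # initialize close to the identity, f(x) = x
        self.coeffs = nn.Parameter(init)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Stack powers x**0 .. x**degree along a new last axis and combine them
        # with the trainable coefficients.
        powers = torch.stack([x ** k for k in range(self.coeffs.numel())], dim=-1)
        return (powers * self.coeffs).sum(dim=-1)

# A tiny network trained on XOR, the task mentioned in the abstract.
net = nn.Sequential(nn.Linear(2, 4), LearnableActivation(), nn.Linear(4, 1))
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])
opt = torch.optim.Adam(net.parameters(), lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()
print(net(x).detach().round())  # expected to approach [[0], [1], [1], [0]]
```

Replacing LearnableActivation with nn.Identity() reduces the network to a composition of linear maps, which cannot fit XOR; this is the point the abstract makes about networks without activation functions.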