Fuchun Sun, Kar-Ann Toh, Manuel Graña Romay and Kezhi Mao (eds.), Extreme Learning Machines 2013: Algorithms and Applications, Adaptation, Learning, and Optimization, 2014, DOI 10.1007/978-3-319-04741-6_1
© Springer International Publishing Switzerland 2014
Stochastic Sensitivity Analysis Using Extreme Learning Machine
David Becerra-Alonso 1, Mariano Carbonero-Ruz 1, Alfonso Carlos Martínez-Estudillo 1 and Francisco José Martínez-Estudillo 1
(1)
Department of Management and Quantitative Methods, AYRNA Research Group, Universidad Loyola Andalucía, Escritor Castilla Aguayo 4, Córdoba, Spain
David Becerra-Alonso
Email:
Abstract
The Extreme Learning Machine classifier is used to perform the perturbative method known as Sensitivity Analysis. The method returns a measure of class sensitivity per attribute. The results show a strong consistency for classifiers with different random input weights. In order to present the results obtained in an intuitive way, two forms of representation are proposed and contrasted against each other. The relevance of both attributes and classes is discussed. Class stability and the ease with which a pattern can be correctly classified are inferred from the results. The method can be used with any classifier that can be replicated with different random seeds.
Keywords
Extreme learning machine · Sensitivity analysis · ELM feature space · ELM solutions space · Classification · Stochastic classifiers
Introduction
Sensitivity Analysis (SA) is a common tool for ranking the attributes in a dataset in terms of how much they affect a classifier's output. Assuming an optimal classifier, attributes that turn out to be highly sensitive are interpreted as being particularly relevant for the correct classification of the dataset. Low-sensitivity attributes are often considered irrelevant or regarded as noise, which opens the possibility of discarding them for the sake of a better classification. Beyond improving classification, however, SA is a technique that returns a ranking of attributes. When expert information about a dataset is available, researchers can comment on the consistency of certain attributes appearing high or low on the sensitivity scale, and on what this says about the relationship between those attributes and the output being classified.
In this context, the difference between a deterministic and a stochastic classifier is straightforward. Provided a good enough heuristic, a deterministic method will return only one ranking for the sensitivity of the attributes. With such a limited amount of information, it cannot be known whether the attributes are correctly ranked, or whether the ranking is due to a limited or suboptimal performance of the deterministic classifier. This resembles the long-standing principle that applies to accuracy when classifying a dataset (whether deterministically or stochastically): it cannot be known whether the best classifier has reached its topmost performance due to the very nature of the dataset, or whether yet another heuristic could achieve some extra accuracy. Stochastic methods are no better here, since returning an array of accuracies instead of just one (as in the deterministic case) and then choosing the best classifier is no better than simply providing a single good deterministic classification. Once a better accuracy is achieved, the question remains: is the classifier at its best? Is there a better way around it?
On the other hand, when it comes to SA, more can be said about stochastic classifiers. In SA the method returns a ranked array, not a single value such as accuracy. While a deterministic method will return just one ranking of attributes, a stochastic method will return as many as needed. This allows a probabilistic approach to the attributes ranked by a stochastic method. After a sufficiently long run of classifications and their corresponding SAs, an attribute with higher sensitivity will most probably be placed at the top of the sensitivity ranking, while any attribute clearly irrelevant to the classification will eventually drop to the bottom of the list, allowing for a more authoritative claim about its relationship with the output being classified.
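To illustrate this probabilistic reading, the sketch below (hypothetical code, not taken from the chapter) averages the attribute ranks produced by repeated runs of a stochastic classifier; the per-run sensitivity values are assumed to be given by whatever SA procedure is in use, and the function name aggregate_sensitivity_ranks is purely illustrative.

import numpy as np

def aggregate_sensitivity_ranks(sensitivity_runs):
    # sensitivity_runs: shape (n_runs, n_attributes); each row holds the
    # sensitivity values produced by one classifier (one random seed).
    runs = np.asarray(sensitivity_runs, dtype=float)
    # Within each run, rank the attributes so the most sensitive one gets rank 1.
    order = np.argsort(-runs, axis=1)
    ranks = np.empty_like(order)
    rows = np.arange(runs.shape[0])[:, None]
    ranks[rows, order] = np.arange(1, runs.shape[1] + 1)
    # The mean rank over runs is the probabilistic summary described above.
    return ranks.mean(axis=0)

# Toy usage: three runs over four attributes; attribute 0 is consistently
# the most sensitive, so its mean rank stays at 1.
runs = [[0.9, 0.2, 0.1, 0.4],
        [0.8, 0.1, 0.3, 0.5],
        [0.7, 0.3, 0.2, 0.6]]
print(aggregate_sensitivity_ranks(runs))   # approximately [1., 3.33, 3.67, 2.]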
A later section introduces two ways of interpreting sensitivity, and the article ends with conclusions about the methodology.
Sensitivity Analysis
2.1 General Approach
For any given methodology, SA measures how the output is affected by perturbed instances of the method's input []. Any input/output method can be tested in this way, but SA is particularly appealing for black-box methods, where the inner complexity hides the relative relevance of the data introduced. The relationship between a sensitive input attribute and its relevance amongst the other attributes in the dataset seems intuitive, but remains unproven.
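As a generic illustration of this idea (a common formulation, not necessarily the exact measure used in this chapter), the sensitivity of a classifier output $f$ to attribute $j$ around a reference pattern $\mathbf{x}$ can be written as $S_j = \lVert f(\mathbf{x} + \epsilon\,\mathbf{e}_j) - f(\mathbf{x}) \rVert$, where $\mathbf{e}_j$ is the unit vector along attribute $j$ and $\epsilon$ is a small perturbation; the larger $S_j$, the more influential the attribute.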
In the specific context of classifiers, SA is a perturbative method for any classifier dealing with charted datasets []:
(1)
Let us consider the training set given by the patterns $T = \{(\mathbf{x}_n, y_n)\}_{n=1}^{N}$, with $\mathbf{x}_n \in \mathbb{R}^{d}$ and class labels $y_n$. A classifier with as many outputs as class labels in $T$ is trained for the dataset. The highest output determines the class assigned to a certain pattern. A validation used on the trained classifier shows a good generalization, and the classifier is accepted as valid for SA.
(2)
The average of all patterns by attribute results in an average pattern $\bar{\mathbf{x}} = \frac{1}{N}\sum_{n=1}^{N}\mathbf{x}_n$. The maximum pattern $\mathbf{x}^{\max}$ is defined as the vector containing the maximum values of the dataset for each attribute, $x_j^{\max} = \max_n x_{nj}$, and the minimum pattern $\mathbf{x}^{\min}$ is obtained in an analogous way.
(3)
A perturbed pattern is defined as an average pattern in which one of the attributes has been swapped with its corresponding attribute in either the maximum or the minimum pattern. Thus, for attribute $j$ we have $\bar{\mathbf{x}}^{(j,\max)} = (\bar{x}_1, \dots, x_j^{\max}, \dots, \bar{x}_d)$ and $\bar{\mathbf{x}}^{(j,\min)} = (\bar{x}_1, \dots, x_j^{\min}, \dots, \bar{x}_d)$.
(4)
These pairs of perturbed patterns are then processed by the validated classifier.
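As a minimal sketch of steps (1)-(4), the code below assumes a basic single-hidden-layer ELM with random input weights, sigmoid activations and least-squares output weights; the names elm_fit, elm_outputs and perturbative_sa, the choice of 50 hidden nodes, and the use of the per-class output difference between the two perturbed patterns as the sensitivity measure are illustrative assumptions rather than the chapter's exact formulation.

import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    # Train a basic ELM: random input weights, least-squares output weights.
    n_classes = int(y.max()) + 1
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed after init)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden-layer activations
    T = np.eye(n_classes)[y]                      # one-hot class targets
    beta = np.linalg.pinv(H) @ T                  # output weights via pseudo-inverse
    return W, b, beta

def elm_outputs(model, X):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta                               # one output per class

def perturbative_sa(model, X):
    # Steps (2)-(4): perturb the average pattern one attribute at a time.
    x_avg, x_max, x_min = X.mean(axis=0), X.max(axis=0), X.min(axis=0)
    sensitivities = []
    for j in range(X.shape[1]):
        hi, lo = x_avg.copy(), x_avg.copy()
        hi[j], lo[j] = x_max[j], x_min[j]         # swap attribute j
        out_hi, out_lo = elm_outputs(model, np.vstack([hi, lo]))
        # Illustrative measure: per-class output change between the two perturbations.
        sensitivities.append(out_hi - out_lo)
    return np.array(sensitivities)                # shape (n_attributes, n_classes)

# Toy usage with synthetic data: 100 patterns, 4 attributes, 2 classes.
X = rng.normal(size=(100, 4))
y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(int)   # class depends mainly on attribute 0
model = elm_fit(X, y)
print(perturbative_sa(model, X))   # attribute 0 should show the largest class-output change

Because the input weights and biases are drawn at random, repeating elm_fit with different seeds produces the family of sensitivity rankings that the introduction argues can be aggregated probabilistically.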