James C. Bezdek - Elementary Cluster Analysis: Four Basic Methods that (Usually) Work


  • Book:
    Elementary Cluster Analysis: Four Basic Methods that (Usually) Work
  • Author:
    James C. Bezdek
  • Publisher:
    River Publishers
  • Genre:
    Science
  • Year:
    2022
  • City:
    Gistrup

Elementary Cluster Analysis: Four Basic Methods that (Usually) Work: summary and description

The availability of packaged clustering programs means that anyone with data can easily do cluster analysis on it. But many users of this technology don't fully appreciate its many hidden dangers. In today's world of grab-and-go algorithms, part of my motivation for writing this book is to provide users with a set of cautionary tales about cluster analysis, for it is very much an art as well as a science, and it is easy to stumble if you don't understand its pitfalls. Indeed, it is easy to trip over them even if you do! The parenthetical word "usually" in the title is very important, because all clustering algorithms can and do fail from time to time.

Modern cluster analysis has become so technically intricate that it is often hard for the beginner or the non-specialist to appreciate and understand its many hidden dangers. Here's how Yogi Berra put it, and he was right:

"In theory there's no difference between theory and practice. In practice, there is." ~ Yogi Berra

This book is a step backwards, to four classical methods for clustering in small, static data sets that have all withstood the test of time. The youngest of the four methods is now almost 50 years old:

  • Gaussian Mixture Decomposition (GMD, 1898)
  • SAHN Clustering (principally single linkage (SL, 1909))
  • Hard c-means (HCM, 1956, also widely known as k-means)
  • Fuzzy c-means (FCM, 1973, reduces to HCM in a certain limit)

The dates are the first known writing (to me, anyway) about these four models. I am (with apologies to Marvel Comics) very comfortable in calling HCM, FCM, GMD and SL the Fantastic Four.
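The limit relating the last two of the four methods can be seen directly in code. The sketch below is my own minimal illustration, not taken from the book: a bare-bones fuzzy c-means (FCM) loop in Python, run on two synthetic blobs. With a fuzzifier of m = 2 the membership matrix U is genuinely soft; pushing m down toward 1 hardens the memberships toward the 0/1 assignments of hard c-means (k-means).

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100):
    """Minimal fuzzy c-means via alternating optimization (illustrative
    sketch, not the book's code). As the fuzzifier m -> 1+, the membership
    matrix U hardens toward the 0/1 assignments of hard c-means (k-means)."""
    # Simple deterministic init: c points spread across the data order.
    V = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)
    for _ in range(iters):
        # (c, n) matrix of point-to-center Euclidean distances
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)  # guard against zero distances
        # FCM membership update, written in ratio form to avoid overflow
        U = (d / d.min(axis=0, keepdims=True)) ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=0)  # each column of U sums to 1
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)  # weighted-mean centers
    return U, V

# Two synthetic blobs around (0, 0) and (2, 2); memberships harden as m -> 1+.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(2.0, 0.3, (20, 2))])
U_soft, V = fcm(X, c=2, m=2.0)   # genuinely fuzzy memberships
U_hard, _ = fcm(X, c=2, m=1.05)  # nearly crisp, k-means-like memberships
```

Comparing `U_soft` and `U_hard` column by column shows the effect: the winning membership per point sits well below 1 for m = 2 but is essentially 1 for m near 1, which is the "certain limit" the list above alludes to.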

Cluster analysis is a vast topic. The overall picture in clustering is quite overwhelming, so any attempt to swim at the deep end of the pool in even a very specialized subfield requires a lot of training. But we all start out at the shallow end (or at least that's where we should start!), and this book is aimed squarely at teaching toddlers not to be afraid of the water. There is no section of this book that, if explored in real depth, cannot be expanded into its own volume. So, if your needs are for an in-depth treatment of all the latest developments in any topic in this volume, the best I can do - what I will try to do anyway - is lead you to the pool, and show you where to jump in.
