### Item Clustering

**Cluster analysis** or **clustering** is the task of grouping a set of objects in such a way that objects in the same group (called a **cluster**) are more similar (in some sense) to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including

- pattern recognition,
- image analysis,
- information retrieval,
- bioinformatics,
- data compression,
- computer graphics,
- and machine learning.

Cluster analysis itself is not one specific algorithm, but the general task to be solved. It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multi-objective optimization problem.

- The appropriate clustering algorithm and parameter settings (including parameters such as the distance function to use, a density threshold or the number of expected clusters) depend on the individual data set and intended use of the results. Cluster analysis as such is not an automatic task, but an iterative process of knowledge discovery or interactive multi-objective optimization that involves trial and failure. It is often necessary to modify data preprocessing and model parameters until the result achieves the desired properties.
- Besides the term *clustering*, there are a number of terms with similar meanings, including *automatic classification*, *numerical taxonomy*, *botryology* (from Greek βότρυς, “grape”), *typological analysis*, and *community detection*. The subtle differences are often in the use of the results: while in data mining the resulting groups are the matter of interest, in automatic classification the resulting discriminative power is of interest.

Understanding these “cluster models” is key to understanding the differences between the various algorithms. Typical cluster models include:

- **Connectivity models:** for example, hierarchical clustering builds models based on distance connectivity.
- **Centroid models:** for example, the k-means algorithm represents each cluster by a single mean vector.
- **Distribution models:** clusters are modeled using statistical distributions, such as the multivariate normal distributions used by the expectation-maximization algorithm.
- **Density models:** for example, DBSCAN and OPTICS define clusters as connected dense regions in the data space.
- **Subspace models:** in biclustering (also known as co-clustering or two-mode clustering), clusters are modeled with both cluster members and relevant attributes.
- **Group models:** some algorithms do not provide a refined model for their results and just provide the grouping information.
- **Graph-based models:** a clique, that is, a subset of nodes in a graph such that every two nodes in the subset are connected by an edge, can be considered a prototypical form of cluster. Relaxations of the complete connectivity requirement (a fraction of the edges can be missing) are known as quasi-cliques, as in the HCS clustering algorithm.
- **Signed graph models:** every path in a signed graph has a sign from the product of the signs on its edges. Under the assumptions of balance theory, edges may change sign and result in a bifurcated graph. The weaker “clusterability axiom” (no cycle has exactly one negative edge) yields results with more than two clusters, or subgraphs with only positive edges.^{[6]}
- **Neural models:** the most well-known unsupervised neural network is the self-organizing map, and these models can usually be characterized as similar to one or more of the above models, including subspace models when neural networks implement a form of Principal Component Analysis or Independent Component Analysis.
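To make the centroid model concrete, here is a minimal toy k-means sketch in Python (the function name, the fixed iteration count, and the sample data are illustrative assumptions, not from the source): each point is assigned to its nearest centroid, and each centroid is then recomputed as the mean of its cluster.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Toy k-means on 2-D points: alternate between assigning points to
    their nearest centroid and recomputing centroids as cluster means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the centroid with the smallest squared distance to p
            j = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:                        # keep old centroid if cluster empties
                centroids[j] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    return centroids, clusters

# two well-separated groups of three points each
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(points, k=2)
```

On this toy data the algorithm converges to the two visually obvious groups; on real data the result depends on initialization, which is one reason the text calls clustering an iterative, trial-and-error process.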

There are also finer distinctions possible, for example:

- **Strict partitioning clustering:** each object belongs to exactly one cluster.
- **Strict partitioning clustering with outliers:** objects can also belong to no cluster, and are considered outliers.
- **Overlapping clustering** (also: *alternative clustering*, *multi-view clustering*): objects may belong to more than one cluster; usually involving hard clusters.
- **Hierarchical clustering:** objects that belong to a child cluster also belong to the parent cluster.
- **Subspace clustering:** while an overlapping clustering, within a uniquely defined subspace, clusters are not expected to overlap.
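The hierarchical case above can be sketched with a tiny agglomerative single-linkage procedure in Python (the function name and toy data are illustrative assumptions): every point starts in its own cluster, and the two closest clusters are merged until `k` remain, so members of a child cluster always stay together in the parent cluster.

```python
def single_linkage(points, k):
    """Agglomerative single-linkage clustering: repeatedly merge the two
    clusters whose closest members are nearest, until k clusters remain."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    clusters = [[p] for p in points]           # each point is its own cluster
    while len(clusters) > k:
        best = None                            # (distance, i, j) of closest pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist2(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)         # merge the closest pair
    return clusters

data = [(0, 0), (0, 1), (5, 5), (5, 6), (9, 9)]
result = single_linkage(data, 2)
# result groups the near pair {(0,0),(0,1)} apart from {(5,5),(5,6),(9,9)}
```

Cutting the merge process at different values of `k` yields the nested partitions of the hierarchy; the quadratic pairwise search here is only for readability, not efficiency.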
