We review similarity and distance measures used in statistics for clustering and classification. We are motivated by the failure of most measures to adequately utilize a non-uniform distribution defined on the data or sample space.
Such measures are mappings from $\Omega \times \Omega$ to $\mathbb{R}_{+}$, where $\Omega$ is either a finite set of objects or a vector space like $\mathbb{R}^{p}$, and $\mathbb{R}_{+}$ is the set of non-negative real numbers. In most cases these mappings fulfil conditions like symmetry and reflexivity. Moreover, further characteristics such as transitivity or, in the case of distance measures, the triangle inequality are of concern.
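Such conditions can be checked empirically on a finite sample of points. The following is a minimal Python sketch; the helper `is_metric`, its tolerance parameter, and the sample points are illustrative choices, not part of any measure discussed here:

```python
import itertools

def is_metric(d, points, tol=1e-12):
    """Check the usual distance axioms on a finite sample of points:
    reflexivity d(x, x) = 0, symmetry d(x, y) = d(y, x), and the
    triangle inequality d(x, z) <= d(x, y) + d(y, z)."""
    for x in points:
        if abs(d(x, x)) > tol:                 # reflexivity
            return False
    for x, y in itertools.combinations(points, 2):
        if abs(d(x, y) - d(y, x)) > tol:       # symmetry
            return False
    for x, y, z in itertools.permutations(points, 3):
        if d(x, z) > d(x, y) + d(y, z) + tol:  # triangle inequality
            return False
    return True

# Illustrative check: the Manhattan distance on a few sample vectors.
manhattan = lambda x, y: sum(abs(a - b) for a, b in zip(x, y))
pts = [(0, 0), (1, 2), (3, 1), (-2, 4)]
```

A check on a finite sample can only refute the axioms, not prove them; for instance, the squared difference `(x - y) ** 2` on the scalars `0, 1, 2` already fails the triangle inequality test above.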
We start with Hartigan’s list of proximity measures, which he compiled in 1967. It is good practice to pay special attention to the types of scales of the variables involved, i.e. to nominal (often binary), ordinal and metric (interval and ratio) scales. We are interested in the algebraic structure of proximities as suggested by () and (), in information-theoretic measures as discussed by (), and in the probabilistic W-distance measure as proposed by (). The last measure combines distances between objects or vectors with their corresponding probabilities to improve overall discriminative power. The idea is that rare events, i.e. sets of values with a very low probability of being observed, related to a pair of objects may strongly hint at the similarity of this pair.
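The W-distance itself is only described qualitatively above. The following Python sketch merely illustrates the general idea of combining a base distance with probabilities; the function `weighted_distance` and the rarity weight $-\log(p(x)\,p(y))$ are hypothetical choices for illustration, not the definition proposed in the cited work:

```python
import math

def weighted_distance(x, y, d, p):
    """Illustrative probability-weighted distance (NOT the actual
    W-distance definition): scale the base distance d(x, y) by the
    joint rarity of the observed pair, so that agreement on rare
    values counts as stronger similarity. The weight
    -log(p(x) * p(y)) is a hypothetical choice."""
    rarity = -math.log(p(x) * p(y))  # large when both values are rare
    return d(x, y) / (1.0 + rarity)  # rare pairs shrink the distance

# Hypothetical discrete distribution and a simple 0/1 base distance:
probs = {"a": 0.6, "b": 0.3, "c": 0.1}
d01 = lambda x, y: 0.0 if x == y else 1.0
```

Under this weighting, the pair ("c", "b") comes out closer than the pair ("a", "b"), because "c" is rarer than "a". Note that such a reweighting need not preserve the triangle inequality, so the result is in general a proximity measure rather than a metric.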