Scientific studies often require assessment of similarity between ordered sets of values. Each set, containing one value for every dimension or class of data, can be conveniently represented as a vector. Commonly used metrics for vector similarity include angle-based metrics, such as cosine similarity or Pearson correlation, which compare the relative patterns of values, and distance-based metrics, such as the Euclidean distance, which compare the magnitudes of values. Here we evaluate a newly proposed metric, pairwise relative distance (PRED), which considers both relative patterns and magnitudes to provide a single measure of vector similarity. PRED essentially reveals whether the vectors are dissimilar enough that their values across the classes are separable. By comparing PRED to other common metrics in a variety of applications, we show that PRED provides a stable chance level irrespective of the number of classes, is invariant to global translation and scaling operations on data, has a high dynamic range and low variability when handling noisy data, and can handle multi-dimensional data, as in the case of vectors containing temporal or population responses for each class. We also found that PRED can be adapted to function as a reliable metric of class separability even for datasets that lack the vector structure and simply contain multiple values for each class.
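To make the contrast between angle-based and distance-based metrics concrete, the following sketch (a minimal illustration using NumPy; the function names are ours, and PRED itself is not shown here since its definition is beyond this summary) compares how cosine similarity, Pearson correlation, and Euclidean distance respond to a global scaling and translation of a vector:

```python
import numpy as np

def cosine_similarity(a, b):
    # Angle-based: compares direction, invariant to scaling but not to translation.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def pearson(a, b):
    # Angle-based after mean-centering: invariant to both scaling and translation.
    return np.corrcoef(a, b)[0, 1]

def euclidean(a, b):
    # Distance-based: sensitive to differences in magnitude.
    return np.linalg.norm(a - b)

a = np.array([1.0, 2.0, 3.0, 4.0])
b = 2.0 * a + 5.0  # global scaling and translation of a

print(pearson(a, b))            # 1.0: relative pattern is unchanged
print(cosine_similarity(a, b))  # < 1.0: translation changes the angle
print(euclidean(a, b))          # > 0: magnitudes differ
```

No single one of these metrics reflects both the relative pattern and the magnitudes at once, which is the gap the abstract describes PRED as addressing.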