Comparing representational geometries using the unbiased distance correlation
Document Type
Article
Publication Date
2020
Abstract
Representational similarity analysis (RSA) tests models of brain computation by investigating how neural activity patterns change in response to different experimental conditions. Instead of predicting activity patterns directly, the models predict the geometry of the representation, i.e., to what extent experimental conditions are associated with similar or dissimilar activity patterns. RSA therefore first quantifies the representational geometry by calculating a dissimilarity measure for all pairs of conditions, and then compares the estimated representational dissimilarities to those predicted by the model. Here we address two central challenges of RSA: First, dissimilarity measures such as the Euclidean, Mahalanobis, and correlation distance are biased by measurement noise, which can lead to incorrect inferences. Unbiased dissimilarity estimates can be obtained by cross-validation, at the price of increased variance. Second, the pairwise dissimilarity estimates are not statistically independent, and ignoring this dependency makes model comparison with RSA statistically suboptimal. We present analytical expressions for the mean and (co)variance of both biased and unbiased estimators of the Euclidean and Mahalanobis distance, allowing us to quantify the bias-variance trade-off exactly. We then use the analytical expression for the covariance of the dissimilarity estimates to derive a simple method that corrects for this covariance. Combining unbiased distance estimates with this correction leads to a novel criterion for comparing representational geometries, the unbiased distance correlation, which, as we show, allows for near-optimal model comparison.
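
To make the two ingredients described in the abstract concrete, the sketch below illustrates how cross-validated squared Euclidean distances yield an unbiased RDM estimate, and how such an estimate can be compared to a model RDM with a cosine-style correlation that does not subtract the mean. This is a minimal illustration under assumed variable names and data shapes, not the authors' reference implementation; in particular, the whitening by the covariance of the dissimilarity estimates is omitted for brevity.

```python
# Minimal sketch (assumed data layout, not the paper's reference code):
# cross-validated (unbiased) squared Euclidean distances between condition
# patterns, and a simple cosine-style comparison of the resulting RDM to a
# model RDM. `data` is assumed to have shape
# (n_partitions, n_conditions, n_channels).
import numpy as np
from itertools import combinations

def crossvalidated_sq_distances(data):
    """Unbiased squared Euclidean distances, averaged over partition pairs."""
    n_part, n_cond, n_chan = data.shape
    cond_pairs = list(combinations(range(n_cond), 2))
    d = np.zeros(len(cond_pairs))
    n_folds = 0
    for a, b in combinations(range(n_part), 2):  # independent partitions
        diff_a = np.array([data[a, i] - data[a, j] for i, j in cond_pairs])
        diff_b = np.array([data[b, i] - data[b, j] for i, j in cond_pairs])
        # Inner product of difference vectors from independent partitions:
        # the noise terms are uncorrelated, so the squared-noise bias of the
        # plain Euclidean distance cancels in expectation.
        d += np.sum(diff_a * diff_b, axis=1) / n_chan
        n_folds += 1
    return d / n_folds

def rdm_cosine(d_hat, d_model):
    """Cosine similarity between distance vectors, without mean subtraction:
    unbiased distances are meaningful relative to zero, so zero is kept as
    the reference point (a simplified stand-in for the paper's criterion)."""
    return d_hat @ d_model / (np.linalg.norm(d_hat) * np.linalg.norm(d_model))

# Toy usage: 4 partitions, 5 conditions, 100 channels of simulated data.
rng = np.random.default_rng(0)
true_patterns = rng.standard_normal((5, 100))
data = true_patterns + 0.5 * rng.standard_normal((4, 5, 100))
d_hat = crossvalidated_sq_distances(data)
print(rdm_cosine(d_hat, d_hat))  # self-comparison, close to 1
```

Because the noise bias cancels, individual cross-validated distance estimates can be negative; that is expected and is exactly what makes the estimator unbiased on average.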