Assigning shares of “relative importance” to each of a set of regressors is one of the key goals of researchers applying linear regression, particularly in sciences that work with observational data. Although the topic is quite old, advances in computational capabilities have led to increased use of computer-intensive methods, such as averaging over orderings, that enable a reasonable decomposition of the model variance. This article serves two purposes: to reconcile the large and somewhat fragmented body of recent literature on relative importance, and to investigate the theoretical and empirical properties of the key competitors for decomposing the model variance.
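As a rough illustration of the averaging-over-orderings idea mentioned above, the following Python sketch (not taken from the article; the helper names `lmg_shares` and `r_squared` are hypothetical) attributes to each regressor its average increment in R² over all orders in which the regressors can enter the model, so that the shares sum to the full model's R².

```python
# A minimal sketch, assuming an OLS setting with a small number of regressors,
# of decomposing R^2 by averaging each regressor's R^2 increment over orderings.
import itertools
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on the columns of X (intercept included)."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def lmg_shares(X, y):
    """Average R^2 increment of each regressor over all p! entry orderings."""
    n, p = X.shape
    shares = np.zeros(p)
    perms = list(itertools.permutations(range(p)))
    for perm in perms:
        r2_prev = 0.0
        for i, j in enumerate(perm):
            # R^2 of the model containing the first i+1 regressors of this ordering
            r2_curr = r_squared(X[:, list(perm[:i + 1])], y)
            shares[j] += r2_curr - r2_prev
            r2_prev = r2_curr
    return shares / len(perms)

# Hypothetical example: the shares sum to the full model's R^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=200)
print(lmg_shares(X, y), r_squared(X, y))
```

Because the number of orderings grows as p!, such a brute-force computation is only feasible for a handful of regressors, which is why the methods discussed in this article are described as computer-intensive.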