Node similarity scores constitute a foundation for machine learning in graphs. Besides clustering, node classification, and anomaly detection, they are a basis for link prediction, with critical applications in biological systems, information networks, and recommender systems. Recent work on link prediction uses vector space embeddings to calculate node similarities. While these methods can perform well on undirected networks, they have several disadvantages: limited interpretability, problem-specific hyperparameter tuning, manual model fitting through dimensionality reduction, and the poor performance of symmetric similarities in directed link prediction. To address these issues, we propose MapSim, a novel information-theoretic approach that assesses node similarities based on a modular compression of network flows. Unlike vector space embeddings, MapSim represents nodes in a discrete, non-metric space of communities and yields asymmetric similarities suitable for predicting both directed and undirected links in an unsupervised fashion. The resulting similarities can be explained in terms of a network's hierarchical modular organisation, facilitating interpretability. MapSim naturally accounts for Occam's razor, leading to parsimonious representations of clusters at multiple scales. For unsupervised link prediction, we compare MapSim to popular embedding-based algorithms on 47 networks ranging from a few hundred to hundreds of thousands of nodes and up to millions of links. Our analysis shows that MapSim's average performance across all networks is more than 7% higher than that of its closest competitor, that it outperforms all embedding methods in 14 of the 47 networks, and that its worst-case performance is more than 33% better. Our method demonstrates the potential of compression-based approaches to graph representation learning, with promising applications in other graph learning tasks.
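
To convey the flavour of community-based, asymmetric similarities, the following minimal Python sketch is illustrative only and is not the authors' algorithm: it assumes a toy directed graph, a given hard partition into two modules, and degree-based visit rates, and derives a codelength-style score in which pointing to a node inside the source's own module is cheaper than pointing across modules. The function name `mapsim_like`, the example graph, and the partition are all hypothetical.

```python
import math
from collections import defaultdict

# Toy directed graph as an edge list (illustrative only).
edges = [
    ("a", "b"), ("b", "c"), ("c", "a"),  # module 1
    ("d", "e"), ("e", "f"), ("f", "d"),  # module 2
    ("c", "d"),                          # one cross-module link
]

# Hypothetical hard partition into modules; in practice this would come
# from a community-detection step on the network flows.
module_of = {"a": 1, "b": 1, "c": 1, "d": 2, "e": 2, "f": 2}

# Approximate node visit rates by degree fractions (an assumption; a
# random-walk stationary distribution could be used instead).
deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
total_deg = sum(deg.values())
visit = {v: deg[v] / total_deg for v in module_of}

# Aggregate visit rates per module.
module_visit = defaultdict(float)
for v, p in visit.items():
    module_visit[module_of[v]] += p


def mapsim_like(u, v):
    """Asymmetric, codelength-style similarity from u to v (illustrative).

    Naming a target inside u's own module costs only the bits needed to
    identify v within that module; naming a target in another module adds
    the bits needed to identify that module first, so cross-module targets
    score lower.
    """
    p_v_in_module = visit[v] / module_visit[module_of[v]]
    cost = -math.log2(p_v_in_module)  # bits to name v inside its module
    if module_of[u] != module_of[v]:
        cost += -math.log2(module_visit[module_of[v]])  # bits to name v's module
    return -cost  # higher (less negative) means more similar


# Same module, asymmetric: c is visited more often than a, so pointing
# at c from a scores higher than pointing at a from c.
print(mapsim_like("a", "c"), mapsim_like("c", "a"))

# Cross-module target d has the same degree as c but scores lower,
# because naming d also requires naming its module.
print(mapsim_like("a", "d"))
```

The sketch uses a single two-level partition for brevity; a hierarchical partition would simply add one codelength term per level crossed, which is what makes such scores explainable in terms of the network's modular organisation.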