Motivated by the successes in the field of deep learning, the scientific community has become increasingly interested in neural networks that can reason about physics. Because neural networks are universal approximators, they could in theory learn representations that are more efficient than those of traditional methods, wherever such improvements are possible. This thesis, conducted in collaboration with Algoryx, serves both as a review of the current research in this area and as an experimental investigation of a subset of the proposed methods. We focus on how useful these methods are as \textit{learnable simulators} of mechanical systems, which may be constrained and multiscale. The experimental investigation considers low-dimensional problems, with training data generated either by custom numerical integration or with the physics engine AGX Dynamics. A good learnable simulator should possess several important properties: it should be stable, accurate, generalizable, and fast. Importantly, a generalizable simulator must be able to represent reconfigurable environments, which requires a model known as a graph neural network (GNN). The experimental results show that black-box neural networks are limited to approximating physics in the states they have been trained on. The results also suggest that traditional message passing, currently the most widely used method of realizing GNNs, has a limited ability to represent more challenging multiscale systems. This raises concern, as there is little to be gained by investing time in a method with fundamental limitations.