This paper describes the contribution by participants from Umeå University, Sweden, in collaboration with the University of Bern, Switzerland, to the Medical Domain Visual Question Answering challenge hosted by ImageCLEF 2019. We propose a novel Visual Question Answering approach that leverages a bilinear model to aggregate and synthesize extracted image and question features. While we did not make use of any additional training data, our model employs an attention scheme to focus on the relevant input context, and its performance is further boosted by an ensemble of trained models. We show that the proposed approach performs at state-of-the-art levels and improves over several existing methods. The proposed method was ranked 3rd in the Medical Domain Visual Question Answering challenge of ImageCLEF 2019.