Artificial intelligence has acted as an essential driver of emerging technologies by employing many sophisticated Machine Learning (ML) models, yet the lack of model transparency and explanation of results limits its effectiveness in real decision-making. eXplainable AI (XAI) bridges this gap by providing explanations of the outcomes produced by these complex ML models. In this paper, we classify the functioning of an air handling unit (AHU) using a neural network and employ Contextual Importance and Utility (CIU) as an XAI module for explaining the outcomes of the neural network. We show that CIU can generate transparent and human-understandable explanations that the end-user can use to support decision-making, demonstrating the applicability of the method in a novel use case. Visual and textual explanations of the causes of an individual prediction are derived from the CI and CU values, which are numeric quantities calculated from the outputs of the machine learning module. Our approach also provides contrastive explanations against causes that were not involved in the decision.
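To make the numeric nature of CI and CU concrete, the following Python sketch illustrates how both values can be estimated for a single feature of a single prediction by sweeping that feature over its value range while keeping the rest of the context fixed. It is a minimal illustration of the standard CIU definitions, not the implementation used in this paper; the `predict_proba` callable, the feature ranges, and the grid size are illustrative assumptions.

```python
# Minimal sketch of Contextual Importance (CI) and Contextual Utility (CU)
# for one feature of one prediction. Illustrative only; the model interface,
# feature ranges, and sampling grid are hypothetical placeholders.
import numpy as np

def ciu_for_feature(predict_proba, x, feature_idx, feature_range,
                    target_class, n_samples=100, abs_range=(0.0, 1.0)):
    """Estimate CI and CU of one feature for the prediction on instance x.

    predict_proba : callable mapping an (n, d) array to (n, k) class probabilities
    x             : 1-D array, the instance (context) being explained
    feature_range : (low, high) interval over which the feature may vary
    abs_range     : overall attainable output range (0..1 for probabilities)
    """
    # Sweep the chosen feature over its range, holding the other
    # features fixed at their contextual values.
    grid = np.linspace(feature_range[0], feature_range[1], n_samples)
    variants = np.repeat(x[None, :], n_samples, axis=0)
    variants[:, feature_idx] = grid
    outputs = predict_proba(variants)[:, target_class]

    # Output for the actual (unmodified) instance.
    y = predict_proba(x[None, :])[0, target_class]
    cmin, cmax = outputs.min(), outputs.max()

    # CI: fraction of the absolute output range this feature can span in this context.
    ci = (cmax - cmin) / (abs_range[1] - abs_range[0])
    # CU: where the current output lies within that contextual range.
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```

A feature with high CI strongly constrains the prediction in this context, while its CU indicates whether the feature's current value pushes the prediction towards a favourable or unfavourable outcome; these two numbers are what the visual and textual explanations are built from.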