This work applies quantitative bipolar argumentation to detect deception in machine learning pipelines. We explore deception in the context of a party that develops a machine learning model while interacting with potentially malformed data sources. The objective is to identify deceptive or adversarial data and to assess the effectiveness of comparative analysis at different stages of model training. By modeling agreement and disagreement between data points as support and attack relations between arguments, and by applying quantitative strength measures, we propose techniques for detecting outliers in data. We also discuss further applications in clustering and uncertainty modeling.
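The idea of treating data points as arguments can be illustrated with a minimal sketch. The abstract does not fix a particular gradual semantics, so the following assumes the standard DF-QuAD semantics for quantitative bipolar argumentation frameworks; the toy data, base scores, and the outlier threshold of 0.3 are hypothetical choices for illustration only.

```python
def aggregate(xs):
    # DF-QuAD aggregation: probabilistic sum of attacker/supporter strengths
    s = 0.0
    for x in xs:
        s = s + x - s * x
    return s

def dfquad_strengths(base, attacks, supports, iters=50):
    """Iteratively apply the DF-QuAD combination function.
    `base` maps argument -> base score; `attacks`/`supports` map
    argument -> list of attacking/supporting arguments."""
    strength = dict(base)
    for _ in range(iters):
        new = {}
        for a in base:
            va = aggregate([strength[b] for b in attacks.get(a, [])])
            vs = aggregate([strength[b] for b in supports.get(a, [])])
            v0 = base[a]
            if va >= vs:
                new[a] = v0 - v0 * (va - vs)
            else:
                new[a] = v0 + (1 - v0) * (vs - va)
        strength = new
    return strength

# Hypothetical toy data: x3 disagrees with x1 and x2 (attacks),
# while x1 and x2 agree with each other (mutual support).
base = {"x1": 0.5, "x2": 0.5, "x3": 0.5}
attacks = {"x3": ["x1", "x2"]}
supports = {"x1": ["x2"], "x2": ["x1"]}

strength = dfquad_strengths(base, attacks, supports)
outliers = [a for a, s in strength.items() if s < 0.3]
print(outliers)  # the attacks drag x3's strength toward 0
```

In this sketch, mutual agreement reinforces the strengths of x1 and x2, while the disagreement relations reduce the strength of x3, so a simple threshold on final strength flags it as an outlier.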