A key intuition in symbolic artificial intelligence is that an intelligent system should be non-monotonic, but cautiously so: previous conclusions should only be revised if a compelling reason for doing so exists. In this paper, I trace the evolution of this intuition, which emerged from Dov Gabbay’s seminal 1985 paper and gained additional prominence as cautious monotonicity in the 1990 KLM paper, as well as in an earlier paper by Makinson. I introduce the term cautious nonmonotonicity for the general idea of ensuring that monotonicity is satisfied under some condition, thus highlighting that it is the violation, and not the satisfaction, of monotonicity that we need to be careful about. I also discuss why cautious nonmonotonicity remains an open problem in theory and practice, and I present some results that highlight the intricacy of cautious nonmonotonicity even in the simple case of abstract argumentation, where inferences are drawn from directed graphs without further structure.
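For context, the property of cautious monotonicity mentioned above is standardly stated in the KLM framework as the following inference rule, where $\mathrel{|\!\sim}$ denotes the defeasible consequence relation (the notation here is a standard gloss, not taken from this paper):

```latex
% Cautious Monotonicity (KLM): adding an already-derived conclusion B
% to the premises does not invalidate another conclusion C.
\[
\frac{A \mathrel{|\!\sim} B \qquad A \mathrel{|\!\sim} C}
     {A \wedge B \mathrel{|\!\sim} C}
\]
```

Informally: learning something one already defeasibly expected should never force the retraction of other expected conclusions.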
Special Issue to Celebrate Dov Gabbay's 80th Birthday.
ISBN: 978-1-84890-492-7