State-of-the-art intelligent interactive agents, such as Alexa or Siri, often exhibit overly conforming behaviour when interacting with humans, which can lead to a misalignment between end-user expectations and agent behaviour. To address this barrier in human-AI interaction, we introduce the Critical Friend (CF), a concept that characterises critical behaviour in human-human interactions. We present our results as a formal model, expressed in description logic and amenable to automated reasoning, which enables implementations of the CF as an intelligent interactive agent.