Collaborative intelligence between humans and intelligent systems relies heavily on the ability of both parties to reach agreements. This requires complex dialogue processes that combine commonsense human reasoning with goal-oriented decision-making by the intelligent systems, which must account for the human's dynamic goals and changing beliefs. This project addresses these challenges by studying non-monotonic reasoning techniques in the setting of strategic interaction between intelligent systems and humans. To capture the underlying logic of human reasoning, we explore logical formalizations of cognitive theories, e.g., in abstract argumentation or answer set programming. These reasoning architectures will support the decision-making of rational agents that engage in dialogue-based interaction with humans. With a particular focus on applications of persuasive technology, we view strategic argumentation as a decision-making process aimed at changing the mental states of human agents.
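As a minimal illustration of the kind of non-monotonic formalism named above, the following Python sketch computes the grounded extension of a Dung-style abstract argumentation framework by iterating its characteristic function from the empty set. The framework and the argument names (`flies`, `penguin`) are hypothetical and serve only to show the non-monotonic effect: a conclusion accepted earlier is retracted once a new attacking argument is added.

```python
def attackers(a, attacks):
    """All arguments that attack argument `a` in the given attack relation."""
    return {b for (b, t) in attacks if t == a}

def grounded_extension(arguments, attacks):
    """Grounded extension of the framework (arguments, attacks).

    `attacks` is a set of (attacker, target) pairs. An argument is
    acceptable w.r.t. a set S if every attacker of it is itself attacked
    by some member of S; the grounded extension is the least fixed point
    of this characteristic function, reached by iterating from the empty set.
    """
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers(a, attacks))
        }
        if acceptable == extension:
            return extension
        extension = acceptable

if __name__ == "__main__":
    # With only the argument "Tweety flies, since it is a bird", it is accepted.
    print(grounded_extension({"flies"}, set()))                      # {'flies'}
    # Learning "Tweety is a penguin, so it does not fly" defeats it:
    # the previously accepted conclusion is withdrawn (non-monotonicity).
    print(grounded_extension({"flies", "penguin"},
                             {("penguin", "flies")}))                # {'penguin'}
```

This is only a sketch under the grounded (skeptical) semantics; the project's other formalisms, such as answer set programming, support analogous defeasible inferences.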