If a technological agent treats people unequally, it may be perceived as “being unfair.” But in what sense can fairness be considered an attribute of a nonhuman entity – a thing? This paper addresses this question through an exploratory study combining an experiment and a focus group. In the experiment, implemented as a quiz game hosted by an agent, two levels of the participants’ Treatment by the agent (Fair/Unfair) were crossed with two levels of the agent’s Anthropomorphism (High/Low) in a 2×2 design. Data on participants’ perceptions of the agents were collected through Likert-scale questionnaires and post-session interviews. A subset of participants took part in a follow-up focus group, in which they shared their thoughts and reflections on the fairness of intelligent agents, grounded in their prior quiz game experience. The results suggest that while the perceived fairness of an agent is a key aspect of human-agent interaction, operationalizing it is complicated by its ambiguity, its context dependence, and its entanglement with other aspects of the interaction.