It is becoming increasingly common to replace or augment human decision-making with algorithmic calculations and evaluations based on artificial intelligence (AI). Facial analysis (FA) systems exemplify how AI is intertwined with both the most mundane and the most critical aspects of human life. Analyzing images for the purposes of face detection, recognition and/or classification, FA reveals an entanglement between human identity, self-presentation and computation. In this chapter, we discuss automated facial analysis technology from a queer theoretical standpoint, focusing on the concerns and risks that arise when systems such as FA categorize, measure and make decisions in a binary way based on computerized assumptions about gender and sexuality. Further, we discuss issues of privacy, bias and fairness related to FA technology, as well as potential improvements, for example through participatory design. Finally, this chapter suggests that a queer perspective on FA can open up new ways of relating to technology.