In the half-decade since the 2016 election, misinformation bots have been prominent in the research agendas of law, political science, and other fields, as scholars have increasingly documented the extent to which such forces were both operational and impactful. The perceived threat, one closely watched in the 2020 election, was to the proper functioning of our democratic systems, both at the polls and in the tenor and quality of public discourse. In short, algorithms that created seemingly human entities had threatened much that democratic societies hold dear. But bots were not the only instance of machine impostors; deceptive technologies appear in manifold forms, from misinformation bots and fake news to deepfakes and robot “performance videos”: countless technologies that function to conflate truth and falsity, mentation and computation, authenticity and fabrication, or that function not to deceive but rather to manipulate. Moreover, technologies are being developed to deceive the deceivers, and researchers in the behavioral sciences are working to understand trust, deception, and the cognitions that facilitate them.