Dear colleagues and friends,
A dear colleague shares with us this article, published on May 2, 2024 in the newsletter of the Julius-Maximilians-Universität Würzburg and translated by us for this space. Let's see what it's all about...
Artificial intelligence (AI) could soon help identify lies and deception. However, a research team from the universities of Marburg and Würzburg warns of the risks of premature use.
Oh, if only it were as easy as with Pinocchio. With him it was easy to tell when he was lying: his nose grew a little longer each time. In reality, recognizing lies is much harder, and it is understandable that scientists have long been trying to develop valid methods for detecting deception.
Much hope is now being placed in artificial intelligence (AI) to achieve this goal, for example in attempts to identify travelers with criminal intent at the European Union's (EU) borders in Hungary, Greece and Lithuania.
A valuable tool for basic research
Researchers from the universities of Marburg and Würzburg warn against the premature use of AI to detect lies. In their view, the technology is a potentially valuable tool for basic research aimed at better understanding the psychological mechanisms that underlie deception. However, they are more than skeptical about its application, right now, in real-life contexts.
Kristina Suchotzki and Matthias Gamer are responsible for the study, which is now published in the journal Trends in Cognitive Sciences.
Kristina Suchotzki is a professor at the University of Marburg; her research focuses on lies and how to detect them. Matthias Gamer is a professor at the University of Würzburg; one of his main areas of research is the diagnosis of credibility.
Three central problems for applied use
In their publication, Suchotzki and Gamer identify three main problems in current research on AI-based deception detection: the lack of explainability and transparency of the algorithms tested, the risk of biased results, and deficits in the theoretical foundation. The reason, they write, is clear: “Unfortunately, current approaches have focused primarily on technical aspects, to the detriment of a solid methodological and theoretical foundation.”
In their article, they explain that many AI algorithms suffer from a “lack of explainability and transparency”. It is often unclear how the algorithm arrives at its result. In some AI applications, a point is reached where even the developers can no longer clearly understand how a judgment comes about. This makes it impossible to critically evaluate decisions and to discuss the reasons for misclassifications.
Another problem they describe is the emergence of “biases” in the decision-making process. The original hope was that machines could overcome human biases such as stereotypes and prejudice. In reality, however, this assumption often fails because of a poor selection of the variables that humans feed into the model, as well as the small size and lack of representativeness of the data used. Not to mention that the data used to build such systems is often already biased.
The third problem is fundamental: “The use of artificial intelligence in lie detection is based on the assumption that it is possible to identify a valid signal, or combination of signals, that is unique to deception,” Suchotzki explains. However, even decades of research have failed to identify such unique signs, nor is there any theory that can convincingly predict their existence.
High susceptibility to errors in mass screening
That said, Suchotzki and Gamer do not want to discourage work on AI-based deception detection. Ultimately, it is an empirical question whether this technology has the potential to deliver sufficiently valid results. In their opinion, however, several conditions must be met before its use in real life can even be considered.
“We strongly recommend that decision makers carefully check if basic quality standards have been met in the development of algorithms,” they say. Prerequisites include controlled laboratory experiments, large and diverse data sets without systematic biases, and the validation of algorithms and their accuracy on a large, independent data set.
The goal should be to avoid unnecessary false positives, that is, cases in which the algorithm mistakenly believes it has detected a lie. There is a big difference between using AI as a mass screening tool, for example at airports, and using it for specific incidents, such as the interrogation of a suspect in a criminal case.
“Mass detection applications often involve very unstructured and uncontrolled evaluations. This dramatically increases the number of false positive results,” explains Gamer.
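The base-rate effect behind this warning can be illustrated with a small back-of-the-envelope calculation. The sketch below is not taken from the article; the traveler numbers and accuracy figures are hypothetical assumptions, chosen only to show how false alarms come to dominate when very few of the people being screened actually intend to deceive.

```python
# Hypothetical illustration (not from the article): why mass screening
# tends to produce many false positives even with a seemingly accurate detector.
# All numbers below are assumptions made up for this example.

def screening_outcomes(n_people, deceiver_rate, sensitivity, specificity):
    """Return (true positives, false positives) for a screening scenario."""
    deceivers = n_people * deceiver_rate
    honest = n_people - deceivers
    true_positives = deceivers * sensitivity        # deceivers correctly flagged
    false_positives = honest * (1 - specificity)    # honest travelers wrongly flagged
    return true_positives, false_positives

# Mass screening at an airport: assume only 0.1% of travelers intend to deceive.
tp, fp = screening_outcomes(n_people=100_000, deceiver_rate=0.001,
                            sensitivity=0.80, specificity=0.90)
print(f"Mass screening: {tp:.0f} true alarms vs {fp:.0f} false alarms")
# -> roughly 80 true alarms drowned out by about 10,000 false alarms
```

Under these assumed numbers, the vast majority of people flagged by the system would be honest travelers, which is one way to read why the researchers draw such a sharp line between mass screening and targeted use in individual cases.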
Warning to politicians
Finally, the two researchers advise that AI-based deception detection should only be used in highly structured and controlled situations. Even though there are no clear indicators of lying, such situations make it possible to minimize the number of alternative explanations. This increases the likelihood that differences in behavior or in the content of statements can be attributed to an attempt to deceive.
Suchotzki and Gamer complement their recommendations with a warning to politicians: “History teaches us what happens if we don't meet strict research standards before methods to detect deception are introduced into real life.”
The example of the polygraph shows very clearly how difficult it is to get rid of such methods, even when evidence later accumulates of their low validity and of systematic discrimination against innocent suspects.