By comparing thousands of emails containing lies and truths, researchers created a text analytic algorithm that can pick up on the “linguistic cues” and patterns that liars use to cover up and present themselves as legitimate to their victims.
And by looking at the types of words, structures and context people use when lying in emails, the researchers said it was possible to use the algorithm to detect lies in other online content, whether dating profiles or visa applications.
Speaking about the study, Tom van Laer, a researcher and senior lecturer in marketing at City University London, said: “Humans are startlingly bad at consciously detecting deception. Indeed, human accuracy when it comes to spotting a lie is just 54%, hardly better than chance.
“Our digital lie detector, meanwhile, is 70% accurate. It can be put to work to fight fraud wherever it occurs in computerised content and as the technology evolves, its Pinocchio warnings can be wholly automated and its accuracy will increase even further. Just as Pinocchio’s nose reflexively signalled falsehood, so does our digital lie detector.”
How the lie detector algorithm works
The researchers’ findings show that liars tend to use fewer personal pronouns, such as “I” and “he/she”, as a way to disassociate themselves from the lie.
Instead, they use more second-person pronouns like “you” and “your”, alongside achievement words and adjectives, to make readers feel included and flattered rather than like the targets of a deception ploy.
This method also aims to throw users off the scent of a lie, by providing as much positive information around the subject as possible, particularly with the use of adjectives.
Liars also tend to avoid spontaneity in their messages – a conclusion the researchers drew from the lack of variety in cognitive process words, such as “cause”, “because” and “know”, within deceptive messages. This is because carefully composed messages are less likely to give a lie away.
Another method liars tend to use is the deployment of function words – the everyday words that the receiver themselves uses regularly in an ongoing conversation.
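The cues above – pronoun choice and the variety of cognitive process words – are the kind of features a text analytic algorithm can count directly. The sketch below is a minimal illustration of that idea, not the researchers' actual system: the word lists and the per-100-word rate metric are assumptions chosen for clarity, as the study's real lexicons are not published in this article.

```python
import re
from collections import Counter

# Illustrative word lists only; the study's actual lexicons are an
# assumption here, not something this article specifies.
FIRST_PERSON = {"i", "me", "my", "mine", "he", "she", "him", "her"}
SECOND_PERSON = {"you", "your", "yours"}
COGNITIVE = {"cause", "because", "know", "think", "consider"}

def cue_frequencies(text):
    """Return per-100-word rates of the linguistic cues described above."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {"first_person": 0.0, "second_person": 0.0, "cognitive": 0.0}
    counts = Counter(words)
    total = len(words)

    def rate(lexicon):
        # Occurrences of any lexicon word per 100 words of text.
        return 100.0 * sum(counts[w] for w in lexicon) / total

    return {
        "first_person": rate(FIRST_PERSON),
        "second_person": rate(SECOND_PERSON),
        "cognitive": rate(COGNITIVE),
    }

# A message low on "I"/"he"/"she", high on "you"/"your" and low on
# cognitive-process variety would pattern with the liar profile above.
print(cue_frequencies("You know your account is safe because you won"))
```

A real classifier would feed rates like these, among many other features, into a trained model rather than reading them off directly; this sketch only shows how the raw cues become numbers.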
Speaking about possible use cases, van Laer said: “Consumer watchdogs can use this technology to assign a ‘possibly lying’ score to advertisements of a dubious nature. Security companies and national border forces can use the algorithm to assess documents, such as visa applications and landing cards, to better monitor compliance with access and entry rules and regulations.
“In fact, the potential applications go on and on. Political blogs can successfully monitor their social media interactions for textual anomalies, while dating and review sites can classify messages submitted by users on the basis of their ‘possibly lying’ score.”