
What Are the Best Nationalities and Personalities for Detecting AI Text?

Linguistic and psychological characteristics can help detect fake text.

Computer-based tools can now write almost anything. At a basic level, a topic is chosen, some parameters are set for searching the previously published literature, and the tool is asked to compose the piece – texts, posts, student essays, review articles, even blog posts! Of course, this is not the use that was envisioned but, as with almost everything else the digital world has produced, the intended use has in some cases been subverted for nefarious ends. The initial idea was that such tools could summarise vast amounts of information quickly, giving professionals executive summaries of a field in which to ground their decisions. However, such tools can be used to fake scientific papers and student essays – both bad enough in themselves – or to generate millions of fake social media posts that impact public opinion and subvert political processes. Thus, these can be tools of deception, as well as of information. The questions are: how is it possible to recognise fake text, and who would be best at doing so?

Although the term ‘artificial intelligence’ is often employed in this context, it is a misnomer. Such tools are not ‘intelligent’ in any meaningful sense, nor do they mimic the processes of biological learning and behaviour. The tools are based on algorithms that attempt to predict which word or phrase is most likely to occur next in natural speech. Such predictions are based on exposure to large quantities of text. This is not how biological learning occurs – biological learning is not statistical or algorithmic in nature; prediction is a shortcut that makes it easier to approximate the products of that process. Given this, ‘fake behaviour machines’ may be better terminology. In any case, differences in behavioural patterns between the machine and the biological open possibilities for spotting fake text. Many may remember the empathy test used to spot androids in Philip K. Dick’s novel, Do Androids Dream of Electric Sheep? – the bounty hunters became ‘blade runners’, and the androids ‘replicants’, in the film adaptation.
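
To make the point concrete, here is a deliberately tiny sketch of next-word prediction at its crudest – a bigram model, written in Python with a made-up twenty-word ‘corpus’ of my own. Real systems train neural networks on billions of words, but the underlying principle of choosing a statistically likely continuation is the same.

```python
from collections import Counter, defaultdict

# A toy training corpus (invented for illustration only).
corpus = ("the tool reads a lot of text and the tool then predicts "
          "the next word from the text it has read").split()

# Count which word follows which word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    options = following.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

def generate(seed: str, length: int = 8) -> str:
    """Chain predictions together to 'write' new text."""
    words = [seed]
    for _ in range(length):
        words.append(predict(words[-1]))
    return " ".join(words)

print(generate("the"))
# -> "the tool reads a lot of text and the"
# Note how quickly it falls back into loops of common phrases.
```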

The failures of fake behaviour machines in producing human text are two-fold. Firstly, they are conspicuously ignorant of the wealth of information in any given field that resides in institutional folklore, rather than in written form. As a result, they tend to make simple mistakes. Secondly, they tend to write in quite short, rather bland, somewhat repetitive sentences that use a lot of common words. Although these behavioural differences raise possibilities for 21st-century digital-text blade runners, it must be noted that these two flaws are also about the closest fake behaviour machines get to being human – humans make mistakes, and can be linguistically quite boring, too. Given this, spotting a fake behaviour machine by eye can be problematic for many.
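
These textual cues – sentence length and vocabulary variety – can be quantified crudely. The Python sketch below (the function name and toy sample are my own) computes two standard stylometric measures: mean sentence length and type-token ratio. Short, repetitive, common-word text scores low on both, though, as noted, so does some human writing.

```python
import re

def stylometrics(text: str) -> dict:
    """Two crude stylometric cues: average sentence length (in words)
    and type-token ratio (share of distinct words in all words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

sample = ("The results are clear. The results are good. "
          "The method is simple. The method is good.")
print(stylometrics(sample))
# {'mean_sentence_length': 4.0, 'type_token_ratio': 0.5}
```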

There are two options for developing techniques to spot fake text: use another machine, or train humans to do the job efficiently. The former has been quite extensively investigated, with machine-learning tools set to spot fake behaviour texts by employing algorithms to predict what a fake would say – a thief to catch a thief, as it were. However, unlike Cary Grant in To Catch a Thief, the results are mixed, and not confidence-inspiring1. The second option has not been explored as much, and humans are usually tested only as a control to see if machines are better2. However, if pursued, an important question is: which people might have a ‘natural’ edge in spotting digital fakes? What backgrounds and characteristics would optimise their chances of success?

Spotting mistakes takes an expert, and people are rather good at spotting mistakes in fake textual behaviour when they are experts in that area2. Scientists, who write a lot of papers, can see simple errors in a fake science text very quickly – incorrect references, for instance. This suggests that experts can be used for fake spotting, but it may mean a lot of people are needed, as computer fake texts cover a lot of domains. Alternatively, a very well-educated all-rounder may be a good bet as a fake behaviour machine spotter.

Another, so far unexplored, possibility is premised on the hypothesis that those who do not, themselves, write text like a machine may be better at spotting fake behaviour text. The jarringly unexpected nature of text produced by the fake may stand out more to those who do not produce such textual forms. Certainly, living organisms typically pay more attention to something that is unexpected or novel3. This produces two questions: which languages are least like fake behaviour, and which personality types use language least like fake behaviour machines? There isn’t a great deal of literature to go on, but there are some indications.

In terms of which language speakers may most easily spot a fake, there are some suggestions that a UK-English speaker, and potentially a Spanish speaker, would be good at the job1,4. With apologies to US-English users, that variety of English is now just too simplistic to have a hope in this area5. However, language complexity is itself a complex thing to measure. Language has multiple dimensions, and there is no universally accepted metric of complexity that distinguishes complex languages from simple ones, which could point to potential blade runners for fake text.

One possibility is to apply Kolmogorov complexity to establish a metric for language complexity4, and to select fake spotters from among those using complex languages. This is a mathematical method that can assess the morphological (form) and syntactical (structure and grammar) aspects of language. One study assessed a range of languages using translations of the Bible, Alice in Wonderland, and newspaper articles4, and found that English was morphologically simple but syntactically the most complex language. Many European languages were somewhere in the middle, while West Saxon and Finnish were morphologically complex but syntactically simple. When machine fake behaviour spotters are tested, they spot Spanish and French fakes more easily than German fakes1. The same study has not been done for human fake spotters. Sadly, fakes come in all languages, and there are problems in having a Spanish speaker look for English fakes, however good their English as a second language.
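
Kolmogorov complexity itself cannot be computed exactly, so studies of this kind approximate it with off-the-shelf file compression: the harder a text is to compress, the less redundant – and, on this measure, the more complex – it is4. Below is a minimal Python sketch of that idea using the standard zlib library; the function names and toy passages are my own, and real analyses use large parallel corpora and more refined distortion techniques.

```python
import zlib

def compressed_size(text: str) -> int:
    # Size of the compressed UTF-8 text: a rough, computable
    # stand-in for Kolmogorov complexity, which is uncomputable.
    return len(zlib.compress(text.encode("utf-8"), 9))

def compression_ratio(text: str) -> float:
    # Lower ratio = more redundancy = simpler, more repetitive text;
    # higher ratio = less predictable, more 'complex' text.
    raw = len(text.encode("utf-8"))
    return compressed_size(text) / raw if raw else 0.0

# Toy passages for illustration only (not real corpora).
repetitive = ("The results are good. The results are clear. "
              "The results are useful. The results are strong.")
varied = ("Serendipitous findings, briskly cross-validated against "
          "archival folklore, overturned entrenched assumptions.")

print(f"repetitive: {compression_ratio(repetitive):.2f}")
print(f"varied:     {compression_ratio(varied):.2f}")
# The repetitive passage compresses further, giving the lower ratio.
```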

What of the personality of the individual? Which personalities use language in the most complex way, and so may be able to spot differences from their own usage? Sadly, there is little on this topic, and what is available is not tremendously well developed. Of all the personality types, extroverts use more words per sentence6,7, and tend to use more complex words once they have acquired them7; while this is good, they are generally more repetitive – as you will know if you’ve ever been cornered by the ‘life and soul’ of the party! Introverts have a broader vocabulary, and use words more unexpectedly8. On this basis, perhaps an introvert (with some hidden extrovert tendencies) may have an advantage – certainly, introverts tend to be more conscientious than extroverts, so they may spot errors more effectively, and not burn out as quickly9.

This all suggests that an introverted, conscientious, UK-English speaker, who has had a well-rounded education, may be the best bet for spotting machine fake behaviour. This sounds, for all the world, like the stereotype of a traditional British military intelligence officer of the ‘old school’. Perhaps the best defence against the pernicious new is to conserve the old.

References

1. Chaka, C. (2023). Detecting AI content in responses generated by ChatGPT, YouChat, and Chatsonic: The case of five AI content detection tools. Journal of Applied Learning and Teaching, 6(2).

2. Ma, Y., Liu, J., & Yi, F. (2023). Is this abstract generated by AI? A research for the gap between AI-generated scientific text and human-written scientific text. arXiv preprint arXiv:2301.10416.

3. Pearce, J.M., & Hall, G. (2014). Stimulus significance, conditionability, and the orienting response in rats. In Attention and information processing in infants and adults (pp. 137-160). Psychology Press.

4. Szmrecsanyi, B. (2016). An information-theoretic approach to assess linguistic complexity. Complexity, Isolation, and Variation, 57, 71.

5. Algeo, J. (1991). A meditation on the varieties of English. English Today, 7(3), 3-6.

6. Mairesse, F., & Walker, M.A. (2011). Controlling user perceptions of linguistic style: Trainable generation of personality traits. Computational Linguistics, 37(3), 455-488.

7. Spitzley, L. A., Wang, X., Chen, X., Burgoon, J.K., Dunbar, N.E., & Ge, S. (2022). Linguistic measures of personality in group discussions. Frontiers in Psychology, 13, 887616.

8. Furnham, A. (1990). Language and personality. In Giles, H., & Robinson, W. (Eds.), Handbook of Language and Social Psychology. Wiley.

9. Hagen, T., De Caluwé, E., & Bogaerts, S. (2023). Personality moderators of the cross-sectional relationship between job demands and both burnout and work engagement in judges: The boosting effects of conscientiousness and introversion. International Journal of Law and Psychiatry, 89, 101902.
