Understanding harms caused by generative AI in mass deception

This research priority encourages exploration of the psychological impact of AI-driven deception.

Research Priority 1 (RP1)

This research priority encourages exploration of the psychological impact of AI-driven deception, such as deepfakes, on receiver behaviours, beliefs, and mental wellbeing. As AI-driven content creation tools advance, they enable threat actors to orchestrate sophisticated campaigns of misinformation and deception at large scale. These technologies, leveraging advances in machine learning, also allow for highly realistic manipulation of visual content, such as deepfakes, posing significant risks to individuals' belief systems, their susceptibility to cybercrime, and their mental health. Addressing these challenges and mitigating harmful consequences requires an evidence-based approach that explores the intricate social, cognitive, and affective mechanisms involved in the psychology of AI-driven deception.
This research priority encourages proposals that focus on examining one or more of the following questions:

RP1.1 What is known in the research and grey literature about the key factors that contribute to computer-mediated deception over time? What are the mediating processes and boundary conditions?

RP1.2 Can humans tell when they are conversing with an LLM? Can humans tell when they are being deceived by an LLM? What cognitive, demographic, situational, and contextual factors affect this?

RP1.3 What are the most effective and efficient ways to interrupt and disrupt these processes at multiple levels and modalities (e.g., human-to-human, machine-to-human, and machine-to-machine target levels)?

RP1.4 Are there unique short-term and long-term effects on individuals who are deceived by tools that rely on generative AI?

How will this research benefit community resilience and specialist treatments?

By uncovering the psychological mechanisms behind AI-driven deception and its effects on mental wellbeing, memory, and belief systems, this research can inform the redesign of support systems for cybercrime victims and contribute to the development of robust legislation and policies to combat deepfake misuse and other AI-driven deception. Additionally, educational programmes and frameworks for identifying credible information will empower individuals and organisations to mitigate the risks associated with AI technologies, fostering a safer online environment.