
AI Persuasion: More Effective Than Humans?

Persuasion can be considered a complex and uniquely human ability: the subtle choice of wording that lands just right, an inkling of empathy in an argument, unspoken understanding between speaker and receiver. However, it appears that machines have this ability too. Recent research published in Science Advances on the AI language model GPT-3 has established that it is, in some cases, more effective at informing (and misinforming) people than humans are. Readers judged its statements about various social and scientific topics to be as credible as, and sometimes more persuasive than, human-written versions. The most interesting finding is that when readers were informed that pieces they had read were written by AI, their trust didn't collapse but shifted, depending on context, expectation and bias. Perhaps the machine really does persuade.

This study suggests that there is little reliable differentiation between what is human and what is machine, despite years of what could be considered intuitive behaviour: learning to trust a source through tone, authenticity and credentials. Bias about the use of AI can affect perceived credibility both positively and negatively; it depends on what a person believes and how accepting they are of AI being integrated into daily life. But what of persuasion itself? If it does not rely on human intent, what becomes of trust and authenticity?

A leader may believe themselves to persuade through authenticity, the idea that a following can be built by sharing truth or personality, but machines may be able to create balanced, rational and potentially more reassuring messages that reduce the impact of authenticity. It may be that 'authenticity' is a style of human connection that is itself a pattern, one that can be learned, replicated and optimised. Persuasion and leadership communication may no longer be exclusive to humans, but instead available to machines that work more efficiently, learning empathy faster than humans can practise it and creating the illusion of leadership through the same persuasive tools that humans use.

There may seem to be an easy solution to the use of artificial intelligence in persuasive contexts: label it as AI-generated, or inform the reader of the tool. Yet this creates more issues. Trust in the source won't necessarily be restored; sometimes disclosure damages it further. The discussion becomes a case of learning how to notice and read AI: to be aware of when something has been created by an artificial voice rather than expressed by a human one, to sense when persuasion is happening and to question who is doing the persuading. Competency in reading AI-generated work starts with self-awareness: knowing the impact of the machine, being transparent and humble about personal use, remembering that AI has the ability to persuade and questioning how it works.

AI has often been treated as a tool, as something to be controlled, but persuasion flips that narrative. If a tool can convince others for us, it begins to act more like an agent than a device. If it can shape beliefs, emotions and decisions, does that make it more a participant than a servant? This applies directly to leadership: using artificial intelligence to draft a company announcement, create a marketing post or summarise feedback is more than delegating a task; it is delegating influence.

This poses some questions about the use of AI in influential spaces. Does it harm trust between people? If it is better at persuading than a human, is that a threat or an opportunity? If authenticity can be generated, how can realness be measured in the future? How much of each individual's 'voice' has already been shared with machines?

Perhaps the most important action to take is not resisting or worshipping AI, but staying awake to its capabilities and its presence in society. While machines may be able to persuade, this may only work on those who stop questioning credibility. AI writing can feel 'too smooth', lending prose an uncanny-valley tone, and the more frequently it is challenged, the easier it may be to spot. Within depth psychology, relationships and environments serve as a mirror to the self, revealing aspects of the unconscious mind in order to understand one's own motivations and emotions at a deeper level. AI presents itself to humanity as that archetype of the double, the mirror that speaks back. Its patterns project back the patterns in human nature, and the similarity can alter the perception of which voice is real and which is fabricated. To be influenced by a machine is to connect with the parts of the self reflected back in its artificially generated words.



Find the full research article, published in 2023, here: AI model GPT-3 (dis)informs us better than humans | Science Advances


Follow The Heretic for more discussions about the use of AI and its ever-growing influence on daily life.

All rights reserved by Heresy Consulting Ltd 2023. Copyright is either owned by or licensed to The Heretic, or permitted by the original copyright holder. Reproduction in whole or in part without written permission is strictly prohibited. Heresy Consulting Ltd recognises all copyright contained in this issue and we have made every effort to seek permission and to acknowledge the copyright holder. The Heretic tries to ensure that all information is correct at the time of publishing but cannot be held responsible for any errors or omissions. The views expressed by authors are not necessarily those of the publisher. Registered in England and Wales No. 8528304. Registered Office: The Ashridge Business Centre, 121 High St, Berkhamsted, Herts, HP4 2DJ.
