
Harmful AI? Drawing the Line: How AI May Create Technology-Mediated Trauma

Technology is increasingly used in many professional fields, underpinned by the idea that it has the power to help: freeing up time, enhancing creativity, curing inefficiency and creating a smarter, fairer world. The growing discourse, however, is that the technological revolution, especially where artificial intelligence is concerned, has its set of shadows. These shadows may fall heaviest on the people already standing in the dark, leading to the question: is generative AI increasing the risk of technology-mediated trauma among vulnerable populations? Abdulai’s paper (2025) discusses the damaging potential of AI-generated health advice, raising questions about whether the use of technology in healthcare is as beneficial as it sounds.



Abdulai’s paper points to the ways in which AI systems, no matter how well designed or how ‘good’ the intentions, can re-trigger or create forms of psychological harm in those already vulnerable or susceptible. The potential for misinterpretation of information can have damaging effects. For example, a refugee assessed by an AI-driven mental health bot and offered a care plan written by an algorithm that reduces their humanity to data is not being helped; they are facing discrimination and dehumanisation. Alternatively, what happens when a system cannot comprehend the tone, silences and tremors behind the words of a survivor of abuse? Perhaps these are not just hypothetical situations, but real concerns about delegating mental health responsibility to machines while people face the most human parts of their lives. While AI is widely embraced as a creative partner, it is as capable of recreating harm as it is of doing good.


Generative AI encodes the worldview of its creators, reflecting both its training data and the world as seen through the eyes of its designers. Vulnerability comes not only from life’s trying times, but also from care being outsourced to systems that cannot understand on a human level, and that people cannot fully understand either. When an algorithm writes a plan for ‘good care’, who is accountable when those plans go awry and a person is harmed? It cannot be assumed that technological progress and moral progress are synonymous, even if efficiency may feel safer than immeasurable empathy.


Technology is often assumed to be a ‘neutral’ tool, but if hundreds of datasets are teaching AI to become what it is, neutrality cannot be programmed in. Instead, it becomes a mirror of its creators and its users. In healthcare, perhaps this amplifies the marginalisation already present in current systems. In institutions, it may be the prioritising of throughput over tenderness. In leaders, it is innovation without introspection. The ease of the perfect answer offered by generative AI may be a reflection of our own complicity, with more focus on precision than on the human capacity to care.


The question is not about what technology does but what it reveals about people. Generative AI reveals a distrust of slowness and ambiguity, allowing data to make meaning neat and orderly. The human experience, however, is not supposed to be ‘tidy’; it cannot be, especially for those living in marginalised communities, so it is near impossible to expect a tidy answer for such a varied world. It is plausible that technology is used as a defence against the inability to sit with suffering, be it our own or that of others. AI may have data for most scenarios, but when does it show the ability to understand what is and isn’t appropriate?



Generative AI does have potential in professional environments like healthcare and education, but without intentional trauma-informed oversight, it risks amplifying the very inequalities it was designed to solve. Perhaps these systems require a combination of considered use and ethical literacy from both the creator and the user. This does not mean that AI should not be utilised, but rather that we should ask why it is being used and how technology-mediated trauma can be mitigated.

Through the lens of depth psychology, what people repress will eventually return, often in distorted form. There is a case to be made that the creation of machines that cannot feel reflects a humanity afraid of feeling too much. In the cultural desire to control complexity and outsmart vulnerability, letting technology offer healthcare advice permits people to avoid the ache of being human. Yet the psyche cannot be overridden. Exiled empathy may surface as alienation in social groups, burnout in leaders or mistrust in patients. This is not a case for rejecting the machine, but for being aware of which elements of life do not need to be optimised, or rather, cannot be. With every development in AI, the technology becomes a mirror to the self, often to its shadow side: how many cycles of a negative feedback loop will it take before human involvement is removed altogether?


 Read the full paper by Abdulai, published in Nursing Inquiry, 2025, by following this link:


Follow The Heretic for more deep reflection on the impact of AI and its influence on day-to-day professional and personal experience.




