Words as Weapons: Dehumanisation and the End of Restraint
18/02/2026

At this year’s Munich Security Conference (MSC), the European Institute of Peace and the International Committee of the Red Cross tackled the issue of dehumanisation in a closed-door discussion, ‘Words as Weapons: Dehumanisation and the End of Restraint.’ The Institute’s Executive Director, Michael Keating, moderated a discussion with speakers Helga Maria Schmid, President of the European Institute of Peace and Vice President of the MSC; Mirjana Spoljaric Egger, President of the International Committee of the Red Cross; Archbishop Paul Gallagher, Secretary for Relations with States for the Vatican; and Loubna Hadid, AI expert and CEO of Decenture.
The discussion explored how social media, AI and algorithms exacerbate the impact of dehumanising narratives, increasing the vulnerability of civilians, aid workers and journalists, eroding international humanitarian law (IHL) and obstructing peacemaking efforts.


Dehumanisation fosters a toxic “us vs. them” mentality that is then exploited to mobilise populations, fuel hatred, and legitimise systemic violence. It is far from a ‘new’ phenomenon: it stems from fear, discrimination, xenophobia, and extreme nationalism. Historically, dehumanisation has been a precursor to mass violence, from the Holocaust to the genocides in Rwanda and beyond. Many contemporary conflicts, including Sudan, Israel-Palestine and Myanmar, are theatres of active, unrestrained and devastatingly deadly dehumanising narratives.
Dehumanisation has a particularly corrosive effect on the values and norms underpinning IHL and, as a direct result, on peace negotiations, which depend on developing empathy and building trust: as long as certain groups are seen as a threat, or as undeserving of justice and dignity, efforts at reconciliation and peacemaking will be undermined. Unless and until the drivers and agents of dehumanisation are addressed, any peace agreement risks being fragile: the underlying tensions and perceptions of the “other” remain unresolved, and different groups will continue to see each other as existential threats rather than potential partners in peace.



Dehumanising narratives are amplified through social media platforms, where algorithms and AI create a megaphone effect. Once a narrative spreads, it becomes extremely difficult to stop.
These systems are not designed neutrally: social media algorithms are trained to elicit a reaction from their users, and when outrage can be monetised, platforms have little incentive to counter dehumanising narratives. By design, social media amplifies polarised and extreme views and hostility, spreading them like wildfire.
Responsibility for combating dehumanising narratives, and the practices of platforms that propagate them, rests not only with states but also with the private sector, not least tech companies and the defence industry, and with religious leaders. A different approach to moderating content on social media is both necessary and feasible – for example, strengthening safeguards against the infiltration of ‘bots’, which allow hate speech to spread rapidly. Other steps include education and awareness-raising about how AI, algorithms and social media platforms operate and the damage they can inflict if used irresponsibly, as well as initiatives focused on accountability, consensus-building, and creating space for personal encounters.
Photo credits: MSC/Stephan Goerlich




