
Integrating Technology and the Human Dimension in Peacebuilding

01/08/2024

By Edoardo Camilli

When we talk about integrating technology with the human dimension of peacebuilding and conflict, it’s important to recognise that the current generation experiences conflict differently from previous ones. This is not only due to technological advancements but also because the dynamics of conflicts themselves have changed. We now see the emergence of new actors and different forms of disinformation. Young people are often the first victims of these conflicts but also the first to take to the streets, creating activist movements and demanding peace from their governments.

The conflict in Ukraine illustrates how dynamic and hybrid modern conflicts can be. Understanding these conflicts deeply and leveraging technology to do so is crucial. Artificial intelligence (AI) and large language models can assist analysts by collecting and analysing data more efficiently, providing policymakers with better insights at tactical and strategic levels, and understanding the narratives driving the conflicts.

However, while AI and technology are useful, they also pose significant risks, such as the spread of disinformation. A notable example is the incident in June 2022, when the mayors of Berlin, Madrid and Vienna held separate video calls with someone they believed to be the mayor of Kyiv, Vitali Klitschko. It took the mayor of Berlin 15 minutes to realise she was speaking to a deepfake. Following the incident, the office of Berlin’s mayor, Franziska Giffey, released a statement saying: “There were no signs that the video conference call wasn’t being held with a real person.” This incident highlights how easily technology can be used to deceive and shape narratives, affecting conflict resolution processes.

There is no single solution to prevent the spread of hate speech and disinformation. It requires a multifaceted approach, including legal measures, technological developments and, most importantly, education in critical thinking. Social media providers must step up their content moderation where necessary, and AI can help flag content for review by human fact-checkers. But the most crucial component is education. People must learn to use media responsibly, verify information, understand the agenda behind it, and distinguish between what is real and what is fake.

For instance, AI currently struggles with accurately reproducing human hands. If you see an AI-generated image where the hands are blurred or have an unusual number of fingers, it’s a sign that something is off. Understanding these details helps in identifying fake content.

The future of AI is uncertain. Even the most advanced developers in Silicon Valley express concerns about AI’s capabilities and where it might lead. The rapid advancements in large language models, robotics, and autonomous weapons are creating a landscape where the implications are not fully understood.

Initiatives like the EU AI Act are crucial but also create a global divide. Countries with stringent regulations might fall behind in certain technological capabilities compared to those without them. This competition complicates efforts at conflict resolution and understanding.

In these dark times, navigating these challenges requires concerted effort from politicians, tech experts and civil society. Educating the public about how AI works, its biases and its potential for discrimination is essential. For example, AI used in HR could exclude people of colour if the training data is biased toward white Americans or Europeans.

Transparency in AI development and data usage is vital to ensure it helps us find better solutions.

Integrating technology with the human dimension of peacebuilding is complex but necessary. It demands a balanced approach, combining legal, technological, and educational strategies to manage the risks and leverage the benefits of AI and other technologies in promoting peace and resolving conflicts.

Edoardo Camilli is CEO and Co-Founder, Horizon Intelligence (Hozint) and Friends of Europe’s European Young Leader (EYL40)