Organising Politically Salient Visual Disinformation
On 23rd March, Kristina Hook (Kennesaw State), Walter Scheirer, Ernesto Verdeja and Tim Weninger (Notre Dame) addressed Hub members on the topic of “Organising Politically Salient Visual Disinformation.” Walter began by explaining his research background in artificial intelligence and the analysis of visual media on the internet. Kristina approaches the topic from the perspective of anthropology and, more specifically, civilian protection. Similarly, Ernesto has a research interest in the prevention of genocide and mass atrocities. Tim’s interest in disinformation follows a background in machine learning and network science.
The group set out three key takeaway points: first, that disinformation extends beyond text, and that images and videos are more evocative and effective than text alone. Second, that only AI systems can parse and organise the overwhelming volume of material at scale. Third, that systems for detecting visual disinformation have long been underdeveloped; this pilot is expanding the tools available to combat disinformation, progress with significant policy implications.
In the first section, Ernesto set out the existing early warning model of disinformation. He also briefly described why actors may actively promote disinformation. Examples of early warning indicators include the fragmentation and radicalisation of elites, coupled with the erosion of civil and political rights, the expanded use of emergency powers and a general increase in dangerous and dehumanising discourse. Reasons for promoting disinformation include the ‘existential othering’ of an opposition or perceived political threat, spreading doubt on specific issues within a polity and, more broadly, fuelling epistemic insecurity. Tim used the 2019 Indonesian election as an example of how online disinformation in the form of memes can stoke political violence. Joko Widodo was re-elected with over 55% of the popular vote, but this did not stop opposition candidate Prabowo Subianto from making unfounded claims of electoral fraud. The ensuing violent protests resulted in the deaths of eight people.
Walter introduced the proof-of-concept system (MEWS) in the second section. MEWS is used for the forecasting, monitoring and evaluation of information. It is useful for identifying patterns, amid the chaos and manipulation of online content, that humans may not notice. The system itself is made up of three components: the data ingestion platform, the AI analysis engine and the user interface. The data ingestion platform scrapes the data and collects surrounding contextual information which may be helpful at later stages. The AI analysis engine seeks to identify when an image has been deliberately adjusted, manipulated or spliced from its original form. Finally, the user interface is focused on making the system simple for those without a background in computer science or who are not comfortable working with back-end code. Walter continued by explaining that MEWS analyses content at five levels. These are motifs, manipulations, objects, faces and texts.
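The three-component pipeline described above can be sketched in outline. The code below is a hypothetical illustration of that structure, not MEWS itself: every name (`ImageRecord`, `ingest`, `analyse`, `summarise`) and the stubbed analysis logic are assumptions based only on the talk's description of ingestion, analysis and user-facing presentation.

```python
from dataclasses import dataclass, field

# The five levels of analysis described in the talk.
LEVELS = ("motifs", "manipulations", "objects", "faces", "texts")

@dataclass
class ImageRecord:
    """An image plus the contextual metadata collected at ingestion."""
    url: str
    context: dict = field(default_factory=dict)

@dataclass
class AnalysisResult:
    """Per-level findings for one image; keys are the five levels."""
    image: ImageRecord
    findings: dict = field(default_factory=dict)

def ingest(url: str, **context) -> ImageRecord:
    """Data ingestion platform: collect the image and surrounding context."""
    return ImageRecord(url=url, context=context)

def analyse(record: ImageRecord) -> AnalysisResult:
    """AI analysis engine: run each of the five levels (stubbed here).

    A real engine would run detectors for splicing, object and face
    recognition, OCR, and recurring visual motifs.
    """
    findings = {level: [] for level in LEVELS}
    return AnalysisResult(image=record, findings=findings)

def summarise(result: AnalysisResult) -> str:
    """User interface layer: present results without exposing back-end code."""
    flagged = [lvl for lvl, hits in result.findings.items() if hits]
    return f"{result.image.url}: flagged at {flagged or 'no'} levels"
```

The point of the separation is the one Walter made: analysts interact only with the summary layer, while the detection machinery stays behind the `analyse` boundary.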
Finally, Kristina spoke about what academics and practitioners can expect next. She warned that as these AI systems become more effective, online promoters of disinformation may pivot away from the West and instead focus on driving narratives in Africa. That said, Kristina emphasised that fully functioning AI systems can prove invaluable to policy makers because they will be able to solve the “filling a teacup with a firehose” quandary, enable near-real-time pattern identification of material and flag changing surges and spikes. Together, these benefits can contribute to a more effective system of resource allocation.
We thank Kristina, Walter, Ernesto and Tim for their time and look forward to welcoming Hub members to other events in the future.
Digital MAPS: combining practice-led and academic-led research
On 15th March, Dr Catherine Arthur, Dr Stef Pukallus and Michael Bush (British Council) spoke with Hub members to discuss the Digital MAPS project. Digital MAPS is a program funded by the British Council that supports partners in conducting social media mapping and analysis, and in designing and implementing pilot interventions that counter polarisation and promote inclusivity and openness in the networked public sphere. The DMAPS program is of interest to both practitioners and academics engaged in and researching local peacebuilding, online conflict, the arts and communication.
Michael Bush began by introducing the Digital MAPS program as focused on creating “shared understanding across cultural divides.” The program examines how ongoing discourse within the networked public sphere contributes to conflict drivers and what those in the arts can do to help bridge these major divides. Michael described the project as a “very intensive six months of [establishing the] theoretical foundations…and also the participatory action-research of external partners”.
Catherine then went into further detail about the theoretical foundations of the project. Digital MAPS seeks to promote openness and inclusivity in the digital public sphere and provide an intervention focused on decreasing polarisation in online spaces. The project supports the work of organisations across eight countries (Libya, Palestine, Jordan, Yemen, Iraq, OPT, Tunisia and Syria). The project adopts a broad definition of communication, going beyond factual and textual content to include elements such as performance and visual art. Following this, Catherine explained that the project addressed four points of communication. These were communication that divides (hate speech, misinformation and disinformation), the consequences of weaponised communication, the peacebuilding potential of communication, and peacebuilding and arts as communication.
The discussion then turned to consider how practitioners can repair a breakdown in civility and the practice of civil norms. Stef set out the hybrid approach the team uses to collect the necessary data. The team advocates eight possible responses to digital conflict drivers. These are the establishment of platform rules and terms of service, ethical rules and controls on platforms, changes to platform structures, content moderation, digital literacy, depolarisation and behaviour change programs, personal and collective transformation, and non-violent communication. The first four responses can be defined as techno-mechanic, whilst the latter four are socio-psychological. Stef then described the three principles of discursive civility, how they can be enacted by the individual and how they relate to the research aims of the Digital MAPS project. Stef said, “these principles act as a guarantor for safety in communicative exchanges and therefore, are the building blocks for safe spaces.”
Catherine moved on to explain why those working on the project considered art an essential aspect of communication, both broadly and in relation to the research aims. Examples of art as communication include theatre, music, dance, cinema, poetry and sculpture. Artistic practices also encourage alternative perspectives, creative thinking and imagination. Catherine explained how art can help whole communities move on from violence, conflict and prejudice. To finish, Stef made clear the final positions of the project. First, academic and practitioner-produced theoretical research can serve as a vital source of knowledge to shape future interventions and approaches. Second, all civil societies – whether cooperative or polarised – are deeply communicative in nature. Third, online polarisation can have a direct impact on the offline capacity for peaceful co-operation. Fourth, communicative peacebuilding, if it is to be carried out successfully, demands a hybrid approach. Fifth, the integration of participatory action-research is also key to effective peacebuilding efforts.
At the end of the presentation, Hub members were able to discuss and ask questions about the project and communicative peacebuilding more widely. We thank Michael, Catherine and Stef for their time and look forward to welcoming Hub members to other events in the future.