Understanding Memetic Warfare Tactics: Social Media Tactics in the Russo-Ukrainian War

by @memeology



Too Long; Didn't Read

The methods section details data collection from Twitter, manual annotation of tweets, and the construction of popularity models to understand memetic engagement during the Russo-Ukrainian conflict. It covers variables such as visual elements, narrative types, emotional appeals, and the intent behind social media content, shedding light on how these factors influence content virality and geographic preferences in retweeting.


(1) Yelena Mejova, ISI Foundation, Italy;

(2) Arthur Capozzi, Università degli Studi di Torino, Italy;

(3) Corrado Monti, CENTAI, Italy;

(4) Gianmarco De Francisci Morales, CENTAI, Italy.




3.1 Data collection

To examine the use of memetic engagement by Ukraine during the first 8 months of the 2022 conflict, we begin with a large collection of tweets matching war-related keywords (see Appendix A) spanning from 27 February 2022 to 12 October 2022. By examining this collection, we identify three prominent accounts:

(1) @Ukraine: self-described as “the official Twitter account of Ukraine”, this verified account is labeled as a “Ukraine government organization” by Twitter. It posts war-related and political updates about Ukraine.

(2) @uamemesforces: self-described as the “Source of the best Ukrainian memes”, this unverified account states its goal as “to convey the truth about what is happening now in Ukraine with the help of memes”.[8]

(3) @DefenceU: or “Defense of Ukraine”, is the “Official page of the Ministry of Defense of Ukraine”. Like @Ukraine, this verified account is described by Twitter as a “Ukraine government organization”. Its posts concern battlefield updates and news.

We select the accounts @Ukraine and @DefenceU because of their large following (at the end of the collection period), because they are official accounts not associated with individuals or specific media outlets, and because of the focus on the conflict (many popular Ukrainian accounts are by non-relevant celebrities). Additionally, we select @uamemesforces, the most popular Ukrainian unofficial account that posts memes about the conflict. Note that we are limited in the number of accounts we can study due to the labor-intensive labeling process.

We retrieve all of the posts for these accounts via the Twitter Timeline API. We treat each post with media content as a potential meme, regardless of format. To preserve the content posted by these accounts, we also download the images associated with the posts, along with the lists of users who retweeted or liked each post (see Table 1 for summary statistics). We associate each user with a country in the GeoNames database via their location field (a free-text field optionally filled in by the user). Considering only tweets with media, out of 586 042 users, 44% are successfully mapped to a country. For the geographic analysis, we retain only countries with at least 1000 geolocated retweeting users.
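The country-mapping step can be sketched as a gazetteer lookup over the free-text location field. The `GAZETTEER` dictionary and `geolocate` helper below are hypothetical stand-ins for the real GeoNames data, not the paper's actual pipeline:

```python
# Minimal sketch: map free-text user locations to countries using a
# GeoNames-style gazetteer reduced to a {place_name: country} lookup.
# These gazetteer entries are illustrative, not the real database.
GAZETTEER = {
    "kyiv": "Ukraine",
    "ukraine": "Ukraine",
    "warsaw": "Poland",
    "berlin": "Germany",
    "london": "United Kingdom",
}

def geolocate(location_field):
    """Return a country for a free-text location string, or None."""
    if not location_field:
        return None
    # Normalize, then try the whole string and each comma-separated part.
    text = location_field.strip().lower()
    if text in GAZETTEER:
        return GAZETTEER[text]
    for part in text.split(","):
        part = part.strip()
        if part in GAZETTEER:
            return GAZETTEER[part]
    return None

users = ["Kyiv, Ukraine", "somewhere over the rainbow", "Berlin", ""]
countries = [geolocate(u) for u in users]
print(countries)  # ['Ukraine', None, 'Germany', None]
```

Real location fields are far noisier (emoji, multiple scripts, fictional places), which is consistent with only 44% of users being mappable.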

In order to explore the relationship between Ukrainian memes and each country’s actions towards the conflict, we use the Ukraine Support Tracker data from the Kiel Institute for the World Economy (downloaded on 20 June 2023), which “quantifies military, financial, and humanitarian aid transferred by governments to Ukraine since the end of diplomatic relations between Russia and Ukraine on January 24, 2022”.[9] The dataset provides a quantitative measure of a country’s aid to Ukraine. Finally, to examine the attitude of each country’s population towards Ukraine, we use the Eurobarometer STD98 survey, which took place in winter 2022-2023 in 30 European countries [22]. Specifically, we use QE2, answer 6: “To what extent do you agree or disagree with: providing financial support to Ukraine.”

3.2 Annotation

To understand the visual and narrative attributes of the captured data, all authors perform a manual annotation of a random selection of the tweets. We use all of @Ukraine's tweets and a random sample of roughly 40% of those from the other two accounts, for a total of 1063 tweets (last column of Table 1). We remove self-replies (an account replying to its own post), which have no retweets of their own.

Guided by previous literature, we compose a codebook to capture visual and narrative aspects of the content. First, we adopt the significant features of the content characterization framework proposed by Ling et al. [41], which is used to model the virality of political memes. These attributes include:

(1) Number of panels: multiple or single;

(2) Type of image: photo, screenshot, or illustration;

(3) Scale: close-up, medium shot, long shot, other;

(4) Type of subject: object, character, scene, creature, text, other;

(5) Main attribute of the subject (if character): facial expression, posture, poster;

(6) Character emotion (if character): positive, negative, or neutral;

(7) Contains words: whether image itself (not tweet caption) contains words.

Second, we employ the narrative framework by Karpman [35] later extended by de Saint Laurent et al. [18]. It defines two key dimensions to characterize a narrative: moral quality (benevolent or malevolent) and power (strong or weak), thus resulting in four character archetypes: heroes (benevolent, strong), victims (benevolent, weak), villains (malevolent, strong), and fools (malevolent, weak). We also label the actors mentioned in these narratives (as an open category, guided by an initial set of main actors in the conflict, such as Zelensky and Putin). We further contextualize the narratives by their emotional appeal [8], in particular humor, pride, fear, outrage, and compassion (the label was originally open, but the labelers coalesced on this set). Finally, we record whether there is a specific intent in terms of a call to action or information sharing. Note that, as much as possible, we try to label the narratives as intended from the point of view of the posting account (although some biases may creep in, as we discuss in Section 5). Appendix B reports the full codebook.
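The two narrative dimensions (moral quality and power) described above define a simple 2x2 space of archetypes, which can be sketched as a lookup:

```python
# The drama-triangle framework: moral quality x power yields four
# character archetypes, as described in the annotation codebook.
ARCHETYPES = {
    ("benevolent", "strong"): "hero",
    ("benevolent", "weak"): "victim",
    ("malevolent", "strong"): "villain",
    ("malevolent", "weak"): "fool",
}

def archetype(moral, power):
    """Map a (moral quality, power) pair to its narrative archetype."""
    return ARCHETYPES[(moral, power)]

print(archetype("benevolent", "strong"))  # hero
print(archetype("malevolent", "weak"))    # fool
```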

Using this literature-driven codebook, all authors of this manuscript perform a deductive coding of the selected posts (for more on this process, see Linneberg and Korsgaard [42]). We rely on Google Translate to translate non-English content. All coders first annotate a selection of 50 posts, discuss insights and disagreements, and agree on standard labels for emotional appeal, intent, and actor (though the label remained open). Subsequently, the rest of the posts are annotated by one author each, with uncertain cases discussed collectively. Appendix C reports the inter-annotator agreement on a selection of 48 posts (sampled from each account). On average, the agreement is 0.750 for @Ukraine, 0.604 for @uamemesforces, and 0.667 for @DefenceU (with a Krippendorff’s alpha of around 0.65 for all accounts and tasks), which is high considering the subjective nature of the tasks and the fact that some features were open and could have multiple labels. The annotated dataset is available at
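For readers unfamiliar with the agreement metric, nominal Krippendorff's alpha can be computed from a coincidence matrix of label pairs; this is a minimal sketch (no missing-value handling beyond skipping singly-coded items), not the paper's measurement code:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` is a list of lists: the labels each coder assigned to one
    item. Items coded by fewer than two coders are skipped.
    """
    # Coincidence counts over ordered label pairs, each unit's pairs
    # weighted by 1 / (number of coders in that unit - 1).
    coincidences = Counter()
    for labels in units:
        m = len(labels)
        if m < 2:
            continue
        for v, k in permutations(labels, 2):
            coincidences[(v, k)] += 1 / (m - 1)
    n_v = Counter()  # marginal totals per label value
    for (v, _), w in coincidences.items():
        n_v[v] += w
    n = sum(n_v.values())
    observed = sum(w for (v, k), w in coincidences.items() if v != k) / n
    expected = sum(n_v[v] * n_v[k]
                   for v in n_v for k in n_v if v != k) / (n * (n - 1))
    return 1 - observed / expected

# Two hypothetical coders labeling four posts with narrative archetypes.
units = [["hero", "hero"], ["victim", "victim"],
         ["hero", "victim"], ["fool", "fool"]]
print(round(krippendorff_alpha_nominal(units), 3))  # 0.667
```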

3.3 Popularity model

Using the data collected and annotated as per the previous sections, we construct models to find whether there are trends in (i) what kind of content becomes more popular, and (ii) who it reaches. Specifically, we use linear regression to model the log-transformed number of retweets with the content variables outlined above as independent variables. As the three accounts have posted vastly different numbers of tweets, we weight the data points (tweets) inversely proportionally to their representation in the dataset, so that each account weighs equally on the outcomes. We build several versions of the model to ascertain the benefit of each class of variables: visual [41], actor, emotion, intent, and narrative. A baseline model simply uses the identity of the author (i.e., group fixed effects), the number of followers of that account (log-transformed) as a proxy for the potential audience, and the number of days from the invasion to account for changing interest in the topic over time. We then choose the best model by the Bayesian Information Criterion (BIC), a model selection metric that captures both the unexplained variation in the dependent variable and the number of explanatory variables. The coefficients of this model reveal the associations between these variables and the virality of the content.
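The model-selection step can be illustrated with the Gaussian BIC, which (up to an additive constant) is n·ln(RSS/n) + k·ln(n). The residual sums of squares below are purely illustrative, not results from the paper; they show how a model with many extra variables can lose to a smaller one despite a slightly better fit:

```python
import math

def bic(rss, n, k):
    """Gaussian BIC up to an additive constant: rewards fit (low RSS)
    but penalizes the number of parameters k."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical residual sums of squares and parameter counts for
# nested models fit on the same n = 1000 tweets (illustrative only).
n = 1000
models = {
    "baseline (author + followers + days)": (820.0, 5),
    "+ visual": (700.0, 12),
    "+ visual + narrative": (640.0, 16),
    "+ all variables": (635.0, 30),
}
scores = {name: bic(rss, n, k) for name, (rss, k) in models.items()}
best = min(scores, key=scores.get)
print(best)  # + visual + narrative
```

Note how "+ all variables" barely reduces the RSS over "+ visual + narrative" but pays a large penalty for its 14 extra parameters, so BIC prefers the smaller model.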

Next, we group retweets by the country of the user (for those who can be geolocated) to understand the preferences of the users who spread these messages. We enrich our characterization of the relationship between each country and Ukraine by considering the amount of financial assistance the country provides to Ukraine (normalized by the country’s GDP) and the opinion of its population on this topic (via the Eurobarometer survey). As mentioned above, we consider these measures as different proxies of support for Ukraine’s military efforts. First, we examine the retweeting rate (normalized by population). Then, we compare narrative preferences to the amount of support for Ukraine: are there narratives associated with increased support for Ukraine? We operationalize this preference as the log odds of users from a country retweeting content having a specific narrative. Finally, we examine the geographical distribution of the retweeting rate and narrative preferences across Europe by plotting these on choropleth maps.
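The log-odds operationalization can be sketched as follows; the per-country retweet counts here are hypothetical, used only to show the computation:

```python
import math

def narrative_log_odds(narrative_rts, total_rts):
    """Log odds that a retweet from a country targets a given
    narrative: log(p / (1 - p)) for p = narrative share."""
    p = narrative_rts / total_rts
    return math.log(p / (1 - p))

# Hypothetical (narrative retweets, total retweets) per country.
counts = {"Poland": (420, 1000), "Germany": (250, 1000)}
for country, (k, n) in counts.items():
    print(country, round(narrative_log_odds(k, n), 3))
# Poland -0.323
# Germany -1.099
```

Positive values indicate the narrative accounts for more than half of a country's retweets; comparing these log odds against aid or survey measures then reveals which narratives travel best among supportive audiences.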



This paper is available on arxiv under CC 4.0 license.