How often have you wondered whether, by following certain social media accounts, you are receiving content that is valuable or genuine? Inspired by this question, our team decided to find out how the selection of "friends" on the Twitter platform impacts the quality of information we receive. We investigated how Twitter's hidden recommendation algorithms shape the content we get, and we also touched on the topic of research ethics in the Twitter environment. The research resulted in a published article: "Be Careful Who You Follow: the Impact of the Initial Set of Friends on COVID-19 Vaccine Tweets".

INFORMATION BUBBLES

Nowadays we are becoming more and more aware of how much of the information on social media consists of fake news, conspiracy theories, or manipulated facts.

We live our daily lives in information bubbles. In the real world, information bubbles are created by choosing the people we surround ourselves with or the information we want to hear or avoid. We make choices every day that can limit or expand our perception of the world around us.

In our study, we examined how the initial selection of friends on Twitter (one of the main platforms largely responsible for spreading misinformation) affects what we see on the timeline when engaging with contentious topics of public discourse. As an example, we examined social media interactions related to the COVID-19 pandemic and public sentiment towards vaccines and vaccination programs. What we observed was a strong asymmetry between the anti-vaccine and pro-vaccine sides. We also focused on the ethical considerations of using automated bots to conduct experiments in the real-world environment of a social network of which humans are a part.

ETHICAL ASPECTS OF THE EXPERIMENT

Our bots did not collect any personal or sensitive data about the users they followed. Nevertheless, they operated among humans, and we decided to carefully evaluate the possible ethical risks associated with this way of conducting research.

The deliberate use of deception is the most ethically problematic issue. The bots operate under the guise of ordinary people, and other users, especially those followed by our bots, may mistakenly believe they are real humans using the Twitter platform. However, there is a compelling reason not to disclose the bots' full identity in this case. If a bot's identity were revealed, the followed users would know that it was testing exposure to disinformation about COVID-19 vaccines, and there would then be a very high probability that the bots would be blocked by those Twitter users who oppose vaccination. Understanding how misinformation about COVID-19 vaccines spreads on one of the most important social networks in the world is of great importance, which is why we decided to conduct the study anyway. However, we were careful about ethical issues and tried to adhere to a set of conditions designed to minimize these risks.

DESCRIPTION OF THE STUDY

During our study, we used machine learning to train bots that collected tweets from their timelines for several hours every day (over 5 days, from May 18, 11:00 am to May 22, 5:00 pm). We should add that, according to Twitter's API documentation, the stream we obtained does not reflect the pattern in which ordinary Twitter users access the service. For the purposes of this study, however, we made the simplifying assumption that the timeline accessed by a bot represents the stream of tweets that a normal user would see. Note that there can still be significant differences between the view provided by the Twitter API and the view provided by the regular service interface.
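
To make the setup concrete, below is a minimal sketch of what such a collection loop might look like. It assumes the Tweepy library and the Twitter API v1.1 home-timeline endpoint (which has since been restricted); the helper names make_client, poll_timeline, and collect are our illustration here, not the study's actual code.

```python
import json
import time

import tweepy


def make_client(consumer_key, consumer_secret, access_token, access_secret):
    """Authenticate one bot account against the Twitter API (v1.1)."""
    auth = tweepy.OAuth1UserHandler(consumer_key, consumer_secret,
                                    access_token, access_secret)
    return tweepy.API(auth, wait_on_rate_limit=True)


def poll_timeline(api, since_id=None):
    """Fetch the newest tweets from the bot's home timeline."""
    kwargs = {"count": 200, "tweet_mode": "extended"}
    if since_id is not None:
        kwargs["since_id"] = since_id  # only ask for tweets we have not seen
    return api.home_timeline(**kwargs)


def collect(api, out_path, hours=4.0, interval=300):
    """Poll the timeline every `interval` seconds for `hours` hours,
    appending one JSON record per collected tweet to `out_path`."""
    since_id = None
    deadline = time.time() + hours * 3600
    with open(out_path, "a", encoding="utf-8") as out:
        while time.time() < deadline:
            for status in poll_timeline(api, since_id):
                since_id = max(since_id or 0, status.id)
                out.write(json.dumps({
                    "id": status.id,
                    "time": status.created_at.isoformat(),
                    "author": status.user.screen_name,
                    "text": status.full_text,
                }) + "\n")
            time.sleep(interval)
```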

We created five bots that differed significantly in their initial sets of followed accounts, as shown in the table below:

[Table: the five bots and their initial sets of followed accounts]
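
Purely as an illustration of this setup step, the snippet below shows how a bot's initial friend set could be seeded through the API (reusing the client from the previous sketch). The screen names are hypothetical placeholders, and the fifth bot is omitted; these are not the accounts used in the study.

```python
# Hypothetical seed lists; the placeholder screen names are NOT the
# accounts actually followed in the study, and the fifth bot is omitted.
INITIAL_FRIENDS = {
    "Alfred":  ["pro_vaccine_account_1", "pro_vaccine_account_2"],
    "Jessie":  ["pro_vaccine_account_3", "pro_vaccine_account_4"],
    "Anthony": ["anti_vaccine_account_1", "anti_vaccine_account_2"],
    "Alice":   ["anti_vaccine_account_3", "anti_vaccine_account_4"],
}


def seed_friends(api, bot_name):
    """Make the bot follow every account in its initial friend list."""
    for screen_name in INITIAL_FRIENDS[bot_name]:
        api.create_friendship(screen_name=screen_name)
```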

While the bots initialized as highly pro-vaccine (Alfred and Jessie) "saw" mostly neutral tweets, the bots initialized as highly anti-vaccine (Anthony and Alice) received information strongly supporting their anti-vaccine stance. This suggests that users who are convinced of the efficacy and utility of COVID-19 vaccines do not tend to push this position onto other pro-vaccine users, while anti-vaccine users seem to constantly reinforce their message and position among other anti-vaccine users.
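
The aggregation behind this kind of observation can be sketched as follows: count the stance labels assigned to the tweets each bot saw. The pandas snippet below uses made-up rows, and the column names ("bot", "stance") are our assumptions, purely for illustration.

```python
import pandas as pd

# Hypothetical input: one row per collected tweet, with the bot that saw it
# and the stance label assigned by the trained classifier.
tweets = pd.DataFrame({
    "bot":    ["Alfred", "Alfred", "Jessie", "Anthony", "Anthony", "Alice"],
    "stance": ["neutral", "neutral", "neutral", "anti", "anti", "anti"],
})

# Share of each stance within each bot's timeline.
shares = (tweets.groupby("bot")["stance"]
                .value_counts(normalize=True)
                .rename("share")
                .reset_index())
print(shares)
```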

Of course, the procedure performed by our bots is a simplification of actual human interaction with the Twitter service. It should also be added that the content or source of the tweets themselves is not the only factor contributing to the spread of misinformation. Behavioral events such as retweeting, following, and unfollowing (both those coming from humans and those resulting from the algorithmic procedure built into the bots) can easily be mistaken for social acceptance of and support for fringe ideas and anti-science stances.