Nº 22-58: Fake News in Social Networks

Author: M. Weber, C. Aymanns, J. Foerster, C.-P. Georg
Date: 19 Jul. 2022
Category: Working Papers

We propose multi-agent reinforcement learning as a new method for modeling fake news in social networks. This method allows us to model human behavior in social networks both in unaccustomed populations and in populations that have adapted to the presence of fake news; the latter, in particular, is challenging for existing methods. We find that a fake-news attack is more effective if it targets highly connected people and people with weaker private information. Attacks are also more effective when the disinformation is spread across several agents than when it is concentrated with greater intensity on fewer agents. Furthermore, fake news spreads less well in balanced networks than in clustered networks. We test a subset of these findings in a human-subject experiment, and the experimental evidence supports the model's predictions. This suggests that our model is suitable for analyzing the spread of fake news in social networks.
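The finding that attacks on highly connected people are more effective can be illustrated outside the paper's multi-agent reinforcement-learning framework with a much simpler toy model. The sketch below is an assumption-laden stand-in, not the authors' model: it uses a random graph and a basic independent-cascade spread process, and all function names, network sizes, and probabilities are hypothetical choices for illustration only.

```python
import random

def make_network(n, p, rng):
    # Random (Erdos-Renyi-style) graph as adjacency sets;
    # parameters n and p are illustrative, not from the paper.
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def spread(adj, seeds, transmit_prob, rng, max_steps=20):
    # Toy independent-cascade spread: each newly reached agent
    # passes the story to each neighbor once, with fixed probability.
    infected = set(seeds)
    frontier = set(seeds)
    for _ in range(max_steps):
        nxt = set()
        for u in frontier:
            for v in adj[u]:
                if v not in infected and rng.random() < transmit_prob:
                    nxt.add(v)
        if not nxt:
            break
        infected |= nxt
        frontier = nxt
    return len(infected)

rng = random.Random(0)
net = make_network(200, 0.03, rng)
by_degree = sorted(net, key=lambda v: len(net[v]))
hubs, periphery = by_degree[-5:], by_degree[:5]

# Seed the same story at high-degree vs low-degree agents and
# compare how many agents it eventually reaches.
reach_hubs = spread(net, hubs, 0.2, random.Random(1))
reach_periphery = spread(net, periphery, 0.2, random.Random(1))
print("seeded at hubs:", reach_hubs, "| seeded at periphery:", reach_periphery)
```

On typical random draws, seeding at hubs tends to reach more of the network, echoing the paper's connectivity result, though a toy cascade like this ignores the strategic adaptation that the authors' reinforcement-learning agents capture.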