
Empirical Success in Prebunking

by THE Tech Editorialist, June 14th, 2024

Too Long; Didn't Read

The study defines optimal prebunking as a minimax optimization problem. Algorithm 3, validated through simulations on Chung-Lu models, outperforms baseline methods but faces limitations in network structure assumptions and real-world deviations. Future work aims to extend prebunking optimization to entire social networks.

Authors:

(1) Yigit Ege Bayiz, Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA (Email: [email protected]);

(2) Ufuk Topcu, Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, Austin, Texas, USA (Email: [email protected]).

Abstract and Introduction

Related Works

Preliminaries

Optimal Prebunking Problem

Deterministic Baseline Policies

Temporally Equidistant Prebunking

Numerical Results

Conclusion and References

VIII. CONCLUSION

We define the problem of optimally delivering prebunks to a user as a minimax optimization problem and, under SI propagation assumptions, propose algorithms that guarantee feasibility. In empirical analysis using simulated misinformation propagations on Chung-Lu models, our theoretically backed approach, Algorithm 3, also outperforms the other two baselines. Algorithm 3 is often computationally inexpensive as well, since at each time step it relies only on solving a linear program.
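To make the per-time-step structure concrete, the following is a minimal sketch of the kind of small linear program such a policy could solve at each step, using SciPy's `linprog`. The objective, the coverage constraint, and all variable names (`infection_probs`, `cost_weights`, `coverage`) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import linprog

def prebunk_lp_step(infection_probs, cost_weights, coverage=0.9):
    """One hypothetical per-time-step LP: choose delivery
    probabilities x over candidate slots that minimize delivery
    cost c^T x while keeping expected protection p^T x >= coverage.
    Illustrative only; the paper's LP may differ."""
    n = len(infection_probs)
    c = np.asarray(cost_weights, dtype=float)
    # linprog only accepts <= constraints, so p^T x >= coverage
    # is rewritten as -p^T x <= -coverage.
    A_ub = -np.asarray(infection_probs, dtype=float).reshape(1, n)
    b_ub = np.array([-coverage])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x if res.success else None

# Example: three candidate slots with differing exposure risk and cost.
x = prebunk_lp_step(infection_probs=[0.5, 0.8, 0.3],
                    cost_weights=[1.0, 2.0, 1.5],
                    coverage=0.9)
```

Because each step solves an independent LP of this size, the overall policy stays cheap to run online, which matches the computational-feasibility claim above.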


Our results, however, are limited by the assumptions we impose on the network structure. Real-world misinformation propagation often deviates significantly from the predictions of the SI model, and our models provide feasibility guarantees only under the discrete-time epidemic propagation assumptions. Moreover, we focus solely on delivering optimal prebunks to each user one by one, which differs from optimizing prebunk deliveries over the entire network. For future work, we plan to extend our problem and results to optimizing prebunk deliveries across the entire social network.
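The simulation setting referenced above, discrete-time SI propagation on a Chung-Lu random graph, can be sketched with the standard library alone. The degree sequence, infection probability `beta`, and horizon below are placeholder values, not the paper's experimental parameters.

```python
import random

def chung_lu_graph(weights, seed=0):
    """Sample a Chung-Lu random graph: edge (i, j) is present
    independently with probability min(1, w_i * w_j / S),
    where S is the sum of all weights."""
    rng = random.Random(seed)
    n, S = len(weights), sum(weights)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, weights[i] * weights[j] / S):
                adj[i].add(j)
                adj[j].add(i)
    return adj

def si_spread(adj, sources, beta=0.3, steps=10, seed=0):
    """Discrete-time SI dynamics: at each step, every infected node
    infects each susceptible neighbor independently with
    probability beta. Infected nodes never recover (no R state)."""
    rng = random.Random(seed)
    infected = set(sources)
    for _ in range(steps):
        newly = {v for u in infected for v in adj[u]
                 if v not in infected and rng.random() < beta}
        infected |= newly
    return infected

adj = chung_lu_graph([4, 4, 3, 3, 2, 2, 1, 1] * 4, seed=1)
infected = si_spread(adj, sources={0}, beta=0.5, steps=20, seed=2)
```

A prebunking policy would be evaluated against trajectories like `infected` over many sampled graphs; the monotone, absorbing nature of the SI state is what makes the discrete-time feasibility analysis tractable.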

REFERENCES

[1] Social media fact sheet, Apr. 2021. [Online]. Available: https://www.pewresearch.org/internet/fact-sheet/socialmedia/.


[2] B. Swire-Thompson, D. Lazer, et al., “Public health and online misinformation: Challenges and recommendations,” Annu Rev Public Health, vol. 41, no. 1, pp. 433–451, 2020.


[3] Y. Benkler, R. Faris, and H. Roberts, Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press, 2018.


[4] S. van der Linden, A. Leiserowitz, S. Rosenthal, and E. Maibach, “Inoculating the public against misinformation about climate change,” Global Challenges, vol. 1, no. 2, 2017. DOI: 10.1002/gch2.201600008.


[5] S. Lewandowsky, U. K. H. Ecker, and J. Cook, “Beyond misinformation: Understanding and coping with the “post-truth” era,” Journal of Applied Research in Memory and Cognition, vol. 6, no. 4, pp. 353–369, 2017. DOI: 10.1016/j.jarmac.2017.07.008.


[6] G. Pennycook and D. G. Rand, “The psychology of fake news,” Trends in Cognitive Sciences, vol. 25, no. 5, pp. 388–402, 2021. DOI: 10.1016/j.tics.2021.02.007.


[7] F. Liu and M. Buss, “Optimal control for heterogeneous node-based information epidemics over social networks,” IEEE Transactions on Control of Network Systems, vol. 7, no. 3, pp. 1115–1126, 2020. DOI: 10.1109/TCNS.2019.2963488.


[8] Y. E. Bayiz and U. Topcu, Countering misinformation on social networks using graph alterations, 2022. arXiv: 2211.04617 [cs.SI].


[9] Z. Guo, M. Schlichtkrull, and A. Vlachos, “A Survey on Automated Fact-Checking,” Transactions of the Association for Computational Linguistics, vol. 10, pp. 178–206, Feb. 2022. DOI: 10.1162/tacl_a_00454.


[10] I. Augenstein, C. Lioma, D. Wang, et al., “MultiFC: A real-world multi-domain dataset for evidence-based fact checking of claims,” Association for Computational Linguistics, 2019. DOI: 10.18653/v1/D19-1475.


[11] U. K. H. Ecker, S. Lewandowsky, J. Cook, et al., “The psychological drivers of misinformation belief and its resistance to correction,” Nature Reviews Psychology, vol. 1, no. 1, pp. 13–29, Jan. 2022. DOI: 10.1038/s44159-021-00006-y.


[12] I. Chaturvedi, E. Cambria, R. E. Welsch, and F. Herrera, “Distinguishing between facts and opinions for sentiment analysis: Survey and challenges,” Information Fusion, vol. 44, pp. 65–77, 2018. DOI: 10.1016/j.inffus.2017.12.006.


[13] M. Mohtarami, R. Baly, J. Glass, P. Nakov, L. Màrquez, and A. Moschitti, “Automatic stance detection using end-to-end memory networks,” Association for Computational Linguistics, Jun. 2018. DOI: 10.18653/v1/N18-1070.


[14] M. Del Vicario, A. Bessi, F. Zollo, et al., “The spreading of misinformation online,” Proceedings of the National Academy of Sciences, vol. 113, no. 3, pp. 554–559, 2016. DOI: 10.1073/pnas.1517441113.


[15] S. Shaar, N. Babulkov, G. Da San Martino, and P. Nakov, “That is a known lie: Detecting previously fact-checked claims,” Association for Computational Linguistics, 2020, pp. 3607–3618. DOI: 10.18653/v1/2020.acl-main.332.


[16] A. L. Hill, D. G. Rand, M. A. Nowak, and N. A. Christakis, “Infectious disease modeling of social contagion in networks,” PLOS computational biology, vol. 6, no. 11, 2010.


[17] S. Raponi, Z. Khalifa, G. Oligeri, and R. Di Pietro, “Fake news propagation: A review of epidemic models, datasets, and insights,” ACM Trans. Web, vol. 16, no. 3, Sep. 2022, ISSN: 1559-1131. DOI: 10.1145/3522756.


[18] S. Krishnasamy, S. Banerjee, and S. Shakkottai, “The behavior of epidemics under bounded susceptibility,” SIGMETRICS Perform. Eval. Rev., 2014. DOI: 10.1145/2637364.2591977.


[19] L. Zhao, H. Cui, X. Qiu, X. Wang, and J. Wang, “SIR rumor spreading model in the new media age,” Physica A: Statistical Mechanics and its Applications, vol. 392, no. 4, pp. 995–1003, 2013.


[20] Y.-Q. Wang and J. Wang, “SIR rumor spreading model considering the effect of difference in nodes’ identification capabilities,” International Journal of Modern Physics, vol. 28, no. 05, 2017.


[21] M. Kimura, K. Saito, and H. Motoda, “Efficient estimation of influence functions for SIS model on social networks,” in International Joint Conference on Artificial Intelligence, 2009.


[22] F. Jin, E. Dougherty, P. Saraf, Y. Cao, and N. Ramakrishnan, “Epidemiological modeling of news and rumors on Twitter,” in Proceedings of Workshop on Social Network Mining and Analysis, 2013.


[23] F. Chung and L. Lu, “Connected components in random graphs with given expected degree sequences,” Annals of Combinatorics, vol. 6, no. 2, pp. 125–145, 2002, ISSN: 0219-3094. DOI: 10.1007/PL00012580.


[24] D. Fasino, A. Tonetto, and F. Tudisco, “Generating large scale-free networks with the Chung–Lu random graph model,” Networks, vol. 78, no. 2, pp. 174–187, 2021.


[25] K. Pogorelov, D. T. Schroeder, P. Filkuková, S. Brenner, and J. Langguth, “WICO text: A labeled dataset of conspiracy theory and 5G-corona misinformation tweets,” in Workshop on Open Challenges in Online Social Networks. Association for Computing Machinery, 2021.


[26] J.-L. Guillaume, M. Latapy, and C. Magnien, “Comparison of failures and attacks on random and scale-free networks,” in OPODIS 2004, Springer, 2005.


[27] P. Crucitti, V. Latora, M. Marchiori, and A. Rapisarda, “Efficiency of scale-free networks: Error and attack tolerance,” Physica A: Statistical Mechanics and its Applications, vol. 320, pp. 622–642, 2003. DOI: 10.1016/S0378-4371(02)01545-5.


This paper is available on arXiv under a CC 4.0 license.