Harmful content recommendations on YouTube: a meta-analysis

Authors

  • A. M. Yeleussizova, L.N. Gumilyov Eurasian National University, Astana, Kazakhstan. http://orcid.org/0000-0002-9630-0862

DOI:

https://doi.org/10.26577/HJ.2023.v67.i1.07

Abstract

YouTube's recommender algorithms have become a subject of controversy and debate, with documented incidents of the platform suggesting harmful, most often extremist, content to users. The aim of this study is to test the hypothesis that YouTube recommendations may contain harmful content and contribute to its spread.

The scientific and practical significance of this work stems from YouTube's enormous popularity: knowledge of how its recommender system operates can raise users' information literacy and, as a consequence, guard them against the dangers that harmful information poses to the mind. The study's central task is to promote the idea that all information published on the Internet requires careful fact-checking; by raising users' information literacy, the study also contributes, in a broader sense, to the prevention of terrorist attacks, acts of self-harm, suicide, pedophilia, and similar harms.

The material for the study comprised works published over the last five years that describe at least one type of harmful content. Relevant publications were extracted from the authoritative scientometric databases Google Scholar, Scopus, Web of Science, and PubMed. Applying the inclusion and exclusion criteria yielded 22 studies, which were then subjected to a meta-analysis. In 13 of them, YouTube recommendations contained harmful content and contributed to its spread; in 7, the researchers reported mixed results; and in only 2 did the authors find that the recommendations neither contained harmful content nor contributed to its spread.
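
As a simple illustration of how such a vote count can be summarized, the sketch below tabulates the 13/7/2 split reported above and attaches a 95% Wilson score interval to the share of studies that found harmful recommendations. It is not part of the article; the verdicts list and the wilson_interval helper are hypothetical names introduced here for illustration.

    from collections import Counter
    from math import sqrt

    # Hypothetical per-study verdicts mirroring the totals reported in the
    # abstract (13 harmful, 7 mixed, 2 not harmful; 22 studies in total).
    verdicts = ["harmful"] * 13 + ["mixed"] * 7 + ["not harmful"] * 2

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z ** 2 / n
        center = (p + z ** 2 / (2 * n)) / denom
        margin = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
        return center - margin, center + margin

    counts = Counter(verdicts)
    n = sum(counts.values())
    for label, k in counts.most_common():
        print(f"{label}: {k}/{n} ({k / n:.0%})")

    low, high = wilson_interval(counts["harmful"], n)
    print(f"Share finding harmful recommendations: {counts['harmful'] / n:.0%} "
          f"(95% CI {low:.0%}-{high:.0%})")

A vote count is the crudest form of meta-analytic synthesis: it weights every study equally regardless of sample size or design, so the resulting interval (here roughly 39%-77%) should be read as a rough indication of agreement among studies, not as a pooled effect size.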

Thus, the results of this study establish that YouTube recommendations can contain and spread prohibited harmful content. The authors therefore strongly recommend adjusting the recommendation algorithms so as to shield users from prohibited information, including propaganda of extremism and violence.

Keywords: YouTube, recommendations, recommender system, harmful content, pseudoscientific content, radical content, extremism, pedophilia, meta-analysis.

References

1. Abdrashev, R.M. (2016). Protivodejstvie internet-propagande jekstremizma v Respublike Kazahstan [Counteraction to extremism propaganda on the Internet in the Republic of Kazakhstan]. Bulletin of the Siberian Law Institute of the Ministry of Internal Affairs of Russia, 1 (22): 58-62.
2. Abul-Fottouh, D., Song, M.Y., & Gruzd, A. (2020). Examining algorithmic biases in YouTube’s recommendations of vaccine videos. International Journal of Medical Informatics, 140: 104175. https://doi.org/10.1016/j.ijmedinf.2020.104175
3. Alfano, M., Fard, A.E., Carter, J.A., Clutton, P., & Klein, C. (2021). Technologically scaffolded atypical cognition: The case of YouTube’s recommender system. Synthese, 199 (1): 835-858.
4. Allcott, H., Braghieri, L., Eichmeyer, S., & Gentzkow, M. (2020). The welfare effects of social media. American Economic Review, 110 (3): 629-676. https://doi.org/10.1257/aer.20190658
5. AVAAZ (2020). Why is YouTube broadcasting climate misinformation to millions? [Report]. Avaaz. https://secure.avaaz.org/campaign/en/youtube_climate_misinformation/
6. Chen, A., Nyhan, B., Reifler, J., Robertson, R., & Wilson, C. (2022). Exposure to Alternative & Extremist Content on YouTube [Report]. Anti-Defamation League. https://www.adl.org/resources/reports/exposure-to-alternative-extremist-content-on-youtube
7. Courtois, C., & Timmermans, E. (2018). Cracking the Tinder Code: An Experience Sampling Approach to the Dynamics and Impact of Platform Governing Algorithms. Journal of Computer-Mediated Communication, 23 (1): 1-16. https://doi.org/10.1093/jcmc/zmx001
8. Faddoul, M., Chaslot, G., & Farid, H. (2020). A longitudinal analysis of YouTube’s promotion of conspiracy videos. arXiv preprint arXiv:2003.03318. https://doi.org/10.48550/arXiv.2003.03318
9. Fano, A.N. et al. (2022). Evaluation of YouTube as a Source of Information Regarding Syndactyly. Pediatrics, 149 (1): 779.
10. Green, S.J. (2019). ‘God told me he was a lizard’: Seattle man accused of killing his brother with a sword. The Seattle Times. https://www.seattletimes.com/seattle-news/crime/god-told-me-he-was-alizard-seattle-man-accused-of-killing-his-brother-with-a-sword/
11. Hosseinmardi, H. et al. (2020). Evaluating the scale, growth, and origins of right-wing echo chambers on YouTube. arXiv preprint arXiv:2011.12843. https://doi.org/10.48550/arXiv.2011.12843
12. Hussein, E., Juneja, P., & Mitra, T. (2020). Measuring Misinformation in Video Search Platforms: An Audit Study on YouTube. Proceedings of the ACM on Human-Computer Interaction, 4 (CSCW1): 1-27. https://doi.org/10.1145/3392854
13. Kaakinen, M., Oksanen, A., & Räsänen, P. (2018). Did the risk of exposure to online hate increase after the November 2015 Paris attacks? A group relations approach. Computers in Human Behavior, 78: 90-97. https://doi.org/10.1016/j.chb.2017.09.022
14. Kaiser, J., & Rauchfleisch, A. (2020). Birds of a feather get recommended together: Algorithmic homophily in YouTube’s channel recommendations in the United States and Germany. Social Media + Society, 6 (4): 2056305120969914. https://doi.org/10.1177/2056305120969914
15. Kaiser, J., & Rauchfleisch, A. (2019). The implications of venturing down the rabbit hole. Internet Policy Review, 8 (2): 1-22.
16. Kaiser, J., Rauchfleisch, A., & Cordova, Y. (2021). Comparative Approaches to Mis/Disinformation | Fighting Zika with Honey: An Analysis of YouTube’s Video Recommendations on Brazilian YouTube. International Journal of Communication, 15: 1244-1262.
17. Kassymbekova, N.M., & Shyngyssova, N.T. (2022). Rol' social'nyh setej v formirovanii obshhestvennogo mnenija [The role of social networks in public opinion shaping]. Herald of Journalism, 65 (3): 69. https://doi.org/10.26577/HJ.2022.v65.i3.07
18. Ledwich, M., & Zaitsev, A. (2019). Algorithmic extremism: Examining YouTube’s rabbit hole of radicalization. arXiv preprint arXiv:1912.11211. https://doi.org/10.48550/arXiv.1912.11211
19. Ledwich, M., Zaitsev, A., & Laukemper, A. (2022). Radical bubbles on YouTube? Revisiting algorithmic extremism with personalised recommendations. First Monday, 27 (12). https://doi.org/10.5210/fm.v27i12.12552
20. Müller, K., & Schwarz, C. (2021). Fanning the Flames of Hate: Social Media and Hate Crime. Journal of the European Economic Association, 19 (4): 2131–2167. https://doi.org/10.1093/jeea/jvaa045
21. Munger, K., & Phillips, J. (2022). Right-wing YouTube: A supply and demand perspective. The International Journal of Press/Politics, 27 (1): 186-219. https://doi.org/10.1177/1940161220964767
22. Nickles, M.A., Rustad, A.M., Ogbuefi, N., McKenney, J.E., & Stout, M. (2022). What's being recommended to patients on social media? A cross-sectional analysis of acne treatments on YouTube. Journal of the American Academy of Dermatology, 86 (4): 920-923. https://doi.org/10.1016/j.jaad.2021.03.053
23. Nienierza, A., Reinemann, C., Fawzi, N., Riesmeyer, C., & Neumann, K. (2021). Too dark to see? Explaining adolescents’ contact with online extremism and their ability to recognize it. Information, Communication & Society, 24 (9): 1229-1246. https://doi.org/10.1080/1369118X.2019.1697339
24. Papadamou, K. et al. (2020). Disturbed YouTube for kids: Characterizing and detecting inappropriate videos targeting young children. Proceedings of the International AAAI Conference on Web and Social Media, 14: 522-533. https://doi.org/10.1609/icwsm.v14i1.7320
25. Papadamou, K. et al. (2021). “How over is it?” Understanding the Incel Community on YouTube. Proceedings of the ACM on Human-Computer Interaction, 5 (CSCW2): 1-25. https://doi.org/10.1145/3479556
26. Papadamou, K. et al. (2022). “It is just a flu”: Assessing the Effect of Watch History on YouTube’s Pseudoscientific Video Recommendations. Proceedings of the International AAAI Conference on Web and Social Media, 16: 723-734. https://doi.org/10.1609/icwsm.v16i1.19329
27. Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., & Meira, W. (2020). Auditing Radicalization Pathways on YouTube. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency: 131-141. https://doi.org/10.1145/3351095.3372879
28. Röchert, D., Weitzel, M., & Ross, B. (2020). The homogeneity of right-wing populist and radical content in YouTube recommendations. International Conference on Social Media and Society: 245-254. https://doi.org/10.1145/3400806.3400835
29. Roose, K. (2019, June 8). The making of a YouTube radical. The New York Times. https://rhet104.commacafe.org/wp-content/uploads/2021/05/Making-of-a-YouTube-Radical.pdf
30. Schaub, M., & Morisi, D. (2020). Voter mobilisation in the echo chamber: Broadband internet and the rise of populism in Europe. European Journal of Political Research, 59 (4): 752-773. https://doi.org/10.1111/1475-6765.12373
31. Schmitt, J.B., Rieger, D., Rutkowski, O., & Ernst, J. (2018). Counter-messages as prevention or promotion of extremism?! The potential role of YouTube recommendation algorithms. Journal of Communication, 68 (4): 780-808. https://doi.org/10.1093/joc/jqy029
32. Spinelli, L., & Crovella, M. (2020). How YouTube leads privacy-seeking users away from reliable information. Adjunct publication of the 28th ACM conference on user modeling, adaptation and personalization: 244-251. https://doi.org/10.1145/3386392.3399566
33. Stöcker, C., & Preuss, M. (2020). Riding the wave of misclassification: how we end up with extreme YouTube content. International Conference on Human-Computer Interaction: 359-375. https://doi.org/10.1007/978-3-030-49570-1_25
34. Williams, M.L., Burnap, P., Javed, A., Liu, H., & Ozalp, S. (2020). Hate in the machine: Anti-Black and anti-Muslim social media posts as predictors of offline racially and religiously aggravated crime. The British Journal of Criminology, 60 (1): 93-117. https://doi.org/10.1093/bjc/azz049
35. Yessenbekova, U., Aldabergenova, Z., Mamankul, A., Smailova, B., & Tolegenova, S. (2022). Äleumettik medianyñ jastar belsendiligine äseri [The impact of social networks on youth activity]. Herald of Journalism, 63 (1): 53. https://doi.org/10.26577/HJ.2022.v63.i1.06

How to Cite

Yeleussizova, A. M. (2023). Harmful content recommendations on YouTube: a meta-analysis. Herald of Journalism, 67 (1). https://doi.org/10.26577/HJ.2023.v67.i1.07