Trust in Generative Artificial Intelligence as a Mirror of Institutional Trust
DOI: https://doi.org/10.55959/MSU2070-1381-113-2025-22-30

Keywords: trust in artificial intelligence, generative artificial intelligence, institutional trust, sociology of technology, trust in technology, interpretative methods, sociocultural foundations of trust

Abstract
The article examines the relationship between trust in generative artificial intelligence (GenAI) as a specific type of technology and trust in institutions, as well as methods capable of uncovering the deeper causes of these interconnections. The relevance of the topic stems from the growing autonomy of technologies, which deepens their integration into social relations and complicates the distribution of responsibility among actors involved in the creation, development, and operation of technology. The aim of the study is to demonstrate that the level of trust in GenAI technologies, and the ways they are used, can serve as an indicator of institutional trust and reflect a broader social context. Methodologically, the paper relies on a theoretical and analytical approach: it reviews classical and contemporary works in the fields of institutional trust, sociology of technology, and trust in artificial intelligence. Special attention is paid to comparing classical sociological concepts with modern empirical research and to analyzing contradictions in the empirical data. The paper describes the mutual influence between institutional and technological trust: under conditions of low institutional trust, technologies often substitute for institutions, serving as their functional analogues, whereas a high level of institutional trust, conversely, strengthens trust in technologies introduced by those institutions. The study identifies methodological challenges in defining trust in GenAI and characterizes their implications. The results show that trust in GenAI cannot be reduced to technical criteria of reliability and explainability, owing to the social nature of trust and its cultural and institutional foundations. The paper concludes by emphasizing the need for qualitative interpretative methods — narrative, phenomenological, and ethnographic analysis — to uncover the mechanisms by which trust is formed and redistributed between institutions and technologies. These approaches make it possible to reveal the sociocultural foundations of trust and outline perspectives for further interdisciplinary research.
Similar Articles
- Yuriy Y. Petrunin, Svetlana S. Popova, Jianing Han, From the Pharmaceutical Industry to the AI Industry: The Regulation Transfer, Public Administration. E-journal (Russia): No. 109 (2025)
- Dina K. Tanatova, Margarita V. Vdovina, Irina V. Dolgorukova, Socio-Managerial Analysis of Family Well-Being, Public Administration. E-journal (Russia): No. 109 (2025)
- Raisa N. Shpakova, Dmitriy I. Gorodetskiy, Prospects of Using Artificial Intelligence Technologies to Solve Regional Strategic Planning Problems, Public Administration. E-journal (Russia): No. 112 (2025)
- Dina V. Krylova, Aleksander A. Maksimenko, Using Artificial Intelligence in Corruption Discernment and Counteraction: International Experience Review, Public Administration. E-journal (Russia): No. 84 (2021)
- Ilya M. Kuznechenko, Risks of Decision-Making Organization and Implementation Based on Big Data Analytics and Artificial Intelligence, Public Administration. E-journal (Russia): No. 104 (2024)
- Yuriy Yu. Petrunin, Generative Artificial Intelligence and the Issue of Consciousness, Public Administration. E-journal (Russia): No. 112 (2025)
- Zhang Jianhua, Innovation Research on Digital Transformation of Manufacturing Industry in China and Russia in the Digital Economy Era, Public Administration. E-journal (Russia): No. 110 (2025)
- Valeria Yu. Dmitrievskaya, Tatiana V. Zaitseva, Social Embeddedness of Material Motivation in Organizational Workforce, Public Administration. E-journal (Russia): No. 111 (2025)
- Oleg A. Chernov, Elena S. Palkina, Modernizing the IMO Member State Audit Scheme to Increase Efficiency of Maritime Transport, Public Administration. E-journal (Russia): No. 101 (2023)
- Elena N. Veduta, Liparit A. Gegamyan, Artificial Intelligence in Ensuring Sustainable Economic Development, Public Administration. E-journal (Russia): No. 110 (2025)