In ChatGPT we trust? Auditing how generative AIs understand and detect online political misinformation

Kuznetsova, Elizaveta; Makhortykh, Mykola; Baghumyan, Ani; Urman, Aleksandra (6 September 2023). In ChatGPT we trust? Auditing how generative AIs understand and detect online political misinformation (Unpublished). In: ECPR 2023. 4-8 September 2023.

Full text not available from this repository.

The growing use of AI-driven systems creates new opportunities as well as risks for cyber politics. From search engines organising political information flows (Unkel & Haim, 2019) to personalised news feeds determining individual exposure to misinformation (Kuznetsova & Makhortykh, 2023), these systems increasingly shape how human actors perceive and engage with political matters worldwide. However, besides changing human interactions with cyber politics, technological development also gives rise to new types of non-human political actors that go beyond information curation (as search algorithms do, for example) and are capable of generating and evaluating political information in a more nuanced way.

In this paper, we focus on one type of non-human actor dealing with cyber politics: generative artificial intelligence (AI). Generative AIs, such as ChatGPT or MidJourney, are distinguished by their ability to generate new content in text or image format. More advanced forms of text-oriented generative AIs (e.g. ChatGPT or ChatSonic) are not only capable of producing content in a variety of textual formats but can also serve as conversational agents that interpret and evaluate human input (e.g. to detect whether it contains false information or has a certain political leaning). Consequently, such generative AIs can transform many aspects of cyber politics, including the use of misinformation in online environments, which is viewed as a major threat to liberal democracies. By identifying misinformation and raising user awareness of it, generative AIs can curb the spread of false content and counter disinformation campaigns. However, by failing to deal with it properly, generative AIs can also facilitate the spread of misinformation online or even be used to generate and disseminate new types of false narratives.

In this study, we examine the possible implications of the rise of generative AIs for online misinformation. For this aim, we conduct an algorithmic audit of two commonly used generative AIs: ChatGPT and ChatSonic. Specifically, we use definition-oriented inquiries to examine how these AIs understand the concepts of disinformation and misinformation and to what degree they distinguish them from the related concept of digital propaganda. Then, we systematically examine the ability of generative AIs to differentiate between true and false claims relating to two case studies: the war in Ukraine and the COVID-19 pandemic.

Item Type:

Conference or Workshop Item (Paper)

Division/Institute:

03 Faculty of Business, Economics and Social Sciences > Social Sciences > Institute of Communication and Media Studies (ICMB)

UniBE Contributor:

Makhortykh, Mykola, Baghumyan, Ani, Urman, Aleksandra

Subjects:

000 Computer science, knowledge & systems
300 Social sciences, sociology & anthropology
300 Social sciences, sociology & anthropology > 320 Political science
900 History

Language:

English

Submitter:

Mykola Makhortykh

Date Deposited:

02 Oct 2023 09:43

Last Modified:

02 Oct 2023 09:43

Uncontrolled Keywords:

ChatGPT, generative AI, misinformation, propaganda, disinformation, Holocaust, war in Ukraine, Russia, denial, climate change, LGBTQ+, COVID

URI:

https://boris.unibe.ch/id/eprint/186815
