Al-Moghrabi, Dalya; Arqub, Sarab Abu; Maroulakos, Michael P; Pandis, Nikolaos; Fleming, Padhraig S (2024). Can ChatGPT identify predatory biomedical and dental journals? A cross-sectional content analysis. Journal of Dentistry, 142, 104840. Elsevier. doi:10.1016/j.jdent.2024.104840
OBJECTIVES
To assess whether ChatGPT can help identify predatory biomedical and dental journals, to analyze the content of its responses, and to compare the frequency of positive and negative indicators provided by ChatGPT for predatory and legitimate journals.
METHODS
Four hundred predatory and legitimate biomedical and dental journals were selected from four sources: Beall's list, unsolicited emails, the Web of Science (WOS) journal list and the Directory of Open Access Journals (DOAJ). ChatGPT was asked to determine journal legitimacy, and each journal was classified as legitimate or predatory. Pearson's Chi-squared test and logistic regression were conducted. Two machine learning algorithms were used to determine the criteria most influential on the correct classification of journals.
RESULTS
The data were categorized under 10 criteria, with the most frequently coded criterion being the transparency of processes and policies. ChatGPT correctly classified predatory and legitimate journals in 92.5% and 71% of the sample, respectively. The overall accuracy of ChatGPT responses was 0.82, with a high sensitivity (0.93) and a specificity of 0.71. A highly significant association between ChatGPT verdicts and the classification based on known sources was observed (P <0.001). ChatGPT was 30.2 times more likely to correctly classify a predatory journal (95% confidence interval: 16.9-57.43, p-value: <0.001).
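The reported figures are internally consistent with a standard confusion-matrix calculation. The sketch below recomputes accuracy, sensitivity, specificity and the cross-product odds ratio from the stated percentages, assuming an even 200/200 split between predatory and legitimate journals (the record states 400 journals in total; the exact split is an assumption made here for illustration).

```python
# Recompute the reported classification metrics from the stated
# percentages, assuming (not stated in the record) a 200/200 split.
n_predatory, n_legitimate = 200, 200

tp = round(0.925 * n_predatory)   # predatory journals correctly flagged (92.5%)
tn = round(0.71 * n_legitimate)   # legitimate journals correctly cleared (71%)
fn = n_predatory - tp             # predatory journals missed
fp = n_legitimate - tn            # legitimate journals wrongly flagged

accuracy = (tp + tn) / (n_predatory + n_legitimate)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
odds_ratio = (tp * tn) / (fn * fp)  # cross-product (diagnostic) odds ratio

print("accuracy:", round(accuracy, 4))        # 0.8175, reported as 0.82
print("sensitivity:", round(sensitivity, 3))  # 0.925, reported as 0.93
print("specificity:", round(specificity, 2))  # 0.71
print("odds ratio:", round(odds_ratio, 1))    # matches the reported 30.2
```

Under this assumed split, the odds ratio (185 × 142) / (15 × 58) ≈ 30.2 reproduces the reported value exactly, which supports the even-split assumption.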
CONCLUSIONS
ChatGPT distinguished predatory from legitimate journals with a high level of accuracy. While some false positives (29%) and false negatives (7.5%) were observed, it may be reasonable to harness ChatGPT to assist with the identification of predatory journals.
CLINICAL SIGNIFICANCE STATEMENT
ChatGPT may effectively distinguish between predatory and legitimate journals, correctly classifying 92.5% of predatory and 71% of legitimate journals. The potential utility of large language models in exposing predatory publications is worthy of further consideration.
Item Type: Journal Article (Original Article)
Division/Institute: 04 Faculty of Medicine > School of Dental Medicine > Department of Orthodontics
UniBE Contributor: Pandis, Nikolaos
Subjects: 600 Technology > 610 Medicine & health
ISSN: 1879-176X
Publisher: Elsevier
Language: English
Submitter: Pubmed Import
Date Deposited: 15 Jan 2024 15:35
Last Modified: 08 Mar 2024 00:15
Publisher DOI: 10.1016/j.jdent.2024.104840
PubMed ID: 38219888
Uncontrolled Keywords: Editorial policies; ethics in publication; medical ethics; open access publishing; scientific publishing; transparency
BORIS DOI: 10.48350/191623
URI: https://boris.unibe.ch/id/eprint/191623