AI Medical Compendium Journal: Accountability in Research


Transparency in research: An analysis of ChatGPT usage acknowledgment by authors across disciplines and geographies.

This investigation systematically reviews the recognition of generative AI tools, particularly ChatGPT, in scholarly literature. Utilizing 1,226 publications from the Dimensions database, ranging from November 2022 to July 2023, the research scrutini...

Are there accurate and legitimate ways to machine-quantify predatoriness, or an urgent need for an automated online tool?

Yamada and Teixeira da Silva voiced valid concerns about the inadequacies of an online machine learning-based tool for detecting predatory journals, and stressed the urgent need for an automated, open, online-based semi-quantitative system that measure...

Let's be fair. What about an AI editor?

Much of the current attention on artificial intelligence (AI)-based natural language processing (NLP) systems has focused on research ethics and integrity but neglects their roles in the editorial and peer-reviewing process. We argue that the academi...

Challenges for enforcing editorial policies on AI-generated papers.

ChatGPT, a chatbot released by OpenAI in November 2022, has rocked academia with its capacity to generate papers "good enough" for academic journals. Major journals such as and professional societies such as the World Association of Medical Editors ...

Letter to editor: NLP systems such as ChatGPT cannot be listed as an author because these cannot fulfill widely adopted authorship criteria.

This letter to the editor suggests adding a technical point to the new editorial policy expounded by Hosseini et al. on the mandatory disclosure of any use of natural language processing (NLP) systems, or generative AI, in writing scholarly publicati...