
ORCID
Wilson: 0000-0002-1471-654X; Burleigh: 0000-0003-2393-5477
Abstract
Protecting research integrity is crucial for maintaining trust in the scholarly record. Historically, threats to research integrity stemmed from deliberate human actions, such as data manipulation and misrepresentation. Generative artificial intelligence (GAI) is increasingly prevalent in higher education and can generate research data and scholarly papers almost instantly. This near-instant production of research data challenges traditional standards of academic rigor and integrity. The purpose of this study was to explore the role and impact of GAI on research integrity and the scholarly record, emphasizing the need for robust safeguards. For this study, we submitted Likert-type survey questions to a GAI tool, specifically ChatGPT, and investigated how it responded to the questions and generated quantitative and mixed-methods data that could be presented as responses from human study participants. As GAI evolves, the academic community must address its potential misuse, particularly in research institutions where the pressure to “publish or perish” is pervasive. In the age of GAI, ensuring the accuracy and honesty of the scholarly record is imperative for the credibility of innovative and impactful research. Findings indicate that data integrity in higher education research may be at risk if institutions do not establish clear, enforceable guidelines and policies to mitigate the potential misuse of GAI.
Included in
Higher Education Administration Commons, Higher Education and Teaching Commons, Scholarship of Teaching and Learning Commons