Journal of Educational Research and Practice

ORCID

0000-0001-8822-9077 0000-0002-9214-4030

Abstract

Unbeknownst to many researchers, online surveys can be vulnerable to attacks from bots and generative artificial intelligence (AI), which can generate hundreds of fraudulent responses instantly. Confronting this challenge to data integrity, we employed an adaptive approach to test the effectiveness of various anti-fraud tactics (e.g., CAPTCHA, honeypot questions, question pairs, open-ended questions) and to distinguish good-faith human respondents from bots or fraudsters. Findings revealed that bots with generative AI capabilities bypassed some conventional defenses, including CAPTCHA, and supplied convincingly human-like responses to open-ended questions. Other tactics, such as removing response incentives and verifying IP addresses, showed promise but came with trade-offs. This study highlights the increasing complexity of online survey fraud and advocates for developing new defenses and researcher training.
