Scholarly communication infrastructure has been past its breaking point for a while. Generative pre-trained transformers increasingly look like an epistemological omnicidal weapon that we've failed to contain.

Two main risks arise from the increasingly common use of GPT to (mass-)produce fake scientific publications. First, the abundance of fabricated “studies” seeping into all areas of the research infrastructure threatens to overwhelm the scholarly communication system and jeopardize the integrity of the scientific record. Second, convincingly scientific-looking content may in fact have been deceitfully created with AI tools and optimized for retrieval by publicly available academic search engines, particularly Google Scholar.

Worryingly, it is free tools like Google Scholar, and fields with obvious policy and practice relevance like the environmental and health sciences, that seem particularly vulnerable.

GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation - Harvard Kennedy School Misinformation Review

Ben Harris-Roxas @ben_hr