What will a researcher be? Or: How I Learned to Stop Worrying and Love the Epistemological Bomb
This post first appeared on the Harris-Roxas Health blog.
Last week a paper published in Nature quietly marked the beginning of the end of research as we knew it. As with most genuinely consequential things, it was only noticed by a few people while the rest of us were worried about what can euphemistically be described as world events.
The paper describes a system called The AI Scientist. It can generate research ideas, write and execute code, run experiments, analyse results, produce a complete scientific manuscript, and conduct its own peer review. One of its submissions got through peer review for a top-tier machine learning conference.
Everything from conception to paper, no humans in the loop.
What the system actually does
The AI Scientist operates in four phases:
- It generates research hypotheses, filters them against the existing literature for novelty, and selects promising directions.
- It devises and executes experimental plans, debugging its own code when things break.
- It produces a complete manuscript in the style of a scientific conference paper, including a literature review, methods, results, and conclusion.
- An automated reviewer, which is itself an AI system trained against established peer-review guidelines, provides scores and an accept/reject recommendation.
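The four phases above can be read as a pipeline: ideate, experiment, write, review. A minimal sketch of that loop follows. Everything in it is hypothetical: the names, thresholds and scoring rule are illustrative stand-ins, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    title: str
    novelty: float  # 0..1, a stand-in for a score against the existing literature

@dataclass
class Paper:
    idea: Idea
    sections: dict
    review_score: float = 0.0

def generate_ideas(seed_topics):
    """Phase 1: propose hypotheses and filter them for novelty."""
    ideas = [Idea(title=f"Study of {t}", novelty=0.2 + 0.15 * i)
             for i, t in enumerate(seed_topics)]
    return [i for i in ideas if i.novelty >= 0.3]  # toy novelty filter

def run_experiments(idea, max_retries=3):
    """Phase 2: execute an experimental plan, retrying on failure
    (standing in for the system's self-debugging loop)."""
    for _ in range(max_retries):
        try:
            return {"metric": round(idea.novelty * 10, 2)}  # placeholder result
        except RuntimeError:
            continue  # a real system would inspect the error and patch its code
    return None

def write_manuscript(idea, results):
    """Phase 3: assemble a paper-shaped artefact with the usual sections."""
    return Paper(idea=idea, sections={
        "literature_review": f"Prior work related to {idea.title}",
        "methods": "Experimental plan and code",
        "results": results,
        "conclusion": "Interpretation of results",
    })

def automated_review(paper, accept_threshold=6.0):
    """Phase 4: score against review guidelines; accept or reject."""
    paper.review_score = 4.0 + paper.idea.novelty * 5  # toy scoring rule
    return paper.review_score >= accept_threshold

def pipeline(seed_topics):
    accepted = []
    for idea in generate_ideas(seed_topics):
        results = run_experiments(idea)
        if results is None:
            continue
        paper = write_manuscript(idea, results)
        if automated_review(paper):
            accepted.append(paper)
    return accepted
```

The point of the sketch is structural: each phase consumes the previous phase's output, and rejection can happen at every stage, which is what makes the whole loop runnable without a human in it.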
The system was tested in two modes. The template-based version was given starting code and a research area and asked to extend it. The template-free version started from scratch, generating research proposals more like the abstract of a paper and then designing experiments to test them. The template-free version was adapted to submit papers to a real workshop (the conference organisers were aware of this). Three papers were submitted alongside 40 human-authored ones. Reviewers were told some submissions were AI-generated but not which ones.
The upshot: one paper by The AI Scientist was accepted. The quality of the papers it produces will also improve as the underlying models improve, suggesting this is the beginning of a change we’ll see much more of.
“But it’s only machine learning”
The predictable response to this paper is to draw a reassuring boundary. This works for computational experiments only. It can’t do fieldwork, it can’t interview people, it doesn’t understand context, it hallucinates citations. All of that is true (for the moment).
But this confuses present limitations with future constraints, and it mistakes the type of research it’s been tested on for the capabilities of the system. The same research process pipeline (generate ideas, design studies, execute, analyse, write) applies to a very large proportion of what researchers in health, social sciences, economics, and public administration actually do. If (when) AI systems can reliably interact with data collected by other means, the scope for this expands dramatically. A lot of cohort data sits waiting to be analysed. Clinical trial data is structured and standardised. Administrative data is vast and mostly machine-readable.
The question is not whether this approach will extend beyond machine learning. It is how fast, and whether we’re ready.
A different kind of science
There are some important issues to contend with here that go beyond the numbers who will be affected as workers, though I’ll get to that.
Research, as most of us humans do it, is inefficient in ways that are generative. The tangents that become breakthroughs, or more often just promising lines of work. The way of thinking about a problem that allows us to move forward, to become unstuck. The discussions with people in other fields that reframe what’s possible. The years spent working with a community that produce insights static datasets can’t get at. These aren’t bugs in the process of research, they’re features - but we rarely fund or recognise them as such.
The AI Scientist produces research that is, by design, driven by plausibility, internal consistency, and novelty relative to the literature. It optimises for what peer review rewards. That isn’t nothing, but it is a different way of knowing and understanding. Epistemology, if you’ll forgive a pretentious term. A system that navigates the existing map of human knowledge efficiently may be exactly the wrong tool for making new maps. Research isn’t just more of what we already know how to measure.
This matters because as these systems proliferate we risk not just changing who does research activities, but changing what research is for. If the literature itself becomes increasingly populated by outputs optimised for acceptance rather than insight (an issue that predates LLMs), the feedback loops spiral in a direction that won’t be good. I raise this not to cause alarm, but because the research community needs to have this conversation honestly, and soon, rather than continuing to be bemused bystanders.
What the history of automation tells us
Before we get to what research institutions should do, it is worth thinking about what kind of threat this actually is. The answer is more nuanced, and probably more troubling, than that we will simply be replaced.
Economists who have studied previous waves of automation and computerisation have found that automation rarely eliminates occupations outright. Among 271 detailed occupations tracked across sixty years of US labour market data, only elevator operators can be said to have disappeared primarily because of automation. Everywhere else the pattern has been partial - automation changed the task mix within jobs, shifted work between them, and required workers to learn new skills. Wholesale elimination has been relatively rare. Researchers are probably not modern elevator operators. The job probably won’t vanish, but that’s not the real story.
When looking at computerisation between the 1970s and 2015 or so, the occupations that used computers grew faster, not slower - even routine and middle-income jobs. What automation produced in those settings wasn’t job destruction so much as a massive reallocation of labour. New skills were required, and the old bundling of tasks into roles dissolved. Employment shifted dramatically and a lot of people were left behind in that transition, at considerable personal cost and heartache. Those transitions didn’t happen smoothly or fairly, and there is no reason to assume the research sector will be different. New skills were costly to learn, and computerisation was associated with substantially greater within-occupation wage inequality.
This is the pattern most likely to play out in research over the next decade. Not the elimination of researchers as a class, but a sharp and sustained widening of the gap between those who can work effectively with AI-augmented systems and those who cannot. Between the senior researcher with the conceptual authority and domain expertise to direct these tools, and the junior researcher whose less valued tasks (literature reviews, data processing, coding, drafting, running standard analyses) are precisely the structured, iterative, well-described work that systems like The AI Scientist are being built to do faster and cheaper.
The pathway into research expertise has always run through doing that kind of foundational work. The friction of repetitious work is how the craft gets learned. If that kind of work is automated away before we’ve thought about what replaces it, we will not just have a more unequal research workforce. The people who never learned the craft by doing it will be the ones expected to lead it a generation from now.
The change management failures we’re about to repeat
Here’s what I find most concerning, professionally and practically. Universities, research institutions, and governments are going to face decisions about research workforce capacity that these tools will accelerate. Grant reviewers, research assistants, early-career investigators, systematic reviewers, data analysts - all roles where researchers build their skills and where enormous amounts of important work gets done. The economics of getting AI to fill these roles are going to be hard to resist (and it will take active resistance). The institutional pressures will be significant.
What I do not see anywhere is any serious organisational preparation for this. No foresight work or horizon scanning. No change management process. No workforce transition planning. No honest conversations with research staff about what their roles will look like in five or ten years. No rethinking of how we develop the next generation of researchers in a world where the scaffolding tasks that used to build expertise will be automated.
We are at the point where organisations should be asking hard questions about what human researchers will do when AI does the legible parts of research better and faster. Instead, most institutions are watching, waiting, and quietly hoping someone else decides first. Strategies are being released that don’t even name this as an issue.
That isn’t a strategy. It’s a plan to be surprised.
What needs to happen
The AI Scientist is not the end of research. But it’s the end of the idea that this is someone else's problem. Every university, research institute, government agency, health service, and higher education union that employs, funds or represents researchers needs to treat AI-augmented research as an urgent workforce development and change management issue, not a technology curiosity.
That means investing in understanding specifically where AI can and will augment researcher capacity and where it may act as a substitute for it - an understanding that shouldn’t be based on AI industry spin. It means building genuine AI capability in research teams, not just shifting the responsibility onto individuals as is happening now. It also means having frank conversations about which roles are changing and creating pathways for people rather than career-ending cliffs. It means doing this now, while there is still time to do it thoughtfully, rather than reactively in five years when the economics force a decision and there is no plan in place. I worry whether our universities have the capacity to do this.
Research is a human endeavour and researchers are probably not going to suffer the fate of elevator operators. That doesn’t mean research is immune to the forces reshaping every other knowledge economy profession. But it does mean we have a responsibility to navigate those forces with the kind of rigour and attention we bring to the questions we research. We haven’t been doing that, and it’s time we started.
Sources
Bessen, J. E. (2016). How Computer Automation Affects Occupations: Technology, Jobs, and Skills (SSRN Scholarly Paper No. 2690435). Social Science Research Network. https://doi.org/10.2139/ssrn.2690435
Lu, C., Lu, C., Lange, R. T., Yamada, Y., Hu, S., Foerster, J., Ha, D., & Clune, J. (2026). Towards end-to-end automation of AI research. Nature, 651(8107), 914–919. https://doi.org/10.1038/s41586-026-10265-5