Have a great day everyone. Remember that other people’s opinions of you are just fan theories.


Budget for a broken social contract

Greg Jericho on last night’s budget, and the kicking it gives to people with disabilities:

And so now we have a government saying those with disabilities need to be kicked off the NDIS because gas companies need their profits…

Most people know someone on the NDIS, and few would think they are rorting it. The problem, of course, is that the NDIS is a Productivity Commission idea, which believes the private sector delivers better efficiency than the public sector.

It never does. It always leads to profiteering and worse service and yet when you are beholden to neo-liberalism, what need have you for reality?…

Because people know with such things what happens is the shonks rort the system and get away with it, while those with disabilities need to endure the cuts because ‘spending is out of control’.



For today’s icebreaker we’ll be sorting ourselves into budget winners and budget losers.


Hard to walk back the impact of this paper though…

Influential study touting ChatGPT in education retracted over red flags



Good morning

City street with tram tracks, glowing traffic lights, and silhouetted buildings against an orange dawn sky

I’m glad Webex exists because it’s a helpful reminder that there are worse options that I could be forced to use.


More on the Job Ready Graduates Program, a failed, deeply inequitable policy that needs immediate reform:

“The [2025] modelling… shows the number of students with debts over $50,000 has increased by 70%, and humanities students are set to pay off their debts into their 40s.”

One in four humanities students in Australia to take more than 25 years to pay off student loans, Treasury finds - The Guardian


When considering what’s the purpose of Anzac Day, we should pay as much attention to what people do as to what they say:

“…a number of crowd members booed loudly and repeatedly during an Acknowledgment of Country by Uncle Ray Minniecon…

“We have experienced this type of racism for over 200 years,” he told media after the service. “One of the questions that we have in our minds is: What crime did we commit to attract this kind of racism?””

Loud boos mar Anzac Day dawn services in Martin Place and Melbourne - SMH


I thought you’re not supposed to go to war with your own client states (he says nervously)

Could Trump withdraw US support for UK sovereignty of Falklands? - The Guardian


“…we have a choice about who we want to be as a society. For me, that choice is clear. I want to live in a country where people seeking safety are not treated as risks to be managed, but as people to be protected. Because safety should not come with conditions. And belonging should not depend on how we are described.”

Angus Taylor’s comments remind refugees like me that our belonging is conditional - Thouraya Lahmadi


One day, all this will be yours gestures at box

An open red Viennetta Silver Server box sits on a wooden sideboard beside a glass jug and a decorative teacup. Inside the box is a silver-coloured serving spoon designed for serving Viennetta, a layered frozen dessert. It was widely marketed in the ‘80s and ‘90s as a slightly fancy dessert for family dinners and special occasions.


Turmoil has engulfed the [update]. The taxation of trade routes to [insert] is in dispute. Hoping to resolve the matter with a blockade of deadly battleships, the greedy [update] has stopped all shipping to [insert name].

Trump has said the US will begin a blockade of the strait of Hormuz


Good morning

A panoramic view of a cityscape features a skyline with tall buildings, a bay with a bridge, and lush greenery in the foreground.

World held hostage by reliance on fossil fuels, Christiana Figueres warns – and climate health impacts are ‘mother of all injustices’ - The Guardian

In March, research published in the international science journal Nature found that ocean levels had been underestimated due to inaccurate modelling. In some areas of the global south, including south-east Asia and the Indo-Pacific, they may be 100cm to 150cm higher than previously thought.


Instead of disrupting art, illustration, graphic design, education, journalism, copywriting, marketing, translation, software development, legal services, accounting, voice acting, and music production, why don’t overconfident technology midwits disrupt PRINTERS

🖨️🏏


What will a researcher be? Or: How I Learned to Stop Worrying and Love the Epistemological Bomb

This post first appeared on the Harris-Roxas Health blog.

Last week a paper published in Nature quietly marked the beginning of the end of research as we knew it. As with most genuinely consequential things it was only noticed by a few people while the rest of us were worried about what can euphemistically be described as world events.

The paper describes a system called The AI Scientist. It can generate research ideas, write and execute code, run experiments, analyse results, produce a complete scientific manuscript, and conduct its own peer review. One of its submissions got through peer review for a top-tier machine learning conference.

Everything from conception to paper, no humans in the loop.

What the system actually does

The AI Scientist operates in four phases:

  1. It generates research hypotheses, filters them against the existing literature for novelty, and selects promising directions.
  2. It devises and executes experimental plans, debugging its own code when things break.
  3. It produces a complete manuscript in the style of a scientific conference paper, including a literature review, methods, results, and conclusion.
  4. An automated reviewer, which is itself an AI system trained against established peer-review guidelines, provides scores and an accept/reject recommendation.
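
The four phases above amount to a closed loop: ideate, experiment, write, review. As a minimal sketch of that control flow only, here is a toy version in Python. Every function name and body here is hypothetical, invented for illustration; none of it comes from the paper or reflects the actual system's implementation.

```python
# Hypothetical sketch of the four-phase loop described above.
# All names and logic are illustrative placeholders, not the paper's code.

def is_novel(idea):
    # Placeholder novelty filter; the real system checks against the literature.
    return True

def generate_hypotheses(seed_topic):
    # Phase 1: propose candidate ideas, keep only the novel ones.
    ideas = [f"{seed_topic}: idea {i}" for i in range(3)]
    return [idea for idea in ideas if is_novel(idea)]

def run_experiments(hypothesis):
    # Phase 2: plan and execute experiments (the real system also debugs its own code).
    return {"hypothesis": hypothesis, "result": "simulated outcome"}

def write_manuscript(results):
    # Phase 3: assemble a conference-style manuscript from the results.
    return f"Paper on {results['hypothesis']}; findings: {results['result']}"

def review(paper):
    # Phase 4: an automated reviewer scores the draft and recommends accept/reject.
    score = 6 if "findings" in paper else 2
    return {"score": score, "accept": score >= 5}

def ai_scientist(seed_topic):
    # End-to-end loop: no human touches any intermediate artefact.
    papers = []
    for hypothesis in generate_hypotheses(seed_topic):
        results = run_experiments(hypothesis)
        paper = write_manuscript(results)
        papers.append((paper, review(paper)))
    return papers

accepted = [p for p, r in ai_scientist("regularisation") if r["accept"]]
```

The point of the sketch is structural: each phase consumes the previous phase's output, so once the loop closes, the system can iterate without a human in it.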

The system was tested in two modes. The template-based version was given starting code and a research area and asked to extend it. The template-free version started from scratch, generating research proposals more like the abstract of a paper and then designing experiments to test them. The template-free version was adapted to submit papers to a real workshop (the conference organisers were aware of this). Three papers were submitted alongside 40 human-authored ones. Reviewers were told some submissions were AI-generated but not which ones.

The upshot: one paper by The AI Scientist was accepted. The quality of the papers it produces is also likely to improve as the underlying models improve, suggesting this is the beginning of a change we’ll see more of.

“But it’s only machine learning”

The predictable response to this paper is to draw a reassuring boundary. This works for computational experiments only. It can't do fieldwork, it can't interview people, it doesn't understand context, it hallucinates citations. All of that is true (for the moment).

But this confuses present limitations with future constraints, and it mistakes the type of research it’s been tested on for the capabilities of the system. The same research process pipeline (generate ideas, design studies, execute, analyse, write) applies to a very large proportion of what researchers in health, social sciences, economics, and public administration actually do. If (when) AI systems can reliably interact with data collected by other means, the scope for this expands dramatically. A lot of cohort data sits waiting to be analysed. Clinical trial data is structured and standardised. Administrative data is vast and mostly machine-readable.

The question is not whether this approach will extend beyond machine learning. It is how fast, and whether we’re ready.

A different kind of science

There are some important issues to contend with here that go beyond the numbers who will be affected as workers, though I’ll get to that.

Research, as most of us humans do it, is inefficient in ways that are generative. The tangents that become breakthroughs, or more often just promising lines of work. The way of thinking about a problem that allows us to move forward, to become unstuck. The discussions with people in other fields that reframe what’s possible. The years spent working with a community that produce insights static datasets can’t get at. These aren’t bugs in the process of research, they’re features - but we rarely fund them or recognise them as such.

The AI Scientist produces research that is, by design, driven by plausibility, internal consistency, and novelty relative to the literature. It optimises for what peer review rewards. That isn’t nothing, but it is a different way of knowing and understanding. Epistemology, if you’ll forgive a pretentious term. A system that navigates the existing map of human knowledge efficiently may be exactly the wrong tool for making new maps. Research isn’t just more of what we already know how to measure.

This matters because as these systems proliferate we risk not just changing who does research activities, but changing what research is for. If the literature itself becomes increasingly populated by outputs optimised for acceptance rather than insight (an issue that predates LLMs), the feedback loops spiral in a direction that won’t be good. I raise this not to cause alarm, but because the research community needs to have this conversation honestly, and soon, rather than continuing to be bemused bystanders.

What the history of automation tells us

Before we get to what research institutions should do, it is worth thinking about what kind of threat this actually is. The answer is more nuanced, and probably more troubling, than that we will simply be replaced.

Economists who have studied previous waves of automation and computerisation have found that automation rarely eliminates occupations outright. Among 271 detailed occupations tracked across sixty years of US labour market data, only elevator operators can be said to have disappeared primarily because of automation. Everywhere else the pattern has been partial - automation changed the task mix within jobs, shifted work between them, and required workers to learn new skills. Wholesale elimination has been relatively rare. Researchers are probably not modern elevator operators. The job probably won’t vanish, but that’s not the real story.

When looking at computerisation between the 70s and 2015 or so, the occupations that used computers grew faster, not slower - even routine and middle-income jobs. What automation produced in those settings wasn’t job destruction so much as a massive reallocation of labour. New skills were required, and the old bundling of tasks into roles dissolved. Employment shifted dramatically and a lot of people were left behind in that transition, at considerable personal cost and heartache. Those transitions didn’t happen smoothly or fairly, and there is no reason to assume the research sector will be different. New skills were costly to learn, and computerisation was associated with substantially greater within-occupation wage inequality.

This is the pattern most likely to play out in research over the next decade. Not the elimination of researchers as a class, but a sharp and sustained widening of the gap between those who can work effectively with AI-augmented systems and those who cannot. Between the senior researcher with the conceptual authority and domain expertise to direct these tools, and the junior researcher whose less valued tasks (literature reviews, data processing, coding, drafting, running standard analyses) are precisely the structured, iterative, well-described work that systems like The AI Scientist are being built to do faster and cheaper.

The pathway into research expertise has always run through doing that kind of foundational work. The friction of repetitious work is part of how the craft is learned. If that kind of work is automated away before we’ve thought about what replaces it, we will not just have a more unequal research workforce. The people who never learned the craft by doing it will be the ones expected to lead it a generation from now.

The change management failures we’re about to repeat

Here’s what I find most concerning, professionally and practically. Universities, research institutions, and governments are going to face decisions about research workforce capacity that these tools will accelerate. Grant reviewers, research assistants, ECR investigators, systematic reviewers, data analysts - all roles where researchers build their skills and where enormous amounts of important work gets done. The economics of getting AI to fill these roles are going to be hard to resist (and it will take active resistance). The institutional pressures will be significant.

What I do not see anywhere is any serious organisational preparation for this. No foresight work or horizon scanning. No change management process. No workforce transition planning. No honest conversations with research staff about what their roles will look like in five or ten years. No rethinking of how we develop the next generation of researchers in a world where the scaffolding tasks that used to build expertise will be automated.

We are at the point where organisations should be asking hard questions about what human researchers will do when AI does the legible parts of research better and faster. Instead, most institutions are watching, waiting, and quietly hoping someone else decides first. Strategies are being released that don’t even name this as an issue.

That isn’t a strategy. It’s a plan to be surprised.

What needs to happen

The AI Scientist is not the end of research. But it’s the end of the idea that this is someone else's problem. Every university, research institute, government agency, health service, and higher education union that employs, funds or represents researchers needs to treat AI-augmented research as an urgent workforce development and change management issue, not a technology curiosity.

That means investing in understanding specifically where AI can and will augment researcher capacity and where it may act as a substitute for it - and that this shouldn’t be based on AI industry spin. It means building genuine AI capacity in research teams, and not just shifting the responsibility for this onto individuals as is happening now. It also means having frank conversations about what roles are changing and creating pathways for people rather than career-ending cliffs. It means doing this now, while there is still time to do it thoughtfully, rather than reactively in five years when the economics force a decision and there is no plan in place. I worry whether our universities have the capacity to do this.

Research is a human endeavour and researchers are probably not going to suffer the fate of elevator operators. That doesn’t mean the field is immune to the forces reshaping every other knowledge economy profession. But it does mean we have a responsibility to navigate those forces with the kind of rigour and attention we bring to the questions we research. We haven’t been doing that, and it’s time we started.

Sources

Bessen, J. E. (2016). How Computer Automation Affects Occupations: Technology, Jobs, and Skills (SSRN Scholarly Paper No. 2690435). Social Science Research Network. https://doi.org/10.2139/ssrn.2690435

Lu, C., Lu, C., Lange, R. T., Yamada, Y., Hu, S., Foerster, J., Ha, D., & Clune, J. (2026). Towards end-to-end automation of AI research. Nature, 651(8107), 914–919. https://doi.org/10.1038/s41586-026-10265-5


Absolutely no thanks to Morrison’s government for sleepwalking Australia into AUKUS. Will probably be remembered as amongst the worst and most consequential decisions of the last decade.