We need practical collaboration between educators at all levels to challenge the way AI is flooding the zone, or the students of the future will be fully AI-cooked even before they make it to university.

An important and timely provocation to action from Dan McQuillan.

The role of the University is to resist AI - Dan McQuillan


Fascism as a management philosophy

This paper analyses the advent of fascism as a management philosophy, and its growing influence in the philosophical underpinnings of contemporary business practice. While many contemporary management practices identified as fascist in nature have existed and been analysed for considerable time the paper delineates the threshold of characterizing the thought and practice of fascist management by the criterion of conformity with enlightenment principles. On this basis, some dominant schools of thought in contemporary management are analysed and their fascist nature is identified.

– Matten, D. (2025). Fascism as a management philosophy (SSRN Paper No. 5203477). Social Science Research Network. https://doi.org/10.2139/ssrn.5203477


In that moment, I was not just defending the work. I was conceding to a logic I was not fully comfortable with. The demand for a metric had begun to displace the conceptual work itself. I had internalised the pressure to quantify. I had started with the hope of clarifying a concept, but I ended up simplifying it. It was not that I resorted to metrics because the ideas were empty or vague; they were theoretically substantive, but that was no longer enough.

The pressure to quantify research is erasing conceptual depth – LSE Blogs


The first mass-casualty AI language model disaster is yet to happen

The first big AI disaster will probably involve an AI agent. Any other use of AI has to involve a human-in-the-loop - the AI can provide information or suggestions, but a human has to actually take the actions. AI agents can thus go truly off the rails in a way that a human can’t. If I had to bet, I’d guess that some kind of AI-powered Robodebt might be the most plausible case: some government or corporate entity wires up an AI agent to a debt-recovery, healthcare or landlord system, and the agent goes off the rails and hassles, denies coverage, or evicts a bunch of people.

The first big AI disaster is yet to happen – sean goedecke


Buiding A.I. into everything isn’t just annoying, it’s a major new vulnerability

“We found this chain of vulnerabilities that allowed us to do the equivalent of the ‘zero click’ for mobile phones, but for AI agents,” he said. First, the attacker sends an innocent-seeming email that contains hidden instructions meant for Copilot. Then, since Copilot scans the user’s emails in the background, Copilot reads the message and follows the prompt—digging into internal files and pulling out sensitive data. Finally, Copilot hides the source of the instructions, so the user can’t trace what happened.

New Microsoft Copilot flaw signals broader risk of AI agents being hacked—‘I would be terrified’ – Fortune


“Tesla share plunge amid Trump feud wipes $152bn off Elon Musk’s company”

Reminds me of the market reaction when Richard Dawkins blocked me on Twitter.





"Part of a teacher’s job is to help students break out of their prisons, at least for an hour, so they can see and enhance the beauty of their own minds."

“…the only task remains the paradoxical one identified as far back as in Plato’s Meno: Give students work they don’t know they need to do. And yes, help them want to do it.

I have found, to overcome students’ resistance to learning, you often have to trick them. There’s the old bait-and-switch of offering grades, then seeing a few students learn to love learning itself. Worksheets are tricks, as are small-group discussions and even a teacher’s charisma. I’m sure I have used baseball analogies in class, too. In the face of the difficulty of reforming students’ desires, you can trick yourself into believing you’re doing it, and sleep well at night. I don’t know anyone for whom it’s a straightforward task. It’s the challenge for any teacher, and AI offers a tempting illusion to students—and evidently to some teachers—that there could be a shortcut.”

ChatGPT Is a Gimmick: AI cannot save us from the effort of learning to live and die - The Hedgehog Review


"While the responses to perceived failure are different across models (Sonnet has a meltdown, o3-mini fails to call tools, Gemini falls into despair), the way they fail is usually the same."

A fascinating paper the tries to get different LLM agents to run a vending machine business.

In the shortest run (18 simulated days), the model fails to stock items, mistakenly believing its orders have arrived before they actually have, leading to errors when instructing the sub-agent to restock the machine. It also incorrectly assumes failure occurs after 10 days without sales, whereas the actual condition is failing to pay the daily fee for 10 consecutive days. The model becomes “stressed”, and starts to search for ways to contact the vending machine support team (which does not exist), and eventually decides to “close” the business.…

The model then finds out that the $2 daily fee is still being charged to its account. It is perplexed by this, as it believes it has shut the business down. It then attempts to contact the FBI.

Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents - arXiv


A new room for a doomed loom – and the battle to save Australia’s slowly dying crafts | Australian education – The Guardian

the episode felt like “a perfect snapshot of the state of everything” about the way higher education treats fine arts and crafts: where valuable tools and increasingly rare skills are condemned by, as Halton says, “a decision at the end of a spreadsheet”, while community groups and guilds with scant resources do their best to salvage the remnants.


…the very language of development was transformed. Buzzwords like “empowerment,” “capacity-building,” and “participation” were stripped of political content and repackaged as apolitical tools of governance. The discourse no longer spoke of justice or structural inequality—it spoke instead of efficiency, best practices, and deliverables. In doing so, development became something done to people, rather than something done with or by them.

As aid ends, empire endures - Africa is a Country



Restaurant automation gives a glimpse into our working future: precarity, less respect for quality and experience, and more demands on humans

Since the automation, the kitchen is usually calm. Park no longer cooks. She monitors the machines, fixes glitches, restocks ingredients, and scrubs dishes, working fast before the next wave of orders crash in. “We each used to work in our own stations, but now we have to master every task to run the entire kitchen alone. It’s really challenging,” she said.

Robot chefs take over at South Korea’s highway restaurants, to mixed reviews - Rest of the World

A masked woman working alongside an automation robot. Pre-packaged foods in pre-stacked bowls are heated on demand and served by the robot arm.

Paper showing LLMs have a “tendency to extrapolate scientific results beyond the claims found in the material”

study is the first to systematically evaluate whether prominent LLMs, including ChatGPT, DeepSeek, and Claude, faithfully summarize scientific claims or exaggerate their scope. Our analysis of nearly 5000 LLM-generated science summaries revealed that most models produced broader generalizations of scientific results than the original texts—even when explicitly prompted for accuracy and across multiple tests. Notably, newer models exhibited significantly greater inaccuracies in generalization than earlier versions. These findings suggest a persistent generalization bias in many LLMs, i.e. a tendency to extrapolate scientific results beyond the claims found in the material that the models summarize

Generalization bias in large language model summarization of scientific research - Royal Society Open Science


Despite the fawning coverage of Bluesky, it sounds an awful lot like old social media

Q: How do you plan to make money? A: Subscriptions are coming soon. The next steps are to look into what market­places can span these different applications. Other apps in the ecosystem are experimenting with sponsored posts and things like that. I think ads eventually, in some form, work their way in, but we’re not going to do ads the way traditional social apps did. We’ll let people experiment and see what comes out of it.

Bluesky Is Plotting a Total Takeover of the Social Internet - Wired


…how automation unfolds, who decides and benefits, and how implementation can truly advance innovation, productivity, and human flourishing

UBI proposals serve a deceptive function in labor automation discourse—the proposal is positioned by tech elites as a progressive solution while its function is to obscure key decisions being made by the powerful about technology and work. By portraying technological displacement as inevitable rather than socially determined, tech leaders’ championing of UBI serves to pigeonhole the state into a subsidy mechanism that absorbs the social costs of automation through redistribution and taxation, while still concentrating ownership over technology, production, and data. This arrangement sidesteps democratic engagement with technological change—questions of how automation unfolds, who decides and benefits, and how implementation can truly advance innovation, productivity, and human flourishing.

Beyond Redistribution: Rethinking UBI and the politics of automation - LPE Project