Steganography for all?
I lack technical expertise to understand the detail but this paper created a new form of steganography using GPT-4. Previous methods of steganography using LLMs required specific white box models – but this is a black box proces.
The researchers uses specific prompts to guide the generative AI and encryption to keep the messages (seemingly) secure. When someone receives the text, they can use the same system to extract the hidden message. So steganographic capabilities can be achieved using only public interfaces, if I understand correctly.
Given the AI-in-everything world we’re in, I wonder how this will be locked down.
Generative Text Steganography with Large Language Model – arXiv
This required a bit of initial self-discipline. There were no push notifications or buzzing smartwatches, so I had to make myself check the planner multiple times per day. Everything was manual. Adding events took the kind of thoughtful precision I had completely forgot about. It was more work, but I found it helped me focus. Writing tasks by hand made them feel real, and I seemed to remember them better, as well.
Finding Peter Putnam - Nautilus
A fascinating story about an amazing, forgotten person.
We need practical collaboration between educators at all levels to challenge the way AI is flooding the zone, or the students of the future will be fully AI-cooked even before they make it to university.
An important and timely provocation to action from Dan McQuillan.
The Ghosts in the Machine - Harpers
Spotify’s lo-fi slop to mushify your brain
Ice on the car this morning. According to the IPCC I’m supposed to be living in one of the later Mad Max films, not Fargo.

Fascism as a management philosophy
This paper analyses the advent of fascism as a management philosophy, and its growing influence in the philosophical underpinnings of contemporary business practice. While many contemporary management practices identified as fascist in nature have existed and been analysed for considerable time the paper delineates the threshold of characterizing the thought and practice of fascist management by the criterion of conformity with enlightenment principles. On this basis, some dominant schools of thought in contemporary management are analysed and their fascist nature is identified.
– Matten, D. (2025). Fascism as a management philosophy (SSRN Paper No. 5203477). Social Science Research Network. https://doi.org/10.2139/ssrn.5203477
In that moment, I was not just defending the work. I was conceding to a logic I was not fully comfortable with. The demand for a metric had begun to displace the conceptual work itself. I had internalised the pressure to quantify. I had started with the hope of clarifying a concept, but I ended up simplifying it. It was not that I resorted to metrics because the ideas were empty or vague; they were theoretically substantive, but that was no longer enough.
The pressure to quantify research is erasing conceptual depth – LSE Blogs
The first mass-casualty AI language model disaster is yet to happen
The first big AI disaster will probably involve an AI agent. Any other use of AI has to involve a human-in-the-loop - the AI can provide information or suggestions, but a human has to actually take the actions. AI agents can thus go truly off the rails in a way that a human can’t. If I had to bet, I’d guess that some kind of AI-powered Robodebt might be the most plausible case: some government or corporate entity wires up an AI agent to a debt-recovery, healthcare or landlord system, and the agent goes off the rails and hassles, denies coverage, or evicts a bunch of people.
Buiding A.I. into everything isn’t just annoying, it’s a major new vulnerability
“We found this chain of vulnerabilities that allowed us to do the equivalent of the ‘zero click’ for mobile phones, but for AI agents,” he said. First, the attacker sends an innocent-seeming email that contains hidden instructions meant for Copilot. Then, since Copilot scans the user’s emails in the background, Copilot reads the message and follows the prompt—digging into internal files and pulling out sensitive data. Finally, Copilot hides the source of the instructions, so the user can’t trace what happened.
“ One of the ways it’s going to destroy humans, long before there’s a nuclear disaster, is going to be the emotional hollowing-out of people.”
“They’re trying to convince people they can’t do the things they’ve been doing easily for years – to write emails, to write a presentation. Your daughter wants you to make up a bedtime story about puppies – to write that for you.” We will get to the point, she says with a grim laugh, “that you will essentially become just a skin bag of organs and bones, nothing else. You won’t know anything and you will be told repeatedly that you can’t do it, which is the opposite of what life has to offer. Capitulating all kinds of decisions like where to go on vacation, what to wear today, who to date, what to eat. People are already doing this. You won’t have to process grief, because you’ll have uploaded photos and voice messages from your mother who just died, and then she can talk to you via AI video call every day. One of the ways it’s going to destroy humans, long before there’s a nuclear disaster, is going to be the emotional hollowing-out of people.” – Justine Bateman
“Tesla share plunge amid Trump feud wipes $152bn off Elon Musk’s company”
Reminds me of the market reaction when Richard Dawkins blocked me on Twitter.