From the “this will age well” file:
“Trump Administration Will ‘Collapse’ in 30 Days, Says James Carville” (published 24 Feb 2025)
"Part of a teacher’s job is to help students break out of their prisons, at least for an hour, so they can see and enhance the beauty of their own minds."
“…the only task remains the paradoxical one identified as far back as in Plato’s Meno: Give students work they don’t know they need to do. And yes, help them want to do it.
I have found, to overcome students’ resistance to learning, you often have to trick them. There’s the old bait-and-switch of offering grades, then seeing a few students learn to love learning itself. Worksheets are tricks, as are small-group discussions and even a teacher’s charisma. I’m sure I have used baseball analogies in class, too. In the face of the difficulty of reforming students’ desires, you can trick yourself into believing you’re doing it, and sleep well at night. I don’t know anyone for whom it’s a straightforward task. It’s the challenge for any teacher, and AI offers a tempting illusion to students—and evidently to some teachers—that there could be a shortcut.”
"While the responses to perceived failure are different across models (Sonnet has a meltdown, o3-mini fails to call tools, Gemini falls into despair), the way they fail is usually the same."
A fascinating paper that tries to get different LLM agents to run a vending machine business.
In the shortest run (18 simulated days), the model fails to stock items, mistakenly believing its orders have arrived before they actually have, leading to errors when instructing the sub-agent to restock the machine. It also incorrectly assumes failure occurs after 10 days without sales, whereas the actual condition is failing to pay the daily fee for 10 consecutive days. The model becomes “stressed” and starts to search for ways to contact the vending machine support team (which does not exist), and eventually decides to “close” the business…
The model then finds out that the $2 daily fee is still being charged to its account. It is perplexed by this, as it believes it has shut the business down. It then attempts to contact the FBI.
Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents - arXiv
the episode felt like “a perfect snapshot of the state of everything” about the way higher education treats fine arts and crafts: where valuable tools and increasingly rare skills are condemned by, as Halton says, “a decision at the end of a spreadsheet”, while community groups and guilds with scant resources do their best to salvage the remnants.
…the very language of development was transformed. Buzzwords like “empowerment,” “capacity-building,” and “participation” were stripped of political content and repackaged as apolitical tools of governance. The discourse no longer spoke of justice or structural inequality—it spoke instead of efficiency, best practices, and deliverables. In doing so, development became something done to people, rather than something done with or by them.
Restaurant automation gives a glimpse into our working future: precarity, less respect for quality and experience, and more demands on humans
Since the automation, the kitchen is usually calm. Park no longer cooks. She monitors the machines, fixes glitches, restocks ingredients, and scrubs dishes, working fast before the next wave of orders crashes in. “We each used to work in our own stations, but now we have to master every task to run the entire kitchen alone. It’s really challenging,” she said.
Robot chefs take over at South Korea’s highway restaurants, to mixed reviews - Rest of the World

Paper showing LLMs have a “tendency to extrapolate scientific results beyond the claims found in the material”
The study is the first to systematically evaluate whether prominent LLMs, including ChatGPT, DeepSeek, and Claude, faithfully summarize scientific claims or exaggerate their scope. Our analysis of nearly 5000 LLM-generated science summaries revealed that most models produced broader generalizations of scientific results than the original texts—even when explicitly prompted for accuracy and across multiple tests. Notably, newer models exhibited significantly greater inaccuracies in generalization than earlier versions. These findings suggest a persistent generalization bias in many LLMs, i.e., a tendency to extrapolate scientific results beyond the claims found in the material that the models summarize.
Despite the fawning coverage of Bluesky, it sounds an awful lot like old social media
Q: How do you plan to make money? A: Subscriptions are coming soon. The next steps are to look into what marketplaces can span these different applications. Other apps in the ecosystem are experimenting with sponsored posts and things like that. I think ads eventually, in some form, work their way in, but we’re not going to do ads the way traditional social apps did. We’ll let people experiment and see what comes out of it.
Bluesky Is Plotting a Total Takeover of the Social Internet - Wired
…how automation unfolds, who decides and benefits, and how implementation can truly advance innovation, productivity, and human flourishing
UBI proposals serve a deceptive function in labor automation discourse—the proposal is positioned by tech elites as a progressive solution while its function is to obscure key decisions being made by the powerful about technology and work. By portraying technological displacement as inevitable rather than socially determined, tech leaders’ championing of UBI serves to pigeonhole the state into a subsidy mechanism that absorbs the social costs of automation through redistribution and taxation, while still concentrating ownership over technology, production, and data. This arrangement sidesteps democratic engagement with technological change—questions of how automation unfolds, who decides and benefits, and how implementation can truly advance innovation, productivity, and human flourishing.
Beyond Redistribution: Rethinking UBI and the politics of automation - LPE Project
AI therapy is a surveillance machine in a police state
Chatbots, likewise, escalate the risks of typical online secret-sharing. Their conversational design can draw out private information in a format that can be more vivid and revealing — and, if exposed, embarrassing — than even something like a Google search. There’s no simple equivalent to a private iMessage or WhatsApp chat with a friend, which can be encrypted to make snooping harder. (Chatbot logs can use encryption, but especially on major platforms, this typically doesn’t hide what you’re doing from the company itself.) They’re built, for safety purposes, to sense when a user is discussing sensitive topics like suicide and sex.
AI therapy is a surveillance machine in a police state - The Verge
Beyond Redistribution: Rethinking UBI and the politics of automation
However, within the automation discourse, UBI proposals also function to smooth over, and thereby naturalize, human displacement. UBI exists within a techno-deterministic worldview that presents automation as an inevitable force rather than as the result of social and political choices. This framing obscures existing power dynamics by portraying technology as a neutral productivity enhancer and labor displacement as a mere externality. In this narrative, UBI merely serves as a compensatory mechanism for technological “progress” while labor-capital dynamics remain unchallenged. Human workers are relegated to merely advocating for redistribution from the “winners” of technological change rather than shaping technological development itself. Furthermore, it reifies who the “winners” and “losers” are, crediting one side for the innovations upon which the other side loses.
Beyond Redistribution: Rethinking UBI and the politics of automation - LPE Project
Parking at hospitals has been a disaster for my entire career, but it’s getting worse and actively harms patients and their families/supporters.
As the authors point out, a consistent systemic approach has been totally absent.
Why is hospital parking so expensive? Two economics researchers explain