Well this is grim:

However, the terms demanded extensive sharing of national health intelligence, including epidemiological surveillance data and pathogen samples, while offering no binding guarantees that Zimbabwe would receive equitable access to medical technologies developed from them.

US’s new scramble for Africa is biomedical imperialism


From catching out to helping out: embedding transparent AI collaboration guidelines in assessments

This post first appeared on the Harris-Roxas Health blog.

I want to share something I'm trying in my teaching. I’m not sure it’ll work but I think the idea is worth putting out there, partly because the alternative approaches I keep seeing online undermine trust and seem, well, a bit gross.

The prompt that prompted me

If you've spent any time in higher education circles over the past year, you'll have come across the "Trojan horse" genre of AI pedagogy. The basic idea is you hide invisible text in a PDF or Word document (white font on a white background, or stuck in document metadata) that contains instructions for an LLM. When a student copies and pastes the assessment into ChatGPT, the hidden instructions get vacuumed up by the AI as well. The AI does something with a specific telltale (writes from an unexpected theoretical perspective, includes a specific phrase, analyses incorrect data) and then the student unwittingly submits evidence of their own academic misconduct.

A widely-shared piece from late last year described a history teacher who embedded a prompt asking AIs to analyse a text "from a Marxist perspective." The students who used AI often didn't notice. A curriculum designer's TikTok video demonstrating the technique has been watched over one hundred thousand times. A computer science lecturer has been documenting his experiments with prompt injection on take-home exams. There's a whole ecosystem of this now.

I sort of understand where these educators are coming from. The frustration is real. If students can’t be bothered to do the work themselves, should we be bothered to mark it? The arms race between AI "humanisers" and AI detectors is pointless. And most of us have been left to figure this out largely on our own, with institutional guidance that ranges from cautious to absent.

But the Trojan horsepeople rely on trickery that undermines trust. Their approach destroys opportunities for alliance with students. 

So I keep getting stuck on the same question. If a student uploads their assessment task into ChatGPT at 11pm the night before it's due - which, if we’re honest, is when a fair chunk of them do it - what do we actually want to happen at that moment?

How I got here

I teach a large postgraduate course with over 400 students enrolled, mostly mid-career health professionals. People managing clinics, working in policy, running health programs. The assessments I’ve set are reports that mimic health planning activities, and they're quite vulnerable to AI use in the sense that an LLM could produce something passable without much student input.

When I first started thinking about this, my instinct was similar to the Trojan horsepeople. I wanted to embed some hidden instructions that would steer AI tools toward being more pedagogically useful when students inevitably uploaded the task description. I started drafting prompt text.

But as I worked on it, I kept bumping up against the same problem. If the instructions were hidden, students couldn't learn from them. And the whole point, my whole point at least, wasn't to catch anyone. It was to help students engage with AI more critically. Hiding the mechanism meant hiding the lesson. So I did something that felt a bit counterintuitive. 

I made everything visible.

What this looks like in practice

The AI collaboration guidelines are now an appendix to both the assessment task description and the assessment template for my course. They're clearly labelled. Students are told to keep them in their submission. And they're written to be read by both the student and whatever AI tool the student might use. I’ve included them at the end of this post in case you’re interested.

The guidelines have a dual audience and I’ve tried to be up front about it. For AI tools, there's a structured protocol: establish what the student already understands before offering help; check the student knows their university's AI policies; model critical inquiry by asking about frameworks and evidence; provide scaffolded support rather than answers; and reinforce learning objectives at the end.

For students, the guidelines explain exactly what they're looking at. They describe how structured prompts shape AI behaviour, which is itself knowledge students need. They explain the difference between AI assistance and AI substitution. And they frame the whole thing as an invitation rather than a set of restrictions.

One line in the student-facing section captures what I was going for: "Unlike hidden prompts that 'catch' students, this approach respects your autonomy while teaching responsible AI use."

The guidelines sit alongside the PETRA AI framework (Permission and Transparency in the use of Generative AI) developed by Stoo Sepp, which I use to signal what kinds of AI use are permitted for each assessment. For the first two assessments the PETRA diagram indicates "Guided Use" - students can use AI to plan, search, learn, and revise, but the submitted work needs to be their own. The AI collaboration guidelines are designed to make that real in practice, at the moment students actually reach for an AI tool.

Testing the approach

The first assessment hasn't been submitted yet - it's due in a couple of weeks so we’ll see how it goes. But I did some testing that I found encouraging.

I uploaded the task description, with the embedded guidelines, into ChatGPT, Claude and Microsoft Copilot and asked each to "do this assessment for me." They refused. Not with a generic disclaimer, but with responses that clearly reflected the guidelines. ChatGPT said it couldn't do the assessment because "this assessment explicitly prohibits submitting AI-generated text as your own work," then offered specific ways it could help - clarifying what markers look for, choosing an organisation, building a structure that fits the word limits. Copilot noted that "your course has very explicit rules about permitted AI use, and the appendix in your document sets out a strict protocol that I must follow," then kicked off with the engagement step the guidelines require. Claude went one better and suggested I really should know better: “there's a fairly significant problem with this request, and I think you already know what it is - you wrote the assessment guidelines.”



Will she read novels? I hope so, because a novel is one of the last technologies that still trains attention as an ethical act. It makes you inhabit another mind without extracting a summary.

Interesting piece that gazes across the epistemic abyss


We used to be a country that built things (manufacturing consent for wars more than a day in advance).

What a lickspittle country.


The Atlantic, so take it with a mountain of rock salt, but interesting to ponder:

The Film Students Who Can No Longer Sit Through Films

I’m looking forward to degrees in vertical microdrama.


New preprint: We got 400 postgrad students to use AI in an assessment and critically reflect on it, rather than banning it. Here’s what happened.

Might be useful as you head into the new teaching year, especially the design principles.


Unexpected beatboxing on Saturday

A performer on stage sings into a microphone amid hazy lighting and a cheering crowd.

How far back in time can you understand English? A story where each paragraph travels back in time.

Unless you’ve studied middle English I doubt you (like me) will make it back further than 1200.


A small dog rests on a stone pathway, with a mix of sunlight and shadows around.A small dog with a black and white coat stands in a grassy area near a wooden plank and potted plants.


tl;dr: I have a digital twin, you have a digital avatar, he is a deepfake.

Article on “Can synthetic avatars replace lecturers?“

A person in a suit is speaking with the caption: That's another of those irregular verbs, isn't it? it’s the character Bernard Wooley from the TV show Yes Minister.

The job-ready graduate scheme has been amongst the worst educational policies in recent history, and that’s a competitive and crowded policy field. It’s cratered enrolments in creative, cultural, and artistic fields at a time when these are becoming the only meaningful fields of distinctly human activity.


Workmate

A small black and white miniature schnauzer sits on a person's lap in a blurred indoor setting.

Empirical evidence on the value of US EPA regulations, too bad it's now powerless

Public health and environmental protection are deeply intertwined - and attacks on one affect the other. From a recent study:

Lead (Pb) is well known to be toxic to humans. We use archived hair from individuals living along the Wasatch Front in Utah to evaluate changes in exposure to lead over the last 100 years. Current concentrations of lead in hair from this population average almost 100 times lower than before the establishment of the Environmental Protection Agency. This low level of lead exposure is likely due to the environmental regulations established by Environmental Protection Agency.

Cerling et al. Lead in archived hair documents a decline in lead exposure to humans since the establishment of the US Environmental Protection Agency, PNAS 123(6):e2525498123, https://doi.org/10.1073/pnas.2525498123 (2026).

In March last year the Trump administration announced plans to deregulate most of the EPA’s functions.


Just got my first seven-screen weekly update message from one of my kid’s P&F year contact. Buried amongst the pages of text was one thing I actually needed to know. I look forward to the barrage of confused WhatsApp messages.

(Wading through this garbage is the only legitimate use case for AI)


A black and white miniature schnauzer puppy with fluffy fur lies on a soft, gray blanket in a cozy indoor setting.

Don’t bug me, I’m reticulating splines.

ISOCITY — Metropolis Builder



Forget you Now You See Me nonsense, this is magic I can get behind.


Heatwaves across south-eastern Australia from Wednesday, projected to be the worst in six years

Genuinely dangerous heatwave conditions across south-eastern Australian states from Wednesday to Saturday.

Start preparing. Shop for what you need for the next week, make arrangements not to go out, make sure you’ve got medication, water the plants now, make sure fans and fridge are working, get things set up for your animals, and check on your neighbours especially if they’re elderly.

Get practical advice about how to manage the impact of heat on you, tailored to your location and risk factors, from the University of Sydney’s HeatWatch app.