I lack the technical expertise to understand the details, but this paper creates a new form of steganography using GPT-4. Previous LLM-based steganography methods required white-box access to a specific model; this is a black-box process.

The researchers use specific prompts to guide the generative AI, plus encryption to keep the messages (seemingly) secure. When someone receives the text, they run the same system to extract the hidden message. So steganographic capabilities can be achieved using only public interfaces, if I understand correctly; there's a rough sketch of the general idea below.
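To make that concrete (this is not the paper's actual algorithm, just a minimal toy sketch of the black-box idea): with a shared key, the sender encrypts the secret, then generates cover text by repeatedly asking the model for a few candidate next words and picking the one indexed by the next secret bit. The receiver replays the same generation process to recover the bits. Here the `candidates()` function is a hypothetical stand-in for real API calls to GPT-4, and the XOR keystream is a placeholder for whatever encryption the paper actually uses.

```python
import hashlib

def keystream(key: str, n: int) -> bytes:
    """Toy stand-in for real encryption: derive n pseudorandom bytes from a shared key."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return out[:n]

def to_bits(data: bytes) -> list[int]:
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def from_bits(bits: list[int]) -> bytes:
    return bytes(sum(bits[i + j] << j for j in range(8)) for i in range(0, len(bits), 8))

def candidates(context: list[str]) -> list[str]:
    """Hypothetical stand-in for a black-box LLM call: returns two distinct
    plausible next words for the context. A real system would query a public API."""
    vocab = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]
    h = int(hashlib.sha256(" ".join(context).encode()).hexdigest(), 16)
    i1 = h % 8
    i2 = (i1 + 1 + (h // 8) % 7) % 8  # guaranteed different from i1
    return [vocab[i1], vocab[i2]]

def encode(message: bytes, key: str, seed: list[str]) -> list[str]:
    # Encrypt, then embed one bit per generated word by choosing among candidates.
    cipher = bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))
    text = list(seed)
    for bit in to_bits(cipher):
        text.append(candidates(text)[bit])
    return text

def decode(text: list[str], key: str, seed: list[str]) -> bytes:
    # Replay the same candidate generation to recover each embedded bit.
    bits, ctx = [], list(seed)
    for word in text[len(seed):]:
        bits.append(candidates(ctx).index(word))
        ctx.append(word)
    cipher = from_bits(bits)
    return bytes(c ^ k for c, k in zip(cipher, keystream(key, len(cipher))))

stego = encode(b"hi", "shared-key", ["the"])
print(" ".join(stego))                        # innocuous-looking word sequence
print(decode(stego, "shared-key", ["the"]))   # b'hi'
```

With a real LLM and good prompting, the cover text reads like ordinary prose rather than a toy word salad, but the mechanics are the same: the stego text carries the secret purely in *which* plausible continuation was chosen, which is why only black-box access is needed.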

Given the AI-in-everything world we’re in, I wonder how this will be locked down.

Generative Text Steganography with Large Language Model – arXiv