An interesting and clear protocol for deniable communication! This touches on one of many research questions I've been thinking about (https://splittinginfinity.substack.com/p/important-research-areas).
I think this is a natural fit for language models. If Alice and Bob agree on an LLM and run it at zero temperature, decoding is deterministic, so each of them can independently generate the same fake messages (FP) and thus assemble identical fake conversations without ever transmitting them.
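As a rough sketch of that shared-generation step (not the post's full protocol), here's what it could look like with Hugging Face transformers. The model name and prompt are placeholder assumptions; in practice both parties would pin the exact weights and decoding settings:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # illustrative; any model both parties pin works
PROMPT = "Alice: Hey, how was your weekend?\nBob:"  # assumed agreed-upon seed

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

inputs = tokenizer(PROMPT, return_tensors="pt")

# do_sample=False is greedy decoding (the temperature-0 limit): the argmax
# token is taken at every step, so the output depends only on model + prompt.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,  # silences GPT-2's missing-pad warning
)

# Strip the prompt tokens; what's left is the shared fake message.
fake_message = tokenizer.decode(
    output_ids[0, inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(fake_message)  # Alice and Bob each compute this locally, never sending it
```

One caveat: greedy decoding is deterministic in principle, but different hardware or library versions can occasionally flip a near-tie argmax, so both parties would want to pin versions or agree on some canonicalization step to guarantee bit-exact agreement.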
For context, these fake conversations can be useful for "deniable encryption" (https://en.wikipedia.org/wiki/Deniable_encryption), where users can deny even the existence of a particular encrypted message. Deniable encryption has been proposed as a defense against coercion: if a malicious actor forces you to reveal your secret key, you can hand over a decoy key that decrypts to an innocuous message instead.