So if I'm reading this right:

Whisper Leak: A novel side-channel attack on remote language models | Microsoft Security Blog
(If one forgives the sampling limitations (one question, 100 positive responses, etc.), which I'm not sure one should...)
The output of a language model is predictable enough that, even with simple ML models, if you

- use an LLM to generate prompts about a topic x, and prompts not about x,
- record the timestamps at which the LLM streamed back each chunk of tokens for each generated prompt, and
- train the ML model to classify "known to be / not be about x" from those timestamps,

then you can guess whether something is about x with effectively perfect precision.
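For concreteness, here is a minimal sketch of that pipeline in Python with scikit-learn. Everything in it is my own illustration, not the paper's actual setup: it assumes the timing traces are already recorded, and the summary-statistic features and logistic regression stand in for whatever feature extraction and models Microsoft actually used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

def featurize(timestamps):
    """Summary statistics over the gaps between streamed chunks.

    Assumes each trace has at least a few chunks; real traffic analysis
    would likely use richer features (full gap sequences, packet sizes).
    """
    gaps = np.diff(np.asarray(timestamps, dtype=float))
    return np.array([gaps.mean(), gaps.std(), gaps.min(),
                     gaps.max(), np.median(gaps), len(gaps)])

def train_topic_sniffer(traces, labels):
    """traces: one list of chunk-arrival timestamps (seconds) per response.
    labels: 1 if the prompt was about topic x, else 0."""
    X = np.stack([featurize(t) for t in traces])
    y = np.asarray(labels)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("precision:", precision_score(y_test, clf.predict(X_test)))
    return clf
```

If even features this crude can separate the two classes, that only underlines how much the timing channel is leaking.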
Or, if Microsoft's own research holds up, the output of LLMs is not in fact a divinely powerful information oracle, but so stereotyped that you can predict the topic from the timing of the responses alone. The oracle only appears divine because we cannot see any of the other requests, and are led to believe our question is unique and personal.
Or, this is a profound self-own about the use of LLMs for information, period.