@PayPerQ and
@Maple AI
Although the user prompts input through your systems are private, once a prompt is fed to one of the many available LLMs you offer, those LLMs retain whatever they want, is that right? Or do they not retain the prompts because you're able to configure the API connection for privacy?
In my own case using your services, I assume the LLMs are retaining my prompts, even if only for a short time. Therefore I try to be careful not to include information in my prompts that is "sensitive" and should stay private.