Thinking about the filters debate, it dawned on me that it's all a moot point: 80 bytes may be enough to use as a latent space in an autoencoder [1].
For example, you could encode an image (i.e. embed it in the latent space), convert the latent vector to a consensus-valid address, and publish it to the chain. Then you release the decoder half of the autoencoder publicly (say, on nostr), and anyone running the decoder could interpret that image (or any other image encoded with the encoder half).
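To make that concrete, here is a minimal sketch of the autoencoder side of the idea in PyTorch. The layer sizes, 32x32 image resolution, and one-byte-per-dimension quantization are purely illustrative assumptions (and a real encoder/decoder pair would need to be trained on an image corpus); the only fixed constraint is the 80-byte payload:

```python
# Sketch only: an autoencoder whose bottleneck quantizes to exactly 80 bytes.
import torch
import torch.nn as nn

LATENT_BYTES = 80  # the payload budget discussed above

class TinyAutoencoder(nn.Module):
    def __init__(self, image_dim: int = 32 * 32 * 3):
        super().__init__()
        # Encoder compresses the flattened image down to 80 values in [0, 1].
        self.encoder = nn.Sequential(
            nn.Linear(image_dim, 512), nn.ReLU(),
            nn.Linear(512, LATENT_BYTES), nn.Sigmoid(),
        )
        # Decoder reconstructs the image from those 80 values.
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_BYTES, 512), nn.ReLU(),
            nn.Linear(512, image_dim), nn.Sigmoid(),
        )

    def encode_to_bytes(self, image: torch.Tensor) -> bytes:
        # Quantize each latent dimension to one byte -> an 80-byte payload.
        latent = self.encoder(image.flatten(start_dim=1))
        return bytes((latent[0] * 255).round().clamp(0, 255).byte().tolist())

    def decode_from_bytes(self, payload: bytes) -> torch.Tensor:
        # Anyone holding the published decoder can reverse the payload.
        latent = torch.tensor(list(payload), dtype=torch.float32) / 255.0
        return self.decoder(latent.unsqueeze(0))


# Example: a 32x32 RGB image round-trips through the (untrained) 80-byte latent.
model = TinyAutoencoder()
image = torch.rand(1, 3, 32, 32)
payload = model.encode_to_bytes(image)   # 80 bytes, small enough to publish on-chain
assert len(payload) == LATENT_BYTES
reconstruction = model.decode_from_bytes(payload).view(1, 3, 32, 32)
```

Mapping those 80 bytes into a consensus-valid output (and back) is a separate encoding step, left out here.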
That's effectively what happens when inscribing a jpeg. Granted, we use traditional software for rendering/decoding the image, but I don't see much of a distinction.
1. Autoencoder - Wikipedia: https://en.wikipedia.org/wiki/Autoencoder