Profile

npub1uhn7...rerk
AI is a 0.01-quality JPEG. It ingests a staggering amount of data to produce a relatively tiny file that you can decompress with a lot of guesswork on the computer's side. The result is usually mostly OK at a larger scale, but when you try to look at individual pixels, they're the correct color only by pure luck.
What ingests large quantities of data, applies some math, and outputs a much smaller set of data that can still be used to reconstruct what was ingested? Compression algorithms, right? And some of them compress *so much* that the data can only be partially extracted and the rest needs to be "guessed" - like JPEGs. LLMs ingest TB after TB and compress that into a few miserable GB, a ratio of roughly a thousand to one. And that's why the info they produce is like a JPEG with a compression factor of 1,000 or more.
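You can see the analogy for yourself in a few lines of Python. A minimal sketch, assuming Pillow and NumPy are installed; "photo.png" is a hypothetical stand-in for any image you have lying around:

```python
from io import BytesIO

import numpy as np
from PIL import Image

original = Image.open("photo.png").convert("RGB")

# Compress aggressively: quality=1 is roughly the "0.01 quality JPEG".
buffer = BytesIO()
original.save(buffer, format="JPEG", quality=1)
compressed_size = buffer.tell()

# Decompress: the decoder reconstructs ("guesses") detail from what survived.
buffer.seek(0)
reconstructed = Image.open(buffer).convert("RGB")

a = np.asarray(original, dtype=np.int16)
b = np.asarray(reconstructed, dtype=np.int16)

# On average the picture still looks "mostly OK"...
print("mean absolute pixel error:", np.abs(a - b).mean())

# ...but individual pixels are rarely exactly right.
exact_fraction = (a == b).all(axis=-1).mean()
print(f"pixels reproduced exactly: {exact_fraction:.1%}")

print(f"compression ratio vs raw RGB: {a.nbytes / compressed_size:.0f}x")
```

The low mean error next to the near-zero exact-pixel count is the whole point: the reconstruction is plausible in aggregate while almost no individual value is actually correct.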
Following a discussion on Reddit, of all places, here's a poll. Please spread it far and wide because [garbled audio, static, unintelligible noises]. The question is: do you normally listen to and enjoy audiobooks? Also, this is what I mean by "inner monologue" (sorry it's not WP, but that page terrifies me). #askfedi
Meta's response to random bans (and extreme slander, I believe) is "watchagunnadoaboutit". And you know what? They're right, because they're powerful, uncaring, and safe in the knowledge that they'll face no repercussions for getting it wrong. At this point, they're doing what any other hegemonizing swarm would do; it's their nature. It's US who should do something.