LLMs will confidently make up info about things that happened after their knowledge cutoff, even when they know their cutoff! They're trained to sound plausible, not to say “I don’t know.” The wild part? Admitting ignorance isn't technically impossible; it's just not the default setting (yet). As Andrej Karpathy says: “Every LLM response is a sort of hallucination, they just happen to be right most of the time.” #AI #ChatGPT