Thread

Replies (3)

I 100% understand where this is coming from, and they are right; however, this is a perfect example of poor prompting, not of poor AI. LLMs are statistical calculators: you feed them bad data, you get bad results. The issue is that people actually believe these tools are human replacements and therefore place incredibly high expectations on the results.
That's exactly what they're like. 'No, I did not tell you to do that. Do not touch that. Do only and exactly what I've told you to do.' Claude says yeah, sure, then proceeds to utterly ignore me. Then argues with me. It's like a savant child.