Observations from a video going viral about AI/LLM-related tech:
- Many people don't understand that the 'AI'/'NPU' chips in consumer devices, which can barely run tiny models, can't accelerate giant 10+ GB LLMs (see the memory math after this list)
- Many people have no idea that AI models can be run offline / self-hosted (see the local-inference sketch below)
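A rough back-of-envelope sketch of the first point, using assumed example numbers (a 7B-parameter model at common quantization levels vs. a few GB of NPU-addressable memory; every figure here is an illustrative assumption, not a spec for any particular chip):

```python
# Back-of-envelope memory math for LLM inference.
# All numbers below are illustrative assumptions, not vendor specs.

def model_size_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weights-only memory footprint of a model in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 7B-parameter model at common precisions:
for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"7B @ {label}: ~{model_size_gb(7, bpp):.1f} GB of weights")

# Assumed memory budget for a phone/laptop-class NPU (illustrative):
npu_budget_gb = 2.0
print(f"Assumed NPU-addressable budget: ~{npu_budget_gb} GB")
# Even the int4 7B case (~3.5 GB) overflows that budget, and that is
# before counting the KV cache and activations, which grow with
# context length. Hence "AI chips" built for tiny on-device models
# don't help with 10+ GB LLMs.
```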

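On the second point, a minimal sketch of fully offline inference, assuming llama-cpp-python is installed (`pip install llama-cpp-python`) and a GGUF model file has already been downloaded; the file path below is a hypothetical example:

```python
# Minimal offline, self-hosted LLM inference sketch.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.gguf",  # hypothetical local file path
    n_ctx=2048,                        # context window size
)

# Runs entirely on local hardware; no network access is needed
# once the model file is on disk.
out = llm(
    "Q: What does self-hosted inference mean? A:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```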
