Publius Pseudis 3 months ago if you haven't worked out local LLMs yet, you probably should soon. It's not hard.
Publius Pseudis 3 months ago Here is a single command to get you started: curl -fsSL https://ollama.com/install.sh | sh && ollama run gemma3:270m
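To sanity-check the install, ollama run also accepts a one-shot prompt as an argument, so you don't have to enter the interactive chat (the prompt text here is just an example):

    # prompt is just an example; any text works
    ollama run gemma3:270m "Explain in one sentence what a local LLM is"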
Publius Pseudis 3 months ago ↩ replying to Publius Pseudis that model (gemma3:270m) should run at a usable speed on pretty much any CPU/RAM config. It's not a super smart model; browse the model library at https://ollama.com/search to see what else is available. My daily driver is qwen3:4b - it fits in 12GB of VRAM and is good enough for most tasks.
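And if you want to script against a local model instead of chatting: a minimal sketch assuming a default Ollama install, which serves an HTTP API on localhost:11434 (the model name and prompt are illustrative):

    # assumes the default Ollama daemon on localhost:11434; model/prompt are examples
    curl http://localhost:11434/api/generate -d '{
      "model": "qwen3:4b",
      "prompt": "Write a one-line shell command that counts files in a directory.",
      "stream": false
    }'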
Publius Pseudis 3 months ago ↩ replying to Publius Pseudis if you are running SpywareOSes (Mac/Windows), Ollama has native installers for them: https://ollama.com/download