I know I'm late to the game, but I just set up #ollama on my server with the Open WebUI front end. It is really cool! I am very impressed with how fast the responses are, and also super impressed with LLaVA's "vision". LLaVA can describe images pretty well!
My only experience with ollama before this was on a 1GB Raspberry Pi clone (the Libre "Le Potato" board), and it was impressive in its own way... capable of running a decent LLM on less than 1GB of RAM and a potato-class CPU. Just very slowly.
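For anyone who wants to try the image-description part, here's roughly what it looks like from the ollama CLI. The image path is just an example; the model pull name `llava` is the one from the ollama library.

```shell
# Pull the LLaVA multimodal model (one-time download)
ollama pull llava

# Ask it to describe a local image by including the file path in the prompt
ollama run llava "Describe this image: ./photo.jpg"
```

With Open WebUI on top, the same thing works by attaching an image to the chat instead of typing a path.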
