Tangled has hit the bigtime RE:
My #1 use case for Claude Code is to just wild out on an idea while I'm doing something else
Throwing down wild mandates to the product team that I should never scroll down and see a better post on a topic than the one I just did
Don't tread on my robogirlfriend. Is that a good post.
Got the Framework desktop working with 96GB allocated to the GPU. The working software is LM Studio using Vulkan; ROCm crashes the model and ollama crashes GNOME. Benchmarks following:
Want to remark on a sentiment I've heard from the AT dev community, in specific words even, from @Boris, @npub1pl2e...j63z, and @rude1.blacksky.team, which I'll likely botch here: lead with joy, not fear. I think that belief shows up a lot, and it's really affected me for the better!
I can't even explain why I like @leaflet.pub so much but I really do, and now it's open source (on an open source open sourcing tool) and man that is cool @leaflet.pub/leaflet
Personal automod RE:
If I use the BIOS to allocate 96GB of my Framework's memory to the GPU, I can get the 120B-param GPT-OSS to respond very quickly, but within two prompts GNOME crashes due to a failed VRAM allocation. Step 1 is to debug that, but step 2 is to debug why dynamic allocation doesn't work.
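A first step for that kind of debugging could be watching what the amdgpu driver reports for VRAM while prompting the model. This is a minimal sketch assuming the iGPU enumerates as card0 under /sys/class/drm; on other systems the card number may differ.

```shell
# Hypothetical sketch: read amdgpu's reported VRAM totals via sysfs.
# Assumes the iGPU is card0; adjust the path if yours enumerates differently.
for f in mem_info_vram_total mem_info_vram_used; do
  p="/sys/class/drm/card0/device/$f"
  if [ -r "$p" ]; then
    echo "$f: $(cat "$p") bytes"
  else
    echo "$f: not available"
  fi
done
```

Running this in a loop while the model answers would show whether the compositor is being starved of VRAM before the crash.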
If you like clicking on links and looking at websites, try this one. It has text, colors, and even more links to click. bsky.social/about/join