I think I got nerd sniped into implementing online compaction for CLN's gossip store, mainly by people agitating that we should use a DB for gossip messages. This will speed startup and reduce memory usage, and it should only take a couple of days' work, depending on how many side-cleanups I do. It can be hard to tell a flawed implementation of a good idea from a bad idea. But trust me, this is gonna be great!
The last CLN release was terrible for really large nodes: the bookkeeper migrations were exponentially slow on Postgres, and then the first call to bookkeeper would grind the node to a halt and could even OOM it. Some quickfixes went in, and there will probably be another point release with those. Thanks to those who donated their giant accounting dbs!

But the exciting thing is all the optimization and cleanup work that came from profiling how we handle this much data. Not just reducing the time (those singly-linked lists being abused!), which I've spent the last few weeks on: last night I started measuring *latency* while we're under stress.

The problems:

1. Anyone can spray us with requests: the initial problem was caused by bookkeeper querying a few hundred thousand invoices.
2. A plugin can spew almost infinite logs.
3. xpay intercepts every command in case xpay-handle-pay is set, so it can intercept "pay" commands.

Fixes:

1. Yield after 100 JSON commands in a row: we had yielding logic for autoclean, but that was time-based (250ms), not count-based. We now do both (see the sketch at the end of this post).
2. Fix our I/O loop to rotate through fds and deferred tasks, so yielding is fair.
3. Yield on plugin reply RPC too, which definitely helps.
4. Hooks can now specify an array of filter strings: for the command hook, these are the commands you're interested in, so xpay no longer has to see every command.

As a result, my hand-written sleep1/lightning-cli help test spikes at 44ms, not 110 seconds! My next steps are to use synthetic data (say, 10M records, to allow for future growth) and to formalize latency measurements in our CI. Next release is looking amazing!
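For the curious, here's a minimal sketch of the count-plus-time yield from fix 1. This is illustrative only, not CLN's actual io loop: the queue, names, and limits (MAX_CMDS_PER_TURN, MAX_MS_PER_TURN) are hypothetical stand-ins.

```c
/* Hypothetical sketch: drain queued commands, but return to the main
 * loop after a fixed command count OR a time budget, whichever hits
 * first, so other fds and deferred tasks get a turn. */
#define _POSIX_C_SOURCE 200809L
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define MAX_CMDS_PER_TURN 100  /* count-based yield */
#define MAX_MS_PER_TURN   250  /* time-based yield (the old autoclean rule) */

static long now_ms(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Stand-in for the real command queue: pretend 1000 commands arrived. */
static int remaining = 1000;
static bool queue_pop(void) { return remaining-- > 0; }
static void handle_command(void) { /* parse + dispatch JSON here */ }

/* Returns true if we yielded early; the caller should service other
 * fds and deferred tasks, then call us again. */
static bool service_commands(void)
{
	long start = now_ms();

	for (int n = 0; queue_pop(); n++) {
		handle_command();
		if (n + 1 >= MAX_CMDS_PER_TURN
		    || now_ms() - start >= MAX_MS_PER_TURN)
			return true;	/* be fair: let others run */
	}
	return false;			/* queue drained */
}

int main(void)
{
	int turns = 1;
	while (service_commands())
		turns++;	/* a real loop would poll() other fds here */
	printf("drained after %d turns\n", turns);
	return 0;
}
```

The point of having both budgets is that either one alone fails: a pure time budget lets a flood of cheap commands hog a whole 250ms slice, while a pure count budget lets a handful of expensive ones blow way past it.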