I sometimes have a little cheese or a banana before bed and see if I have crazy dreams. They say there's no evidence for it, but I wanna find out.
I had some shit where I wanted to read financial transactions that were being sent to an email inbox as CSV exports (they wouldn't export them with any regularity any other way) and I couldn't get the IMAP shit to work in Go. So I wrote the IMAP part in Python and got that working, told OpenCode with GPT how I wanted it written to the database, and that worked. Then I had it convert the whole thing back into Go. Boom. Now I've got an executable I can jam on a server and it'll just run for potentially years without interruption.
I much prefer deploying Go executables because they just work, never fuck up with shared libraries, never need a new runtime to function. They just keep on chugging. Very low maintenance.
Okay! I updated the hell out of the system, purged dead and junk data, brought the instance size down. It's all working nicely and it's cheaper too.
I brought the instance connection settings and Postgres settings down to what they should be for a machine half the size and it seems to be working. Sweet. I'm gonna downsize the machine now.
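The settings in question were the usual suspects. Illustrative numbers for a machine in the ~4 GB RAM class, not my exact values:

```
# postgresql.conf -- hypothetical sizing for a ~4 GB box
shared_buffers = 1GB             # rule of thumb: ~25% of RAM
effective_cache_size = 3GB       # what the OS page cache will likely hold
work_mem = 16MB                  # per sort/hash, so keep it modest
maintenance_work_mem = 256MB
max_connections = 40             # app-side pool shrunk to match
```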
Before:
```
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda        158G   74G   77G  50% /
```
After:
```
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda        158G   17G  133G  12% /
```
FEELS GOOD MAN.
Now I can downsize once more.
I'm gonna bring my instance down to vacuum the entire database now. BRB.
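The instance has to come down because the full variant takes exclusive locks while it rewrites each table to give the space back to the OS. Roughly this:

```
-- VACUUM FULL rewrites every table to reclaim disk; it takes
-- ACCESS EXCLUSIVE locks, hence the downtime.
VACUUM (FULL, ANALYZE, VERBOSE);
```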
The database pruning continues. I ended up doing this:
```
CREATE TABLE tmp_orphan_activity_ids (id uuid);

INSERT INTO tmp_orphan_activity_ids (id)
SELECT a.id
FROM public.activities a
LEFT JOIN public.objects o ON a.data ->> 'object' = o.data ->> 'id'
LEFT JOIN public.activities a2 ON a.data ->> 'object' = a2.data ->> 'id'
LEFT JOIN public.users u ON a.data ->> 'object' = u.ap_id
WHERE NOT a.local
  AND jsonb_typeof(a.data -> 'object') = 'string'
  AND o.id IS NULL
  AND a2.id IS NULL
  AND u.id IS NULL;
-- INSERT 0 19676859

CREATE OR REPLACE PROCEDURE prune_orphaned_activities_from_table(batch_size integer DEFAULT 5000)
LANGUAGE plpgsql
AS $$
DECLARE
    rows_deleted integer;
BEGIN
    LOOP
        WITH picked AS (
            SELECT id
            FROM public.tmp_orphan_activity_ids
            LIMIT batch_size
        ),
        deleted AS (
            DELETE FROM public.activities a
            USING picked p
            WHERE a.id = p.id
            RETURNING a.id
        )
        DELETE FROM public.tmp_orphan_activity_ids t
        USING deleted d
        WHERE t.id = d.id;

        GET DIAGNOSTICS rows_deleted = ROW_COUNT;
        COMMIT;
        RAISE NOTICE 'deleted % rows this batch', rows_deleted;
        EXIT WHEN rows_deleted = 0;
    END LOOP;
END;
$$;

CALL prune_orphaned_activities_from_table(5000);
```
This cleared out the rest of the orphaned activities that query would find. At last the equivalent query prune_objects runs for that step can actually finish executing within a few minutes. I waited out the rest of the command and now it's actually able to run.
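The pattern here (queue the doomed ids in a side table, then chew through them in committed batches so you never hold one giant transaction) is portable to basically any database. A minimal sketch of the same idea in SQLite, with made-up table names and integer ids purely for illustration:

```python
import sqlite3


def prune_in_batches(conn, batch_size=2):
    """Delete activities whose ids are queued in tmp_orphan_ids,
    batch_size at a time, committing after each batch."""
    total = 0
    while True:
        ids = [r[0] for r in conn.execute(
            "SELECT id FROM tmp_orphan_ids LIMIT ?", (batch_size,))]
        if not ids:
            break  # queue drained, we're done
        qmarks = ",".join("?" * len(ids))
        conn.execute(f"DELETE FROM activities WHERE id IN ({qmarks})", ids)
        conn.execute(f"DELETE FROM tmp_orphan_ids WHERE id IN ({qmarks})", ids)
        conn.commit()  # keep each transaction small, like the COMMIT in the loop above
        total += len(ids)
    return total


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activities (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE tmp_orphan_ids (id INTEGER)")
conn.executemany("INSERT INTO activities (id) VALUES (?)",
                 [(i,) for i in range(10)])
conn.executemany("INSERT INTO tmp_orphan_ids (id) VALUES (?)",
                 [(i,) for i in (1, 3, 5, 7, 9)])
deleted = prune_in_batches(conn, batch_size=2)
remaining = [r[0] for r in conn.execute("SELECT id FROM activities ORDER BY id")]
print(deleted, remaining)  # 5 [0, 2, 4, 6, 8]
```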