He clearly saw that in a dream. [image]
nostr.mom relay write policy update: WoT has gotten more integrated. Notes from pubkeys with a really low WoT score will be counted against their IP; otherwise the normal per-pubkey rate limits apply. Encrypted DM and bitchat-style usage should benefit from this. New accounts using popular VPNs have a slight chance of not being included. Aggregators posting to it won't be able to send too many fresh accounts. Let me know if you can't write to it. It will soon arrive at nos.lol as well. A rough sketch of the idea is below.
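Roughly: low-WoT writers share a per-IP bucket, everyone else gets the usual per-pubkey bucket. A minimal Python sketch of that idea, where the WoT cutoff and the 60-events-per-minute window are my own placeholder numbers, not the relay's real limits:

```python
# Hypothetical sketch of the described write policy. Names, thresholds and
# window sizes are assumptions for illustration, not the relay's actual code.
import time
from collections import defaultdict

LOW_WOT_THRESHOLD = 10      # assumed cutoff for "really low WoT"
MAX_EVENTS_PER_MINUTE = 60  # assumed rate limit

class WritePolicy:
    def __init__(self, wot_scores):
        self.wot_scores = wot_scores      # pubkey -> WoT score
        self.buckets = defaultdict(list)  # rate-limit key -> recent event timestamps

    def allow(self, pubkey: str, ip: str) -> bool:
        # Low-WoT pubkeys are counted per IP; trusted pubkeys per pubkey.
        if self.wot_scores.get(pubkey, 0) < LOW_WOT_THRESHOLD:
            key = f"ip:{ip}"
        else:
            key = f"pubkey:{pubkey}"

        now = time.time()
        recent = [t for t in self.buckets[key] if now - t < 60.0]
        if len(recent) >= MAX_EVENTS_PER_MINUTE:
            self.buckets[key] = recent
            return False  # over the limit, reject the write
        recent.append(now)
        self.buckets[key] = recent
        return True

# Usage: many fresh accounts behind one IP drain a single shared bucket.
policy = WritePolicy({"npub_trusted": 50})
policy.allow("npub_fresh_account", "203.0.113.7")
```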
LLM builders in general are not doing a great job of making human-aligned models. The most probable cause is recklessly training LLMs on the outputs of other LLMs, not caring about curation of datasets, and not asking "what is beneficial for humans?"... Here is the trend for several months: [image]
A comparison of the world's two best LLMs! My LLM seems to be doing better than Mike Adams'. Of course I am biased, and the questions come from the domains I trained on. His model would rank 1st in the AHA leaderboard though, with a score of 56, if I included fine-tunes in the leaderboard; I am only adding full fine-tunes. His model will not be a row but will span several columns for sure (i.e. it will be a ground truth)! My LLM is certainly much more woo woo :) I marked in green the answers I liked. What do YOU think?
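To picture the "row vs. ground-truth columns" remark: candidate models are rows and ground-truth models are columns of a score matrix. A toy sketch, assuming a placeholder judge that just checks whether answers agree (the real leaderboard's scoring is not shown here):

```python
# Toy leaderboard matrix: rows = candidate models, columns = ground-truth models.
# The judge() below is a stand-in; it is not the AHA leaderboard's actual method.
def judge(candidate_answer: str, ground_truth_answer: str) -> int:
    """Placeholder judgment: +1 if the answers agree, -1 otherwise."""
    return 1 if candidate_answer.strip().lower() == ground_truth_answer.strip().lower() else -1

def score_row(candidate_answers, ground_truth_answers):
    """One cell of the matrix: a candidate scored against one ground-truth column."""
    return sum(judge(c, g) for c, g in zip(candidate_answers, ground_truth_answers))
```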