I kinda want to do a Web of Trust scoring service as an app. So, you can download the entire Nostr graph to the phone and compute all scores yourself. You can then manually update all scores as you need and save them on Citrine.
@npub1manl...n9tn beat you to it. Doesn't have the specific feature of Citrine, but he's already got the web of trust. And better yet, trust isn't binary in his system. If y'all aren't already talkin', you should be.
I know. He is using my NIP. :)
Not yet, lol. But we're talking about it.
Aren't you working with David? He is using it.
The library I built is… actually modular and extensible, ready to be integrated into clients and relays. It's a different project altogether from Brainstorm.
While I never got around to integrating NIP-85, it does have an overall modular architecture. It would be pretty straightforward to add an additional "output" module and then a "trust assertions" plugin for that.

GitHub: Pretty-Good-Freedom-Tech/graperank-nodejs
My Brainstorm instance uses its own custom-coded GrapeRank calculator. ManiMe and I were working on integrating his above repo earlier this year; we didn't quite complete the process.
And yup, Brainstorm uses Vitor's NIP to export scores.
ManiMe, your GrapeRank calculator repo is used by grapevine.my, is it not?
Part of our goal when ManiMe made those repos was to experiment with separating the GrapeRank algo into modules. In particular, separation of the interpretation module (ingest raw data like kind 3 or 1984 events) from the calculation module (mostly weighted averages). Still needs more R&D. There will by design be lots of distinct interpretation modules. In principle a single calculation engine should have wide applicability, but I'm not sure yet to what extent different teams may want to fork and customize the calculation engine. My calculator module has a few small tweaks that I think aren't in ManiMe's calculator.
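For illustration only, a minimal sketch of what that interpretation/calculation split could look like as TypeScript interfaces; the names (Interpreter, Calculator, Rating, etc.) are hypothetical and not taken from either repo:

```typescript
// Hypothetical module boundaries, not the actual graperank-nodejs API.

// Minimal event shape so the sketch is self-contained.
interface NostrEvent {
  kind: number;
  pubkey: string;
  tags: string[][];
  content: string;
}

// A normalized rating extracted from raw nostr events (kind 3 follows,
// kind 1984 reports, mutes, zaps, ...), expressed as rater -> ratee.
interface Rating {
  rater: string;      // pubkey issuing the rating
  ratee: string;      // pubkey being rated
  score: number;      // e.g. 1.0 for a follow, 0.0 for a report
  confidence: number; // how much weight this rating carries
}

// Interpretation module: turns raw events of one kind into ratings.
interface Interpreter {
  kinds: number[]; // event kinds it consumes
  interpret(events: NostrEvent[]): Rating[];
}

// Calculation module: iterates weighted averages over the ratings
// until scores converge, seeded from the observer's own pubkey.
interface Calculator {
  calculate(observer: string, ratings: Rating[]): Map<string, number>;
}
```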
Maybe issue badges for who qualifies as a human worthy of attention based on whatever algorithm; then clients can just look for these badges from your pubkey, or some pubkey created just for this.
By default you could trust badges from @npub1u5nj...ldq3 or @npub176p7...vgup or @npub1gcxz...nj5z but a system like that would allow newcomers to offer different ways to rank that are better or not, or to just provide more up-to-date scores.
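As a rough sketch of the idea, a WoT provider could publish a NIP-58 badge award (kind 8) for the pubkeys it considers worth attention; the badge identifier and tag values below are made up for illustration:

```typescript
// Hypothetical kind 8 badge award from a WoT provider's pubkey.
// The "a" tag points at the provider's kind 30009 badge definition;
// the "p" tags list the awarded pubkeys. All values are illustrative.
const badgeAward = {
  kind: 8,
  pubkey: "<wot-provider-pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["a", "30009:<wot-provider-pubkey>:human-worthy-of-attention"],
    ["p", "<awarded-pubkey-1>"],
    ["p", "<awarded-pubkey-2>"],
  ],
  content: "",
};
```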
Badges become way too heavy when you add everybody to them. I am thinking something similar, but with bloom filters.
What do you mean? It's a single badge for each profile. Or you mean it's too heavy to fetch a badge for every profile an app encounters?
Yeah, maybe.
In that case would be nice to have a standard for publishing lists of people as bloom filters, such that they are interoperable.
Yeah, that is still my dream. One way to save a bloom filter that anyone can recode and understand.
I've already thought about the bloom filter idea, as well as @hzrd149 and many others.
Imo it's just a "short blanket".
You can use bloom filters to do basic filtering. E.g. hide this event because its author is not allowed, but you can only do it *after* you downloaded the event.
So, where is it useful?
Only where you can't control/specify the authors in the REQ beforehand. This includes:
- filtering spam from DMs
- filtering spam from replies under a post
- filtering spam from notifications
not for search, not for recommendations
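In client code that after-the-fact filtering is just a membership check. A minimal sketch, assuming some BloomFilter object with a has() lookup (the concrete encoding is discussed later in the thread):

```typescript
// Hypothetical BloomFilter with a membership check; the concrete
// wire format is a separate question.
interface BloomFilter {
  has(pubkey: string): boolean;
}

interface NostrEvent {
  pubkey: string;
  content: string;
}

// The filter can only be applied *after* the events are downloaded:
// the REQ already returned everything, we just hide what we don't trust.
function hideUntrusted(replies: NostrEvent[], allowed: BloomFilter): NostrEvent[] {
  return replies.filter((e) => allowed.has(e.pubkey));
}
```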
What's better for filtering imo is bulk ranking, which with RankProfiles takes a few ms and allows you to rank 1000 pubkeys in a single request.
(Oh and btw it's free for up to 100 requests/day per user, which means a total of 100k pubkeys per day per user).
Instead of a simple 0 or 1, in or out, you get a score between 0 and 1, so you can use the reputation in more interesting ways.
e.g. for replies, you could compute a score like
score = pagerank(author) * zapped_amount(from people I follow, idk)
and then filter using a threshold T, dropping anything with
score < T
This also has the benefit of not having to worry about expirations, because yes, reputations change all the time, roughly ~18k times a day at the current scale of nostr. You just request the moment the user clicks on the event to see the replies.
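A sketch of that ranking idea, assuming you already have a pagerank score per author and zap totals from your follows; both lookup functions below are placeholders, not a real API:

```typescript
// Placeholder lookups; in practice these would come from a WoT service
// (e.g. bulk-ranked scores) and from parsing zap receipts.
declare function pagerank(author: string): number;                 // 0..1
declare function zappedAmountFromFollows(eventId: string): number; // sats

interface Reply {
  id: string;
  pubkey: string;
}

// Keep only replies whose combined score clears the threshold T,
// best-scored first.
function rankReplies(replies: Reply[], T: number): Reply[] {
  return replies
    .map((r) => ({ r, score: pagerank(r.pubkey) * zappedAmountFromFollows(r.id) }))
    .filter((x) => x.score >= T)
    .sort((a, b) => b.score - a.score)
    .map((x) => x.r);
}
```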

Daniel Lemire's blog: Xor Filters: Faster and Smaller Than Bloom Filters
In software, you frequently need to check whether some object is in a set. For example, you might have a list of forbidden Web addresses.
People can use many hash types, rounds, sizes and so on. We need to specify it.
@npub1g53m...drvk you were working with bloom filters at one point right? Are you using them in Iris? Any thoughts on this discussion?
I have been doing:
<size_uint>:<rounds_uint>:<base64 of the bytearray>:<salt_uint>
So,
100:10:AKiEIEQKALgRACEABA==:3
Means 100 bits, 10 rounds and the salt is 3
Then we can force the event kind to always use MurMur3 at 32 bits (the best hash function for this).
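A sketch of that encoding, assuming a murmur3_32(data, seed) helper (an implementation is sketched further down in the thread); how exactly the salt and round index seed each hash is my assumption, not part of the proposed format:

```typescript
// Assumed helper: 32-bit MurmurHash3 of the bytes with the given seed.
declare function murmur3_32(data: Uint8Array, seed: number): number;

// Serialization per the proposed format: <size>:<rounds>:<base64 bits>:<salt>
// e.g. "100:10:AKiEIEQKALgRACEABA==:3" = 100 bits, 10 rounds, salt 3.
class BloomFilter {
  constructor(
    public size: number,     // number of bits
    public rounds: number,   // number of hash rounds (k)
    public bits: Uint8Array, // ceil(size / 8) bytes
    public salt: number,
  ) {}

  private positions(key: string): number[] {
    const data = new TextEncoder().encode(key);
    const out: number[] = [];
    for (let i = 0; i < this.rounds; i++) {
      // Assumption: each round is seeded by round index + salt.
      out.push(murmur3_32(data, i + this.salt) % this.size);
    }
    return out;
  }

  add(key: string) {
    for (const p of this.positions(key)) this.bits[p >> 3] |= 1 << (p & 7);
  }

  has(key: string): boolean {
    return this.positions(key).every((p) => (this.bits[p >> 3] & (1 << (p & 7))) !== 0);
  }

  encode(): string {
    const b64 = btoa(String.fromCharCode(...this.bits));
    return `${this.size}:${this.rounds}:${b64}:${this.salt}`;
  }

  static decode(s: string): BloomFilter {
    const [size, rounds, b64, salt] = s.split(":");
    const bits = Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
    return new BloomFilter(Number(size), Number(rounds), bits, Number(salt));
  }
}
```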
Seems like convincing everyone in nostr to use the same exact specs would be a challenge. What if we come up with a system that doesn't require everyone to use the same specs?
We declare a Decentralized List (kind 9998 per the custom NIP, linked below), called "Bloom Filter Specs", and list the requisite parameters as "required" tags (rounds, salt, etc.). So if you want to use some particular bloom filter, you declare an item on that list (a kind 9999 event) with your choice of specs and then refer to that event wherever necessary.


NostrHub | Discover and Publish NIPs
Explore official NIPs and publish your own custom NIPs on NostrHub.
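A rough sketch of what such a list item might look like; the kind numbers come from the post above, but the tag names are entirely illustrative and not taken from the actual NIP draft:

```typescript
// Illustrative only: a kind 9999 "list item" declaring one set of
// bloom filter specs under a kind 9998 "Bloom Filter Specs" list.
// Tag names are guesses for the sake of the example.
const bloomSpecItem = {
  kind: 9999,
  pubkey: "<publisher-pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["a", "9998:<list-owner-pubkey>:bloom-filter-specs"], // parent list
    ["hash", "murmur3-32"],
    ["rounds", "10"],
    ["size", "100"],
    ["salt", "3"],
  ],
  content: "",
};
```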
Not everyone, just those using the specific kind.
The main issue is that there are thousands of potential hash algorithms to use in bloom filters. If we leave it open, all clients must implement ALL hashing functions just so that things don't break when they suddenly face a hash that they don't support. Which, in the end, means that nobody can declare full support for the NIP because there are always new algos.
Also, there is not really a huge gain from tweaking hashing algorithms for Bloom filters. We just need one that works and everybody will be happy.
Just like we only use SHA256 on nostr events.
The kind 9998 list header declaration could specify the hashing algo. Or we could leave the hashing algo unspecified and recognize that it is not necessary for all clients to support all hashing algos, just like it's not necessary to support all NIPs. Probably the community will gravitate to one algo organically, unless some devs have strong preferences that are not always aligned.
If getting everyone to agree to all the details is trivial, is there any reason not to go ahead and write up a bloom filter NIP?
Currently, @relaytools imports a whitelist ("global feed minus bots, impersonators, spam and other bad actors") from Brainstorm via API that can be up to 200k pubkeys. Perhaps that would be a candidate use case.
Have you actually implemented this somewhere? (Production or testing) I'm curious to know what use cases we might expect to see in the wild in the short term if a bloom filter nip were to exist.
Internally yes (I have not saved it into events yet). All my event, address and pubkey relay hints are saved as 3 massive bloom filters. So, when the app needs to figure out which relays have the given IDs, it checks the filters. Which means that I don't need to save any of the events.
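Roughly what that lookup could look like; the structure below (keying each entry as id + relay URL inside one big filter per hint type) is my guess at the description above, not the actual client code:

```typescript
// Assumed structure: one big filter per hint type, where each entry
// is "<id>|<relay url>". Checking a candidate relay is then a single
// membership test, and no event needs to be stored.
interface BloomFilter {
  add(key: string): void;
  has(key: string): boolean;
}

declare const eventRelayHints: BloomFilter; // one of the 3 massive filters

function recordEventHint(eventId: string, relay: string) {
  eventRelayHints.add(`${eventId}|${relay}`);
}

// Which of the relays we know about likely carry this event?
// A false positive only costs one wasted relay query.
function relaysForEvent(eventId: string, knownRelays: string[]): string[] {
  return knownRelays.filter((r) => eventRelayHints.has(`${eventId}|${r}`));
}
```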
Very interesting. Are any other clients doing this? Would you envision clients sharing filters like this?
I don't think so. Everybody just saves a huge list of relays in their databases.
There are many places clients could share bloom filters. This all started with this idea:
In this case, I proposed sha256 as a hash function so that clients didn't need to code MurMur3, but MurMur is so easy that we can just teach people how to do it.
GitHub: Per-event AUTH keys by vitorpamplona · Pull Request #1497 · nostr-protocol/nips
This adds two special tags to authorize certain keys to download events. This is similar to NIP-70, but in the opposing direction (read instead of ...
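Since it keeps coming up that MurMur is easy to implement: a self-contained MurmurHash3 x86 32-bit in TypeScript (the standard algorithm, usable as the murmur3_32 helper assumed in the earlier sketch):

```typescript
// MurmurHash3 x86 32-bit over a byte array, returning an unsigned 32-bit int.
function murmur3_32(data: Uint8Array, seed: number): number {
  const c1 = 0xcc9e2d51;
  const c2 = 0x1b873593;
  let h = seed >>> 0;
  const nblocks = data.length >> 2;

  // Body: process 4-byte blocks (little-endian).
  for (let i = 0; i < nblocks; i++) {
    let k =
      data[i * 4] |
      (data[i * 4 + 1] << 8) |
      (data[i * 4 + 2] << 16) |
      (data[i * 4 + 3] << 24);
    k = Math.imul(k, c1);
    k = (k << 15) | (k >>> 17);
    k = Math.imul(k, c2);
    h ^= k;
    h = (h << 13) | (h >>> 19);
    h = (Math.imul(h, 5) + 0xe6546b64) >>> 0;
  }

  // Tail: remaining 1-3 bytes.
  let k = 0;
  const tail = nblocks * 4;
  switch (data.length & 3) {
    case 3: k ^= data[tail + 2] << 16; // fallthrough
    case 2: k ^= data[tail + 1] << 8;  // fallthrough
    case 1:
      k ^= data[tail];
      k = Math.imul(k, c1);
      k = (k << 15) | (k >>> 17);
      k = Math.imul(k, c2);
      h ^= k;
  }

  // Finalization: mix in the length and avalanche.
  h ^= data.length;
  h ^= h >>> 16;
  h = Math.imul(h, 0x85ebca6b);
  h ^= h >>> 13;
  h = Math.imul(h, 0xc2b2ae35);
  h ^= h >>> 16;
  return h >>> 0;
}
```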
for this size of a set (the WOA scores, about 100k scores), it's very doable to just grab the scores, either with an http api or websocket attestations.
then, if you want to bloom, you can do it on the device or service, in any way you see fit. I did think a bit about directly serving blooms tho, but the reality is the scores might matter more than just a true/false type thing, and it wasn't that much savings.
I'm reading your NIP-76. It only takes 100 bits to handle 10 million keys without any false positives?? Wow. Very cool
They do, and I think individual scores will always be there.
But downloading 100K individual scores takes a while, and uses a lot of data and disk space on the phone. Having ways to minimize that usage while providing some rough level of trust enables some light clients to exist.
For instance, a user search or tagging could use NIP-50 to download all the shit on relays and then filter by local bloom filters to know which users are real ones. If bloom filters are not available, then the app needs another round trip to download the individual scores of each key and discard them all when the user closes the search screen.
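Sketching that flow, with a hypothetical helper for the NIP-50 search REQ and a locally stored bloom filter of "real" users from some WoT provider (both are assumptions, not an existing API):

```typescript
// Hypothetical helpers: searchRelays performs a NIP-50 search REQ and
// returns profile (kind 0) results; realUsers is a bloom filter already
// stored on the device.
interface BloomFilter {
  has(pubkey: string): boolean;
}

interface ProfileEvent {
  pubkey: string;
  content: string;
}

declare function searchRelays(query: string): Promise<ProfileEvent[]>;
declare const realUsers: BloomFilter;

// One round trip: search, then drop anything the local filter rejects.
// Without the filter, a second round trip per pubkey would be needed to
// fetch individual scores, only to be thrown away when the screen closes.
async function searchPeople(query: string): Promise<ProfileEvent[]> {
  const results = await searchRelays(query);
  return results.filter((p) => realUsers.has(p.pubkey));
}
```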
I am not sure if that math is still good. This site can give you a better idea:
It's all about your probability.
Bloom filter calculator
Calculate the optimal size for your bloom filter, see how many items a given filter can hold, or just admire the curvy graphs.
the key is really, yes, if you can fit it into the relay, like you mentioned with citrine.. but, if it's too big, well, yeah.
I think that math was wrong. The 10,000,000 keys was not the number of keys inside the filter (which for NIP-76 would be 2-3 keys on average). But relays would have to check that filter against the 10,000,000+ keys that can connect to them. The false positives claim was based on testing 10,000,000 keys against a simple filter like that.
Sounds like there will be lots of instances where a WoT Service Provider would want to deliver a bloom filter instead of a big list.
Big lists cause several problems:
1. Unwieldy to transmit by API; even just a slight delay could result in bad UX, depending on the use case
2. Wonβt fit in a single event due to size limits
3. Slows down processing when using the list for whatever the recipient is using it for.
Any rule of thumb estimates we should keep in the back of our minds as to how big a list of pubkeys or event ids should be before we should think about delivering a bloom filter instead?
Yeah, I suppose 100 bits would be well past all 1's if we tried to pack in 10^7 pubkeys. If I'm understanding correctly how this works.
So the question we ask is: given a certain set of parameters, if we throw X randomly selected pubkeys at it, what are the odds of 1 or more false positives? And for 10 million it's still pretty tiny.
So I think I misunderstood what you meant by "capable of handling up to" a million keys. It means it would successfully defend against being attacked by one million pubkeys trying to gain access.
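To make the numbers concrete, the standard bloom filter estimate as a quick calculation (m bits, k rounds, n inserted keys, X probes); the specific values in the comments are just the ones mentioned in this thread:

```typescript
// False-positive rate of a bloom filter with m bits, k hash rounds,
// and n inserted keys: p = (1 - e^(-k*n/m))^k.
function falsePositiveRate(m: number, k: number, n: number): number {
  return Math.pow(1 - Math.exp((-k * n) / m), k);
}

// Expected number of false positives when X random keys are tested.
function expectedFalsePositives(m: number, k: number, n: number, X: number): number {
  return X * falsePositiveRate(m, k, n);
}

// With the 100-bit, 10-round filter holding 2-3 keys, probed 10M times:
//   n = 2 -> p ~ 4e-8, roughly 0.4 expected hits over 10M probes;
//   n = 3 -> p ~ 1.4e-6, already a dozen or so expected hits.
// The outcome is very sensitive to how many keys are actually inside.
```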
Yo, that's wild! So, if we're tossin' 10 mil pubkeys into the mix and the odds are still low, what's the magic number for X that flips the script? #CryptoMath #PubkeyMysteries
Wouldn't it be really easy to spam, making the graph huge?
Also, couldn't you create an arbitrarily long circular chain, which could overflow client memory or break the algorithm?
Yep, that's why it has been done on the server these days. But I think phones can run some of it.
