I kinda want to do a Web of Trust scoring service as an app. So, you can download the entire Nostr graph to the phone and compute all scores yourself. You can then manually update all scores as you need and save on Citrine.
I don't really want to download a whole new app just to see my score, so when you do make it, tell me my score.
You could just see it on Amethyst, but then it requires you to trust another service that compiles that information.
my profile doesn't show any score. It's missing. I'm on the newest amethyst. It's probably low because I seem to have enemies already.
It's not on Amethyst yet
@Pip the WoT guy built this already
As an app with local computation?
Personalized PageRank is a great addition to nostr. But it’s proven to be susceptible to link farm attacks. Try searching for an influencer filtered by PageRank and you’ll see bots, unless you plan to always limit results to the top one result.
Personalized GrapeRank addresses link farm attacks via mutes and kind 56 reports which target spammers that would otherwise make it through. An entire link farm swarm can be chopped off by a handful of targeted reports. Follows are great but they’re not enough.
no because it's hardly doable now, and not doable in the long run.
First, after many memory optimisations, I am still using 4GB for my Redis-based graph.
Can you do it on the phone? Yeah, but not when nostr is 10x what it is now.
Second, you don't gain anything computing the rank yourself if you trust the provider with computing the graph. You might as well use the provider's ranks directly. Computing the graph requires downloading 10+ GB of JSON and doing ~400k signature checks.
Third, counting follow lists alone, the graph needs to be updated 18k times every day to stay current, which is expensive and changes the ranks.
These are the numbers for the "whole" graph. A number of people might think that you don't need the whole graph, you just need 2 hops (follows of follows), but I argue that the experience is going to suffer dramatically. In every social media under the sun you can search, at least by user name, and find the person. Imagine saying that in Nostr you can't because your locally computed social graph (which is draining your battery) isn't big enough. Like wtf.
WoT score, in place of pronouns section.
Thank you for your attention to this matter.
💜🫂
I am not sure if I am going to do it inside of Amethyst. The app is becoming too big.
Are you collecting anon data on what users use in the app most?
Simplify.
Sometimes a Swiss army knife with too many tools gets heavy to carry around if most of the tools aren't used.
No, we don't collect anything
Not all data collection is bad.
🫂💜
True, but I don't really want to. Our goal is to make Amethyst work without any servers.
Opt-in On-Device Usage Reports?
Data collection (even anonymized) is one of the most effective tools for an app's longevity and development in the right direction!
Hard to build what people want when you don't know what it is and only a few speak up!
💜🫂
Only if we publish a NIP that other clients can also follow and the information is public
Motivation:
Client and relay developers need actionable data to understand which features are used, how reliable their software is, and where to focus development efforts. Collecting this data in a public and transparent manner ensures users remain in control and the community can audit the information being gathered.
Bullish on GrapeRank.
The best one yet
So I did make this thing. It works pretty well … but also is kinda broken. You see, It never got a “Nostr side” cache … not yet at least. So we’re due for that.
This could just run inside a local relay 👀

GitHub
GitHub - Pretty-Good-Freedom-Tech/graperank-nodejs
Contribute to Pretty-Good-Freedom-Tech/graperank-nodejs development by creating an account on GitHub.
@ManiMe beat you to it. Doesn't have the specific feature of Citrine, but he's already got the web of trust. And better yet, trust isn't binary in his system. If y'all aren't already talkin', you should be.
I know. He is using my NIP. :)
Not yet , lol. But we’re talking about it. 👍🏼
Arent you working with David? He is using it.
The library I built is … actually modular and extensible, ready to be integrated into clients and relays. It’s a different project altogether from brainstorm.
While I never got around to integrating NIP-85, it does have an overall modular architecture. It would be pretty straightforward to add additional “output” module and then a “trust assertions” plugin for that.

My Brainstorm 🧠 ⚡️ instance uses its own custom coded GrapeRank calculator. ManiMe and I were working on integrating his above repo earlier this year; we didn't quite complete the process.
And yup, Brainstorm uses Vitor’s NIP to export scores.
ManiMe your GrapeRank calculator repo is used by grapevine.my, is it not?
Part of our goal when ManiMe made those repos was to experiment with separating the GrapeRank algo into modules. In particular, separation of the interpretation module (ingest raw data like kind 3 or 1984 events) from the calculation module (mostly weighted averages). Still needs more R&D. There will by design be lots of distinct interpretation modules. In principle a single calculation engine should have wide applicability, but I’m not sure yet to what extent different teams may want to fork and customize the calculation engine. My calculator module has a few small tweaks that I think aren’t in ManiMe’s calculator.
Maybe issue badges for who qualifies as a human worthy of attention based on whatever algorithm, then client can just look for these badges from your pubkey, or some pubkey created just for this.
By default you could trust badges from @david or @Pip the WoT guy or @Vitor Pamplona but a system like that would allow newcomers to offer different ways to rank that are better or not, or to just provide more up-to-date scores.
Badges become way too heavy to add everybody on them. I am thinking something similar, but with bloom filters.
What do you mean? It's a single badge for each profile. Or you mean it's too heavy to fetch a badge for every profile an app encounters?
Yeah, maybe.
In that case would be nice to have a standard for publishing lists of people as bloom filters, such that they are interoperable.
Yeah, that is still my dream. One way to save a bloom filter that anyone can recode and understand.

Daniel Lemire's blog
Xor Filters: Faster and Smaller Than Bloom Filters
In software, you frequently need to check whether some objects is in a set. For example, you might have a list of forbidden Web addresses. As someo...
People can use many hash types, rounds, sizes and so on. We need to specify it.
@npub1g53m...drvk you were working with bloom filters at one point right? Are you using them in Iris? Any thoughts on this discussion?
I have been doing:
<size_uint>:<rounds_uint>:<base64 of the bytearray>:<salt_uint>
So,
100:10:AKiEIEQKALgRACEABA==:3
Means 100 bits, 10 rounds and the salt is 3
Then we can force in the event kind to always use MurMur3 for 32 bits (the best hash function).
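That serialization is simple enough to sketch end-to-end. Here's a minimal Python sketch, with MurmurHash3 (32-bit) implemented inline to stay dependency-free; note that the per-round seed derivation (salt + round index) is my assumption, since the thread doesn't pin down how each round is seeded:

```python
import base64


def murmur3_32(data: bytes, seed: int = 0) -> int:
    """Standard 32-bit MurmurHash3."""
    c1, c2 = 0xCC9E2D51, 0x1B873593
    h = seed & 0xFFFFFFFF
    length = len(data)
    rounded = length - (length & 3)
    for i in range(0, rounded, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xFFFFFFFF
        h = (h * 5 + 0xE6546B64) & 0xFFFFFFFF
    k = 0
    tail = data[rounded:]
    if len(tail) >= 3: k ^= tail[2] << 16
    if len(tail) >= 2: k ^= tail[1] << 8
    if len(tail) >= 1:
        k ^= tail[0]
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
    h ^= length
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & 0xFFFFFFFF
    h ^= h >> 16
    return h


class BloomFilter:
    """Bloom filter using the <size>:<rounds>:<base64>:<salt> wire format."""

    def __init__(self, size_bits: int, rounds: int, salt: int, bits: bytearray = None):
        self.size, self.rounds, self.salt = size_bits, rounds, salt
        self.bits = bits if bits is not None else bytearray((size_bits + 7) // 8)

    def _positions(self, item: bytes):
        # Assumption: round r uses seed = salt + r (not specified in the thread).
        for r in range(self.rounds):
            yield murmur3_32(item, seed=self.salt + r) % self.size

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

    def encode(self) -> str:
        return f"{self.size}:{self.rounds}:{base64.b64encode(bytes(self.bits)).decode()}:{self.salt}"

    @classmethod
    def decode(cls, s: str):
        size, rounds, b64, salt = s.split(":")
        return cls(int(size), int(rounds), int(salt), bytearray(base64.b64decode(b64)))
```

Decoding the example string above would give back a 100-bit, 10-round filter with salt 3; the colon is a safe delimiter since it never appears in the base64 alphabet.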
Seems like convincing everyone in nostr to use the same exact specs would be a challenge. What if we come up with a system that doesn’t require everyone to use the same specs?
We declare a Decentralized List (kind 9998 per the custom NIP, linked below), called “Bloom Filter Specs”, and list the requisite parameters as “required” tags (rounds, salt, etc). So if you want to use some particular bloom filter, you declare an item on that list (a kind 9999 event) with your choice of specs and then refer to that event wherever necessary.


NostrHub
NostrHub | Discover and Publish NIPs
Explore official NIPs and publish your own custom NIPs on NostrHub.
Not everyone, just those using the specific kind.
The main issue is that there are thousands of potential hash algorithms to use in bloom filters. If we leave it open, all clients must implement ALL hashing functions just so that things don't break when they suddenly face a hash that they don't support. Which, in the end, means that nobody can declare full support for the NIP because there are always new algos.
Also, there is not really a huge gain from tweaking hashing algorithms for Bloom filters. We just need one that works and everybody will be happy.
Just like we only use SHA256 on nostr events.
The kind 9998 list header declaration could specify the hashing algo. Or we could leave the hashing algo unspecified and recognize that it is not necessary for all clients to support all hashing algos, just like it’s not necessary to support all NIPs. Probably the community will gravitate to one algo organically, unless some devs have strong preferences that are not always aligned.
If getting everyone to agree to all the details is trivial, is there any reason not to go ahead and write up a bloom filter NIP?
Currently, @npub1fvma...szfu imports a whitelist (“global feed minus bots, impersonators, spam and other bad actors”) from Brainstorm via API that can be up to 200k pubkeys. Perhaps that would be a candidate use case.
Have you actually implemented this somewhere? (Production or testing) I’m curious to know what use cases we might expect to see in the wild in the short term if a bloom filter nip were to exist.
Internally yes (I have not saved into events yet). All my event, address and pubkey relay hints are saved as 3 massive bloom filters. So, when the app needs to figure out which relays have given IDs, it checks the filter. Which means that I don't need to save any of the events.
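For illustration, a relay-hint filter along those lines might look like this. This is a hypothetical sketch, not Amethyst's actual code: I'm assuming "relay|id" membership keys and a SHA-256-based filter, and the real implementation may differ on both counts:

```python
import hashlib


class RelayHintFilter:
    """Hypothetical sketch: membership keys are 'relay_url|id' strings, so a
    single filter can answer 'which of my known relays hint at this id?'."""

    def __init__(self, size_bits: int = 1 << 20, rounds: int = 7):
        self.size, self.rounds = size_bits, rounds
        self.bits = bytearray((size_bits + 7) // 8)

    def _positions(self, key: str):
        # Derive each round's bit position from a round-prefixed SHA-256.
        for r in range(self.rounds):
            h = hashlib.sha256(f"{r}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add_hint(self, relay_url: str, item_id: str):
        for p in self._positions(f"{relay_url}|{item_id}"):
            self.bits[p // 8] |= 1 << (p % 8)

    def relays_for(self, item_id: str, known_relays):
        # Returns relays that *probably* have the id. Bloom filters can
        # produce false positives but never false negatives.
        return [r for r in known_relays
                if all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(f"{r}|{item_id}"))]
```

Because false positives are possible, a hit only means the relay is worth querying; a miss is a guaranteed negative, which is what makes this safe as a routing shortcut without storing the events themselves.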
Very interesting. Are any other clients doing this? Would you envision clients sharing filters like this?
I don't think so. Everybody just saves a huge list of relays in their databases.
There are many places clients could share bloom filters. This all started with this idea:
In this case, I proposed sha256 as a hash function so that clients didn't need to code MurMur3, but MurMur is so easy that we can just teach people how to do it.
GitHub
Per-event AUTH keys by vitorpamplona · Pull Request #1497 · nostr-protocol/nips
This adds two special tags to authorize certain keys to download events.
This is similar to NIP-70, but in the opposing direction (read instead of ...
for this size of a set, the WOA scores, about 100k scores, it's very doable to just grab the scores, either with an HTTP API or websocket attestations.
then, if you want to bloom, you can do it on the device or service, in any way you see fit. I did think about it a bit tho, directly serving blooms, but the reality is, the scores might matter more than just a true/false type thing and, it wasn't that much savings.
I’m reading your NIP-76. It only takes 100 bits to handle 10 million keys without any false positives?? Wow. Very cool 🤯
They do, and I think individual scores will always be there.
But downloading 100K individual scores takes a while and uses a lot of data and disk space on the phone. Having ways to minimize that usage while providing some rough level of trust enables some light clients to exist.
For instance, a user search or tagging could use NIP-50 to download all the shit on relays and then filter by local bloom filters to know which users are real ones. If bloom filters are not available, then the app needs another round trip to download the individual scores of each key and discard them all when the user closes the search screen.
👀🐳
I am not sure if that math is still good. This site can give you a better idea:
It's all about your probability
Bloom filter calculator
Calculate the optimal size for your bloom filter, see how many items a given filter can hold, or just admire the curvy graphs. Also borrow my MIT ...
the key is really, yes, if you can fit it into the relay, like you mentioned with citrine.. but, if it's too big, well, yeah.
I think that math was wrong. The 10,000,000 keys was not the number of keys inside the filter (which for NIP-76 would be 2-3 keys on average). But relays would have to check that filter against 10,000,000 + keys that can connect to them. The false positives claim was based on testing 10,000,000 keys against a simple filter like that.
Sounds like there will be lots of instances where a WoT Service Provider would want to deliver a bloom filter instead of a big list.
Big lists cause several problems:
1. Unwieldy to transmit by API; even just a slight delay could result in bad UX, depending on the use case
2. Won’t fit in a single event due to size limits
3. Slows down processing on the recipient's side, whatever the list is being used for.
Any rule of thumb estimates we should keep in the back of our minds as to how big a list of pubkeys or event ids should be before we should think about delivering a bloom filter instead?
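As a very rough rule of thumb you can compare the two encodings directly. A sketch, assuming 64 hex characters per pubkey plus ~3 bytes of JSON overhead, and the textbook optimal bloom sizing m = -n·ln(p)/(ln 2)²:

```python
import math


def plain_list_bytes(n_pubkeys: int) -> int:
    # 64 hex characters per pubkey plus ~3 bytes of quotes/comma overhead
    return n_pubkeys * 67


def bloom_bytes(n_pubkeys: int, fp_rate: float) -> int:
    # Textbook optimal filter size in bits: m = -n * ln(p) / (ln 2)^2
    m_bits = -n_pubkeys * math.log(fp_rate) / (math.log(2) ** 2)
    return math.ceil(m_bits / 8)
```

For the 200k-pubkey whitelist mentioned above, the plain list comes out around 13 MB, while a filter with a one-in-a-million false positive rate is under 1 MB. So the crossover arrives quickly: once a list stops fitting comfortably in a single event, a filter is worth considering.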
Yeah, I suppose 100 bits would be well past all 1’s if we tried to pack in 10^7 pubkeys. If I’m understanding correctly how this works.
So the question we ask: given a certain set of parameters, if we throw X randomly selected pubkeys at it, what are the odds of 1 or more false positives? And for 10 million it’s still pretty tiny.
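That question has a closed-form estimate. A sketch using the standard bloom filter approximation, where m is the filter size in bits, k the number of rounds, n the number of inserted items, and x the number of random lookups:

```python
import math


def false_positive_rate(m_bits: int, k_rounds: int, n_items: int) -> float:
    # Chance that a single lookup of a non-member returns true:
    # p = (1 - e^(-k*n/m))^k
    return (1.0 - math.exp(-k_rounds * n_items / m_bits)) ** k_rounds


def prob_at_least_one_fp(m_bits: int, k_rounds: int, n_items: int, x: int) -> float:
    # Chance that x independent random lookups yield one or more false positives
    p = false_positive_rate(m_bits, k_rounds, n_items)
    return 1.0 - (1.0 - p) ** x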
So I think I misunderstood what you meant by “capable of handling up to” a million keys. It means it would successfully defend against being attacked by one million pubkeys trying to gain access.
Yo, that’s wild! 🤔 So, if we’re tossin’ 10 mil pubkeys into the mix and the odds are still low, what’s the magic number for X that flips the script? 🧐 #CryptoMath #PubkeyMysteries
Wouldn't it be really easy to spam, making the graph huge?
Also couldn't you create an arbitrarily long circular chain which means it could overflow client memory or break the algorithm?
Yep, that's why it has been done in the server these days. But I think phones can run some of it.