Thread

Replies (82)

Implement it as a backup. That's what I did to get my meetup to use Signal over TG. Now more comms happen on Signal. Start talking about specific topics only on the Nostr group chat, or make people buy shit from you there. Once you're confident it's stable, start shit-talking Signal, and within a year the people that matter will actually have migrated. But you definitely need to make sure it's stable.
I'll pair the criticism with a proposal. It needs a NIP or something that does two things. First, it tells receiving clients that anything prior should be treated as deleted. Second, it tells compatible relays that the messages no longer have to be stored. That way, if someone sends me unsolicited stuff, or simply a conversation I'm not interested in, I can actually get rid of it properly and declutter. Right now on Matrix I can simply leave the room; the DM experience is great there. Nostr is an incomplete and incompatible mess on this front. It needs fixing, not workarounds.
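To make the shape of that concrete, here's a minimal sketch of what such a "conversation reset" event could look like, in TypeScript. The kind number and the "reset"/"purge" tag names are invented purely for illustration; nothing like this exists in any current NIP.

```typescript
// Hypothetical "conversation reset" event, sketched in TypeScript.
// Kind 14000 and the "reset"/"purge" tags are made up for illustration only.

interface NostrEvent {
  kind: number;
  created_at: number;   // unix seconds
  tags: string[][];
  content: string;
  pubkey: string;       // author, hex
  id?: string;          // filled in on signing
  sig?: string;
}

// Build a reset event for one conversation (identified here by the peer's pubkey).
function buildConversationReset(authorPubkey: string, peerPubkey: string): NostrEvent {
  const now = Math.floor(Date.now() / 1000);
  return {
    kind: 14000,                           // hypothetical kind
    created_at: now,
    pubkey: authorPubkey,
    tags: [
      ["p", peerPubkey],                   // which conversation this applies to
      ["reset", "before", String(now)],    // clients: treat anything earlier as deleted
      ["purge"],                           // compatible relays: storage no longer required
    ],
    content: "",
  };
}
```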
Also NIP17 groups are IMO quite problematic when it comes to gaslighting.
JOE2o
Let's say someone creates a NIP17 groups client that facilitates the kind of purposeful history falsification described in the article below. We'll call that client Gaslighter. With Gaslighter, for every new message you can create multiple different versions to be seen by each person in the group. These message versions can be completely different to each other, or just subtly different. Either way, when you press send each person gets a different message, and each person thinks the message they're seeing is what everyone else is seeing. (To be clear, each person gets the message customised for them and *does not* get the messages customised for everyone else, so for everyone in the group it's just one new message added to the history, even though you just sent a bunch.)

The UI can be very user-friendly, showing all others in the group and, next to each avatar, the message that each will see when you press send. You can choose to apply one message to everyone and then make little edits per person, or just compose them one by one. You can also group people into sub-groups, create one message for one sub-group and one message for another, and so on. For pros, you can create a string of messages that includes both dirty messages (messages that some people will get sent but not others) and clean messages (messages that everyone in the group will get sent). This is to help thwart hash-based gap detection, if such a security feature ever enters the NIP17 spec, though in all likelihood this kind of gap detection will be deemed so unworkable (at least without *some* exposed metadata) that it won't. You can also choose to send a message to everyone except one poor person, or except a few poor people. And many other such devious things.

Either way, with Gaslighter loaded up and a few minutes of posting you can turn any NIP17 "group" that you're in into this mutant thing where everyone has a comically different chat history to everyone else, and nobody knows it, and these chat histories will never re-align. (And it was all on purpose, by you, not the result of missed events.)

Why would you do this? Most likely for fun. Messing with friends' minds. "You guys will never believe what happened to me this morning!" you send to all three others in a group. Then you send a different story to all three at once. The first story is shocking and unlucky, the second is shocking and lucky, and the third is just boring, hardly a story at all. Everyone gets very confused by everyone else's reaction, and eventually you tell them about Gaslighter and everyone goes lol. (That said, after having played around with Gaslighter for a bit, even just having fun with friends, you're probably always going to be on your mental guard when in a NIP17 group.)

But when you start to consider the social-engineering attack surface here, it's not so funny anymore (see the article below for an example). If NIP17 groups take off then at some point some normie user is going to get unfairly scammed in this way. I say unfairly because it's clearly unfair to put it on the normie user to understand that the group chat history can potentially be manipulated to be different for each participant. (Key word, purposefully; not just missing a message here or there but socially engineered by an attacker so that each person has the history that the attacker wants them to have.) This is just not in keeping with how modern users understand group chats to work.
If a normie user does get scammed in this way, you can be pretty sure the first question on his or her mind after being clued in to the scam will be "How was that even possible?" View Article →
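Mechanically, the answer to "how was that even possible?" is the per-recipient fan-out: in NIP17 every member gets their own sealed copy of each message, so nothing forces the plaintexts to match. A rough sketch, with placeholder functions standing in for the real NIP44/NIP59 wrapping and relay publishing:

```typescript
// Sketch of the per-recipient fan-out that makes a "Gaslighter" client possible.
// The wrap/publish functions below are placeholders, not real library calls.

type Pubkey = string;

function sealAndGiftWrapFor(recipient: Pubkey, plaintext: string): { to: Pubkey; blob: string } {
  // Placeholder: a real client would NIP44-encrypt and NIP59 gift-wrap here.
  return { to: recipient, blob: `<encrypted for ${recipient}: ${plaintext}>` };
}

function publish(wrapped: { to: Pubkey; blob: string }): void {
  console.log("publishing", wrapped);
}

// One logical "send", but a different plaintext per member.
// Each member only ever receives the version addressed to them.
function sendCustomised(perRecipient: Map<Pubkey, string>): void {
  for (const [recipient, text] of perRecipient) {
    publish(sealAndGiftWrapFor(recipient, text));
  }
}

sendCustomised(new Map([
  ["alice-pubkey", "Shocking and unlucky version..."],
  ["bob-pubkey",   "Shocking and lucky version..."],
  ["carol-pubkey", "Boring version, hardly a story at all."],
]));
```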
That is security theatre though, if we're being straightforward. First, the user has to go through the UI motions to actually select the message and choose "reply" (swipe right or whatever the client demands). Probably 90% of the time the user is replying to the most recent message in the timeline, and if that's the case it'll be very rare that the user will actually select that message and hit reply. That feels silly. They *might* use the reply option if it's some message a few screens up in the timeline and they want to make clear that they're replying to that one and not the most recent one. But even then many people will just post a new message and let others work things out from context. So already maybe 90% of these gaslighting attacks get through with no chance of reply-based detection.

And even in the rare instance that a user has formed this explicit reply habit, the attacker just has to spread the thing being proposed over a few messages, say three of them clean (sent to all) and only one a gaslit one. The user would have to reply to the gaslit one for the gap to show for others, but the user will have no idea which of the four relevant messages is the gaslit one; in client terms they're all identical.

And then there are all the false positives: people just turn off or ignore the gap warnings since they show up all the time due to standard Nostr hiccups. It's like being in a building with a fire drill every day; after a week nobody assumes it could be a fire. What's the false-positive-to-actual-attack ratio going to be here? And then you need the clients on both sides to implement the reply logic in such a way that there are no interop issues. Maybe some clients are out of sync with each other and missing parents just aren't surfaced. So now maybe 95% of gaslighting attacks don't get detected this way. Or 99%? Anyway, enough that it can fairly be called security theatre.

And then there are the attacks where you strategically don't send a message to a specific person or people, and you do this carefully, several times, over long periods. Very subtle but potentially just as harmful. Those attacks you can't catch out with the reply button at all, even theoretically.

Keeping groups small doesn't help either. Groups of 4 to 5 are actually the ideal size for this type of social engineering. Too large and the attacker risks some random person in the group feeling like something in the conversation flow just isn't right and saying something about it, triggering a 'compare notes' discussion. For small groups whose members *don't* trust each other like old friends, and who are not tech-savvy enough to know not to blindly trust the chat history (and why would they be), it can be a social engineering risk, again depending on the chat context.

The NIP17 narrative is privacy, which people naturally understand as security too, and if that spreads you will get such groups making decisions in what they believe to be a secure space. And you will get those who see this as an opportunity for social engineering. Perhaps the social engineer will be the one to suggest a NIP17 group in the first place, since this is exactly what they'd be looking for: something ostensibly private and secure but in reality very open to manipulation. For Nostr devs who understand what's going on, it's fine. But for normies who have natural expectations about how chat groups work in this day and age it's unfairly insecure.
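For reference, the reply-based check being discussed amounts to roughly this on the receiving side (types and field names simplified; not any client's actual code):

```typescript
// Reply-based gap detection, sketched for a receiving client: if an incoming
// message explicitly replies (via an "e" tag) to a message we never received,
// surface a "possible gap" warning.

interface ChatMessage {
  id: string;
  tags: string[][];   // e.g. [["e", "<parent id>"], ["p", "<pubkey>"]]
  content: string;
}

const seenIds = new Set<string>();

function onIncoming(msg: ChatMessage): void {
  const parentIds = msg.tags.filter(t => t[0] === "e").map(t => t[1]);
  const missing = parentIds.filter(id => !seenIds.has(id));
  if (missing.length > 0) {
    // Could be a gaslit message we were excluded from, or just a relay hiccup;
    // the false positives are exactly the problem raised above.
    console.warn("reply references message(s) we never saw:", missing);
  }
  seenIds.add(msg.id);
}
```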
Also security theatre though, best avoided. A false sense of security is always worse than just being on your guard all the time. First, you have to have some notification for Person B if the most recent message seen by Person A (at the time Person A was composing) isn't seen (yet) in the history of Person B, and these notifications will be flooded with false positives. (If you don't have a notification then the gaslighting attack, which could easily be real-time in nature, goes through.) I'd imagine most people would hate it. Second, the attacker just sends an emoji immediately after the gaslit message and done. Person A's client only does this auto-tag for a new message, but by the time Person A is composing this new message the gaslit one has been buried under a clean one. So the auto-reply checks out. And then if you start going down the multiple-past-message-tags or Merkle-root paths, you're basically attempting to recreate Marmot/MLS from the ground up. Unless you meant some other approach, not background-replying to the most recently seen message as I understood it?
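And the "hidden reply to the last-seen message" variant would be roughly this on the compose side (again simplified and hypothetical):

```typescript
// "Hidden reply to last-seen" sketched on the compose side: every outgoing
// message silently tags whatever the sender has most recently seen.
// Field names and the "last-seen" marker are hypothetical.

interface OutgoingMessage {
  content: string;
  tags: string[][];
}

let lastSeenId: string | null = null;

function onIncoming(id: string): void {
  lastSeenId = id;   // updated as messages arrive and are displayed
}

function compose(text: string): OutgoingMessage {
  const tags: string[][] = [];
  if (lastSeenId !== null) {
    // Receivers who don't have this id in their history could flag a gap...
    tags.push(["e", lastSeenId, "", "last-seen"]);
  }
  return { content: text, tags };
}

// ...but the attacker just buries the gaslit message under a clean one (or an
// emoji) before anyone composes, so the tagged "last seen" id checks out for
// everybody and the divergence goes unnoticed.
```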
Multiple simple things have been suggested already to fix this, but in the end no one saw it as that important for users. But it can be super simple, because users just need to know that one other user is gaslighting everyone and kick him out. Tagging all IDs at least once in the conversation on new DMs can make a complete record of what the user has seen. Bloom filters of messages in the past month/week can also help and fully solve it. It's not just theater. After all, it is possible to apply the same gaslighting tactics in every single NIP in Nostr. The simple fact that we can create content in the past can be used to gaslight everyone. They don't need the DM spec for it.
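As a rough illustration of the bloom-filter suggestion (filter size, hash count and wire format are all invented here, not a spec): each new message carries a small Bloom filter of the event IDs the sender has seen recently, plus a count, and receivers test their own recent IDs against it.

```typescript
// Sketch of a per-message Bloom filter of recently seen event ids, using Node's crypto.
import { createHash } from "node:crypto";

const BITS = 8192;   // filter size in bits (invented)
const HASHES = 4;    // hash functions per entry (invented)

function positions(id: string): number[] {
  const out: number[] = [];
  for (let i = 0; i < HASHES; i++) {
    const h = createHash("sha256").update(`${i}:${id}`).digest();
    out.push(h.readUInt32BE(0) % BITS);
  }
  return out;
}

function buildFilter(seenIds: string[]): Uint8Array {
  const bits = new Uint8Array(BITS / 8);
  for (const id of seenIds) {
    for (const p of positions(id)) bits[p >> 3] |= 1 << (p & 7);
  }
  return bits;
}

function mightContain(bits: Uint8Array, id: string): boolean {
  return positions(id).every(p => (bits[p >> 3] & (1 << (p & 7))) !== 0);
}

// Receiver side: local ids missing from the sender's filter are messages the
// sender apparently never saw (or is pretending not to have seen); a higher
// advertised count than local matches is a crude hint of messages this
// receiver was excluded from.
function checkAgainst(senderFilter: Uint8Array, senderCount: number, localRecentIds: string[]): void {
  const unknownToSender = localRecentIds.filter(id => !mightContain(senderFilter, id));
  const matched = localRecentIds.length - unknownToSender.length;
  if (unknownToSender.length > 0) console.warn("sender missing:", unknownToSender);
  if (senderCount > matched) console.warn("sender has seen messages we may not have");
}
```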
NIP17 groups are the only space in Nostr where this is code red, both technically and socially. Technically, other Nostr kinds are more transparent, have *some* exposed metadata handles, are not fanned out one by one, etc. But most importantly socially. Few people are going to get conned or socially engineered in kind 1 threads or Discord-style NIP28 chats or whatever else. A closed Signal-style chat group, and one boasting best-in-class privacy, is a totally different risk space. If you're an attacker looking to exploit this Nostr gaslighting quirk, then NIP17 groups are exactly where you'd head.
Again, it's an easy fix if what you're saying truly matters. But something tells me you are not interested in fixing it. Other NIPs are not "more transparent"; you can literally do the same thing in almost all of them with the same level of metadata. I know because I have debugged things like that in almost every NIP Amethyst implements. Right now, my friends gaslight me on Signal and WhatsApp by editing and deleting messages as well, or by playing with push-notification timing attacks. So not even Marmot actually solves this.
Believe me I looked at this top to bottom, and top to bottom again. There is no easy fix. All of these easy fixes you're proposing, none of them come close to being either a fix, or easy, or both. This is a solved problem and you need one of the refined solutions we already have. And to say this is in the same category as editing and deleting messages on Signal is kinda disingenuous.
You were the one calling it "gaslighting". Gaslighting is gaslighting, regardless of the stack. I don't know exactly what you want solved since many of the things you cited in this thread are indeed solvable by the simple solutions I cited. I not only looked at them, but I actually coded and tested some of them. And they do it just fine. They just add more complexity for minimum gains, IMO.
Which solution specifically? The "remember to hit reply" solution doesn't work, clearly. The "hidden reply to the last-seen message at time of compose" doesn't either. Your bloom filter one: if you're saying every new message can include tags for the ID of every message the client/user has received in the whole history of that group (or some length back), with bloom filters on top, I'm not sure where to start (also not sure that's what you're suggesting). What are the other simple solutions?
Again, I don't really know what you are actually trying to solve. If the goal is just to highlight to users that a person in the group is fucking them over in the last messages, all you need is a quick check on the last messages. You can add e tags for as many messages as needed to cover the time window you want. Or you can do bloom filters. There are new kinds that were proposed to mark each message as seen, like a NIP25 reaction would. There are other proposals that "summarize" things, because really only the current messages matter. There are proposals that build a full chain of event IDs from the beginning to the end of the chat. There were summary events discussed in the past. There were even MuSig and ring-signature authentications for everyone in the group. There are dozens of options for the things you mentioned. All of them have pros and cons. The more complex ones solve the most issues related to this, including ones you didn't even mention, but they all come at a cost of complexity.
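The "full chain of event IDs" family of proposals could be sketched like this (tag name and format invented; out-of-order delivery and genuinely missed events are what make the false positives, and thus the complexity cost, hard):

```typescript
// Sketch of a running hash chain over the chat history: each new message
// carries a head hash of the whole prior history as the sender sees it.
// If two members' histories have diverged, their heads stop matching
// on the very next message.
import { createHash } from "node:crypto";

function nextChainHead(prevHead: string, newEventId: string): string {
  return createHash("sha256").update(prevHead).update(newEventId).digest("hex");
}

// Sender: fold the ids of every message seen so far into a single head and
// attach it to the outgoing message as, say, a ["history", head] tag.
function chainHead(historyIdsInOrder: string[]): string {
  return historyIdsInOrder.reduce(nextChainHead, "genesis");
}

// Receiver: recompute the head over the local history and compare.
function historiesAgree(localIds: string[], advertisedHead: string): boolean {
  return chainHead(localIds) === advertisedHead;
}
```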
What I'm trying to solve is this: reduce the chance that normies, thinking this works like Signal, WhatsApp, and everything else they've ever used, get scammed here, by way of a gaslighting attack that is unique to NIP17 groups and, if undertaken with skill, could be very effective. Assuming there comes a time when normies start to use this, which I'm hoping there will be. This kind of gaslighting risk in closed-group ultra-private messengers is not normal, you have to admit that. Name me any other messenger in the category of ultra-private messaging that carries this risk. I'm open to the possibility that one exists out there, and if so I'm curious what the UX is for this, but I haven't found any. For solutions, what I'm saying is that the only solutions you're touching on that actually have wheels are really complex. It's a solved problem, and all existing solutions are really complex too, so that only stands to reason. There are no simple fixes that don't end up as security theatre in some way.
Can you define what you mean by gaslighting? Gaslighting can be a lot of stuff (the word is VERY broad) and it does happen in Signal and WhatsApp too. So, if you want to keep debating this, you will need to define it better. Just "protecting normies from scammers" won't work, because scammers don't need this flaw to scam people. Even if you fix it, they will still get scammed. You need to define what exactly you want solved. I think you are just dismissing actual solutions because they are not perfect. Nothing will ever be perfect. They can only help solve the things that were more appropriately specified. If they are specified, then we can measure how well each solution solves it for the cost of implementing and pick the best one.
Sure, happy to define it.
- Person A, Person B and Person C are all "up to date" (seeing the same most recent message).
- The history of Person A can contain one or more messages that are not in the history of Person B *in any form*: no deleted-message tombstone, nothing, no trace whatsoever. And the reverse. And the same for A-C and B-C.
- Additionally, the history of Person A can be missing messages that appear in the history of Person B, again with no tombstone, no trace, no indication whatsoever that something is missing.
- These "he sees it, she doesn't" messages can be sent simultaneously, so at the same time (and with a malicious client) Person D can send a separate message to Person A, a separate message to Person B, and a separate message to Person C (or send to all but A at the same time).

Also, I'd add that any detection method must not produce false positives, so that it can be explicitly trusted. The above combination of factors enables a level of social engineering that you can't really compare with what an attacker could achieve on Signal, WhatsApp, Telegram, etc. Yes, of course people will get scammed anyway. Scammers love Telegram, and Telegram's server ensures the above can't happen. What I'm arguing is that after getting scammed on Telegram it's fair to say (if cruel) that they have themselves to blame. For this case, if you game-theory it out, I don't think we can say that they fully have themselves to blame. It's a very unique attack vector.
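Restated as code, the definition describes something two members could only ever verify by comparing notes out-of-band; purely illustrative:

```typescript
// Illustration of the definition above: same most recent message ("tip"),
// yet histories that differ with no tombstones or any other trace.

function sameTip(aIds: string[], bIds: string[]): boolean {
  return aIds.length > 0 && aIds[aIds.length - 1] === bIds[bIds.length - 1];
}

function divergence(aIds: string[], bIds: string[]): { onlyA: string[]; onlyB: string[] } {
  const a = new Set(aIds), b = new Set(bIds);
  return {
    onlyA: aIds.filter(id => !b.has(id)),   // messages A has that B never received, in any form
    onlyB: bIds.filter(id => !a.has(id)),   // and vice versa
  };
}

// Both "up to date" (same most recent message)...
const alice = ["m1", "m2", "m3-for-alice", "m4"];
const bob   = ["m1", "m2", "m3-for-bob",   "m4"];
console.log(sameTip(alice, bob));      // true
console.log(divergence(alice, bob));   // { onlyA: ["m3-for-alice"], onlyB: ["m3-for-bob"] }
```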