Here's a left-side-of-the-bell-curve way to do the Internet Archive "right":
- Create browser extension
- User loads page
- User clicks "archive" button
- Whatever is in user's browser gets signed & published to relays
- Archival event contains URL, timestamp, etc.
- Do OpenTimestamps attestation via NIP-03
- ???
- Profit
I'm sure there are a hundred details I'm glossing over, but because this is user-driven and does all the archiving "on the edge" it would just work, not only in theory but very much so in practice.
The reason the Internet Archive can be blocked is that it is a central thing: when users make an archival request, they don't do the archiving themselves, they send the request to a central server that does the archiving. And that central server can be blocked.
Replies (30)
We absolutely need this
Interesting idea. Would it be enough to make a screenshot of the website, hash it and timestamp it?
DECENTRALIZE EVERYTHING.
This is an urgent case as we live in the last days of "truth". Everything is being manipulated and erased in REAL TIME. Decentralized Internet Archive!
Love the idea. LFG!
Read about it before. Crazy how digital archives are targeted too...but this here is outrageous!
There really need to be swift vigilante-esque public hangings and pikings for the destruction of knowledge. The total control of access to knowledge is on the same demonic wish list as total surveillance.


Ars Technica
Anthropic destroyed millions of print books to build its AI models
Company hired Google's book-scanning chief to cut up and digitize "all the books in the world."
I'm very left side. Can you just get a bot to auto-archive everything?
"Whatever is in user's browser gets signed & published to relays ".
This is the problem. For paywalled content, how can we be sure that there is no beacon stored somewhere in the page (DOM, js, html) that identifies the subscriber?
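A first pass at that problem could strip the most obvious beacon carriers before publishing. This is a sketch only: a real extension would sanitize the live DOM rather than regex over HTML, and no sanitizer can catch every identifier (e.g. ones baked steganographically into text or images).

```javascript
// Naive beacon stripping before a capture is published.
// Illustrative only; not a substitute for proper DOM sanitization.
function stripObviousBeacons(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, '') // inline and remote JS
    .replace(/<iframe[\s\S]*?<\/iframe>/gi, '') // embedded trackers
    .replace(/<img[^>]*width="1"[^>]*>/gi, ''); // 1x1 tracking pixels
}
```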
Let's focus on regular content and cross that bridge if we get there.
The main issue is that the big services are centralized and the self-hosted stuff isn't syndicated.
are paywalling services doing that - and punishing the user for screenshotting etc?
hahahahahahahshahahaha
that's so crazy if so. wow
They're trying everything in their power to make water not wet. 

i would have for sure read that years ago, but a great reminder ty gigi
i knew they were trying to use this on music but i didn't realise they were embedding gotcha code so they can police how the user uses their computer, and, heaven forbid, copies something. fucking hilarious.
hard to imagine why they're dying such a quick death
they're suiciding themselves. making their product shit all because they can't come to terms with the characteristics of water.
i guess we should thank them
talk about failing the ego test
How does the Archive do it?
I don't think the goal has ever been to make data impossible to copy. The goal is most likely to make copying certain data more difficult. DRM has done that, whether you like it or not. The industry wouldn't do it if it didn't work to some degree.
But I also hate DRM and how it works. Totally agree on that.
Futile action by a dying industry.
not a dying industry… the players will just change and the rules of engagement will evolve

That's a good thought. I have an extension I'm working on that bridges the web over to nostr, allowing users to create discussions anywhere on the web using nostr. It seems like an archive function would be a solid addition. If I can get the universal grill box idea solid I will work on the archival concept as well.
All the JavaScript getting ingested? Worried about the privacy part but very interesting.
Calling all vibe coders!
I've been casually vibe coding this since Wednesday. I think it's quite a powerful idea. I have zero experience with making an extension, but it's the first time AI called a project 'seriously impressive' when I threw Gigi's idea in there.
So far I have come up with a few additional features but the spec would be this at a minimum:
- OTS via NIP-03
- Blossom for media
- 3 different types of archiving modes:
  - Forensic Mode: clean server fetch, zero browser involvement = no tampering
  - Verified Mode: dual capture (server + local) + automatic comparison = manipulation detection
  - Personal Mode: exact browser view including logged-in content = your evidence
Still debugging the Blossom integration, and NIP-07 signing from an extension seems tricky. The only caveat is you would need a proxy to run verified + forensic modes, as CORS will block the requests otherwise. Not sure how that would be handled other than hosting a proxy. Once I have a somewhat working version I may just throw all the source code out there, I dunno.
Some test archives I've done on a burner account using this custom Nostr archive explorer here.
Nostrie - Nostr Web Archive Explorer
You say this is left-side but there is nothing on the right-side of the curve since what you describe here is already at maximum complexity. And that archiver extension is a mess.
But sure, it's a good idea, so it must be done.
I made this extension, which is heavily modified from that other one.
Damn, this "Lit" framework for making webgarbage is truly horrible, and this codebase is a mess worse than mine, but I'm glad they have the dirty parts of actually archiving the pages working pretty well.
Then there is a separate site for browsing archives from others.
Please someone test this. If I have to test it again myself I'll cry. I must wait some days now to see if Google approves this extension on their store, meanwhile you can install it manually from the link above.
GitHub
Release whatever · fiatjaf/nostr-web-archiver
let's see if this works
websitestr
Please keep things uploaded to non-Google sites
Google is sold, Google is finished and should be heavily boycotted for what they're doing to us
It works. I'm not sure how to view my own, but my Amber log shows what I think is all the right activities.
I'm not sure what the crying is about. This extension is more cooperative than the scrobbler one.