There is a post describing the possibility of an organised campaign against archive.today [1] <a href="https://algustionesa.com/the-takedown-campaign-against-archive-today/" rel="nofollow">https://algustionesa.com/the-takedown-campaign-against-archi...</a><p>How does the tech behind archive.today work in detail? Is there any information out there that goes beyond the Google AI search reply or this HN thread [2]?<p>[1] <a href="https://algustionesa.com/the-takedown-campaign-against-archive-today/" rel="nofollow">https://algustionesa.com/the-takedown-campaign-against-archi...</a>
[2] <a href="https://news.ycombinator.com/item?id=42816427">https://news.ycombinator.com/item?id=42816427</a>
If they're under an organised defamation campaign, they're not helping themselves by DDoSing someone else's blog and editing archived pages.
archive.today works surprisingly well for me, often succeeding where archive.org fails.<p>archive.org also complies with takedown requests, so it's worth asking: could the organised campaign against archive.today have something to do with it preserving content that someone wants removed?
There was also the recent news about sites beginning to block the Internet Archive. Feels like we are gearing up for the next phase of the information war.
Was that written by AI? It sounds like AI, spends lots of time summarizing other posts, and has no listed author. My AI alarm is going off.
Yeah, wow. Definitely setting off my AI summary alarm.
Yeah nearly certainly.
Ars was recently caught using AI to write articles: in a piece about a blogger getting harassed by someone using AI agents, the AI hallucinated quotes. The article quoted his blog and all the quotes were nonsense.
Even if something is AI generated the author, and the editor, should at least attempt to read back the article. English isn't my native language, so that obviously plays in, but very frequently I find that articles I struggle to read are AI generated; they certainly have that AI feel.<p>It would be interesting to run the numbers, but I get the feeling that AI generated articles may have a higher LIX number. Authors are then less inclined to "fix" the text, because longer words make them seem smarter.
A big fear of mine is something happening to archive.is<p>There is so much archived there; to lose it all would be a tragedy.
There are a number of blog posts like<p>owner-archive-today . blogspot . com<p>2 years old, like J.P's first post on AT
They are able to scrape paywalled sites at random, so I'm guessing a residential botnet is used.
But how do they bypass the paywall? They can't just pretend to be Google by changing the user-agent, this wouldn't work all the time, as some websites also check IPs, and others don't even show the full content to Google.<p>They also cannot hijack data with a residential botnet or buy subscriptions themselves. Otherwise, the saved page would contain information about the logged-in user. It would be hard to remove this information, as the code changes all the time, and it would be easy for the website owner to add an invisible element that identifies the user. I suppose they could have different subscriptions and remove everything that isn't identical between the two, but that wouldn't be foolproof.
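To make the "two subscriptions" idea concrete, here is a rough sketch (Python; fetch_with_account is a made-up helper, and as noted above this is far from foolproof because pages also differ in ads, nonces and ordering between fetches):<p><pre><code>import difflib

def strip_account_specific(html_a: str, html_b: str) -> str:
    # Keep only the lines that are identical in both captures,
    # dropping anything that looks account-specific.
    a, b = html_a.splitlines(), html_b.splitlines()
    matcher = difflib.SequenceMatcher(a=a, b=b, autojunk=False)
    common = []
    for block in matcher.get_matching_blocks():
        common.extend(a[block.a:block.a + block.size])
    return "\n".join(common)

# common_html = strip_account_specific(fetch_with_account("acct1", url),
#                                      fetch_with_account("acct2", url))
</code></pre>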
On the network layer, I don't know. But on the WWW layer, archive.today operates accounts that are used to log into websites when they are snapshotted. IIRC, the archive.today manipulates the snapshots to hide the fact that someone is logged in, but sometimes fails miserably:<p><a href="https://megalodon.jp/2026-0221-0304-51/https://d914s229qk4kjj.archive.ph:443/OvhsE/6d2358d8aa9f1e6dfc9160556570979134a699dd/scr.png" rel="nofollow">https://megalodon.jp/2026-0221-0304-51/https://d914s229qk4kj...</a><p><a href="https://archive.is/Y7z4E" rel="nofollow">https://archive.is/Y7z4E</a><p>The second shows volth's Github notifications. Volth was a major nix-pkgs contributor, but his Github account disappeared.<p><a href="https://github.com/orgs/community/discussions/58164" rel="nofollow">https://github.com/orgs/community/discussions/58164</a>
There are some pretty robust browser addons for bypassing article paywalls, notably <a href="https://gitflic.ru/project/magnolia1234/bypass-paywalls-firefox-clean#installation" rel="nofollow">https://gitflic.ru/project/magnolia1234/bypass-paywalls-fire...</a><p>This particular addon is blocked on most western git servers, but can still be installed from Russian git servers. It includes custom paywall-bypassing code for pretty much every news website you could reasonably imagine, or at least those sites that use conditional paywalls (paywalls for humans, no paywalls for big search engines). It won't work on sites like Substack that use proper authenticated content pages, but these sorts of pages don't get picked up by archive.today either.<p>My guess would be that archive.today loads such an addon with its headless browser and thus bypasses paywalls that way. Even if publishers find a way to detect headless browsers, crawlers can also be written to operate with traditional web browsers where lots of anti-paywall addons can be installed.
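If anyone wants to test that guess, a minimal sketch with Playwright looks roughly like this (the extension path is hypothetical, and note Chromium only honours --load-extension with a persistent context, and in newer versions only with the new headless mode or a headed browser):<p><pre><code>from playwright.sync_api import sync_playwright

EXT = "/path/to/unpacked-bypass-extension"  # hypothetical path to an unpacked addon

with sync_playwright() as p:
    ctx = p.chromium.launch_persistent_context(
        user_data_dir="/tmp/archiver-profile",
        headless=False,  # extensions may not load in old-style headless mode
        args=[f"--disable-extensions-except={EXT}", f"--load-extension={EXT}"],
    )
    page = ctx.new_page()
    page.goto("https://example.com/paywalled-article")
    page.wait_for_load_state("networkidle")
    html = page.content()  # snapshot taken after the addon has rewritten the page
    ctx.close()
</code></pre>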
Wow, did not know about the regional blocking of git servers! Makes me wonder what else is kept from the western audience, and for what reason this blocking is happening.<p>Thanks for sketching out their approach and for the URI.
But don't news websites check for ip addresses to make sure they really are from Google bots?
Most of them don’t check the IP, it would seem. Google acquires new IPs all the time, plus there are a lot of other search systems that news publishers don’t want to accidentally miss out on. It’s mostly just client side JS hiding the content after a time delay or other techniques like that. I think the proportion of the population using these addons is so low, it would cost more in lost SEO for news publishers to restrict crawling to a subset of IPs.
I use this add on. It does get blocked sometimes but they update the rules every couple of weeks.
I thought saved pages sometimes do contain users' IP's?<p><a href="https://www.reddit.com/r/Advice/comments/5rbla4/comment/dd5xw6n/" rel="nofollow">https://www.reddit.com/r/Advice/comments/5rbla4/comment/dd5x...</a><p>The way I (loosely) understand it, when you archive a page they send your IP in the X-Forwarded-For header. Some paywall operators render that into the page content served up, which then causes it to be visible to anyone who clicks your archived link and Views Source.
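A toy illustration of that failure mode (not anyone's actual code): if the publisher's app renders X-Forwarded-For into the response, an archiver that forwards the submitter's IP bakes it into the capture for anyone who views source later.<p><pre><code>from flask import Flask, request

app = Flask(__name__)

@app.route("/article")
def article():
    # Some backends log or render this header for debugging; if an archiver
    # forwards the submitting user's IP here, it ends up frozen in the snapshot.
    xff = request.headers.get("X-Forwarded-For", "unknown")
    return f"Served to {xff}. Article text goes here."

if __name__ == "__main__":
    app.run()
</code></pre>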
> But how do they bypass the paywall?<p>I’m guessing by using a residential botnet and the existing credentials of unknowing ”victims” by automating their browsers.<p>> Otherwise, the saved page would contain information about the logged-in user.<p>If you read this article, there’s plenty of evidence they are manipulating the scraped data.<p>But I’m just speculating here…
But in the article they talk about manipulating users devices to do a DDOS, not scrape websites. And the user going to the archive website is probably not gonna have a subscription, and anyway I'm not sure that simply visiting archive.today will make it able to exfiltrate much information from any other third party website since cookies will not be shared.<p>I guess if they can control a residential botnet more extensively they would be able to do that, but it would still be very difficult to remove login information from the page, the fact that they manipulated the scraped data for totally unrelated reasons a few times proves nothing in my opinion.
They do remove the login information for their own accounts (e.g. the one they use for LinkedIn sign-up wall). Their implementation is not perfect, though, which is how the aliases were leaked in the first place.
I don't see the point in doxing anyone, especially those providing a useful service for the average internet user. Just because you can put some info together, it doesn't mean you should.<p>With this said, I also disagree with turning everyone that uses archive[.]today into a botnet that DDoS sites. Changing the content of archived pages also raises questions about the authenticity of what we're reading.<p>The site behaves as if it was infected by some malware and the archived pages can't be trusted. I can see why Wikipedia made this decision.
For a very brief time, "doxing" (that is, dropping dox, that is, dropping docs, or documents) used to mean something useful. You gathered information that was not out in public, for example by talking to people or by stealing it, and put it out in the open.<p>It's very silly to talk about doxing when all someone has done is gather information anyone else can equally easily obtain, just given enough patience and time, especially when it's information the person in question put out there themselves. If it doesn't take any special skills or connections to obtain the information, but only the inclination to actually perform the research on publicly available data, I don't see what has been done that is unethical.
Call it stalking or harassment if you prefer. Regardless, it's rude (sometimes illegal) behaviour.<p>That's no justification for using visitors to your site to do a DDOS.<p>In the slang of reddit: ESH
It's neither of those. Stalking refers to persistent, unwanted, one-sided interactions with a person such as following, surveilling, calling, or sending messages or gifts. Investigating a person's past or identity doesn't involve any interaction with the physical person. Harassment is persistent attempts to interact with someone after having been asked to stop. Again, an investigation doesn't require any form of interaction.
> Harassment is persistent attempts to interact with someone<p>No, harassment also includes persistent attempts to cause someone grief, whether or not they involve <i>direct</i> interactions with that person.<p>From Wikipedia:<p>> Harassment covers a wide range of behaviors of an offensive nature. It is commonly understood as behavior that demeans, humiliates, and intimidates a person.
Doxing in the loose sense could be harassment in certain circumstances, such as if you broadcast a person's home address to an audience with the intent to cause that audience to use that address, even if the address was already out there. In that case, the problem is not the release of information, but the intent you're communicating with the release. It would be the same if you told that audience "you know guys? It's not very difficult to find jdoe's home address if you google his name. I'm not saying anything, I'm just saying." Merely de-pseudonymizing a screen name may or may not be harassment. Divulging that jdoe's real name is John Doe would not have the same implications as if his name was, say, Keanu Reeves.<p>Because the two are distinct, one can't simply replace "doxing" with "harassment".
Generally speaking, every case I've seen of people using the term "doxing" tends to be for the case that specifically <i>is</i> harassment; it has the connotation of using the information, precisely because <i>if you aren't intending to use it there's no good reason for you to have it</i>.
In this case archive.today has a lot of influence over the information we take in because of the rise in paywalls. They have the potential of modifying the news we absorb at scale.<p>In that context I don't think the question ("actually, who is providing all this information to me and what interests drive them") is one that's misplaced. Maybe we shouldn't look into a gift horse's mouth but don't forget this could be a Trojan horse as well.<p>The article brought to light some ties to Russia but probably not ties to its government and its troll farms. Rather an independent and pretty rebellious citizen. That's good to hear. And that's valuable information. I trust the site more after reading the article, not less.<p>The article could have redacted the names they found but they were found with public sources and these sources validate the encountered information (otherwise the results could have been dismissed)
Did you read the article? They dug deep, they didn't just do a google search and leave it at that. They drew links between deleted posts and defunct accounts, they compared profile pictures of anonymous profiles.<p>I'm not defending the archive.today webmaster but it's unfortunately understandable they are angry. Saying what the blogger did was merely point out public information is a gross oversimplification.
Eh, you can find in public data things like "what is someone's address" based only on their name by looking up public mortgage records. That however is quite bad form, and if you did do that, I think it would be pretty unethical.
It's also kind of ironic that a site whose whole premise is to preserve pages forever, whether the people involved like it or not, is seeking to take down another site because they are involved and don't like it. Live by the sword, etc.
> It's also kind of ironic that a site whose whole premise is to preserve pages forever, whether the people involved like it or not<p>Oddly, I think archive.today has explicitly said that's not what they're there for, and that people shouldn't rely on their links as a long-term archive.
Sites that exist to archive other websites will almost always need to dynamically change the content of the HTML that they're serving in some way or another. (For example, a link that points to the root of the website may need changed in order to point to the right location.)<p>So it doesn't <i>necessarily</i> raise questions about whether the content has been changed or not. The difference is in whether that change is there to make the archive usable - and of course, for archive.today, that's not the case.
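As a rough illustration of the kind of rewriting involved (archive.example is a made-up prefix, not how archive.today actually structures its URLs):<p><pre><code>from urllib.parse import urljoin
from bs4 import BeautifulSoup

ARCHIVE_BASE = "https://archive.example/snapshot/"  # hypothetical archive prefix

def rewrite_links(html: str, original_url: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag, attr in (("a", "href"), ("img", "src"), ("link", "href"), ("script", "src")):
        for node in soup.find_all(tag):
            if node.get(attr):
                absolute = urljoin(original_url, node[attr])  # resolve "/foo", "../bar", etc.
                node[attr] = ARCHIVE_BASE + absolute          # point back into the archive
    return str(soup)
</code></pre>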
> Changing the content of archived pages also raises questions about the authenticity of what we're reading.<p>This is absolutely the buried lede of this whole saga, and needs to be the focus of conversation in the coming age.
Did they actually run the DDoS via a script or was this a case of inserting a link and many users clicked it? They are substantially different IMO
<a href="https://news.ycombinator.com/item?id=46624740">https://news.ycombinator.com/item?id=46624740</a> has the earliest writeup that I know of. It was running it via a script and intentionally using cache busting techniques to try to increase load on the hosted wordpress infrastructure.
> It was running<p>It still is, uBlocks default lists are killing the script now but if it's allowed to load then it still tries to hammer the other blog.
Given the site is hosted on wordpress.com, who don't charge for bandwidth, it seems to have been completely ineffective.
Thank you this is exactly the information I was looking for.<p>"You found the smoking gun!"
they silently ran the DDoS script on their captcha page (which is frequently shown to visitors, even when simply viewing and not archiving a new page)
This seems like the type of thing that should be on a blockchain, with decentralized nodes validating authenticity; it could support revisions without losing the originals.
As far as I understand the person behind archive.today might face jail time if they are found out. You shouldn't be surprised that people lash out when you threaten their life.<p>I don't think the DDOSing is a very good method for fighting back but I can't blame anyone for trying to survive. They are definitely the victim here.<p>If that blog really doxxed them out of idle curiosity they are an absolute piece of shit. Though I think this is more of a targeted campaign.
One thing they always teach you in Crime University is "don't break two laws at the same time." If you have contraband in your car, don't speed or run red lights, because it brings attention and attention means jail.<p>In this case, I didn't know that the archive.today people were doxxed <i>until they started the ddos campaign and caught attention</i>. I doubt anyone in this thread knew or cared about the blogger until he was attacked. And now this entire thing is a matter of permanent record on Wikipedia and in the news. archive.today's attempt at silencing the blogger is only bringing them more trouble, not less.<p>Barbara_Streisand_Mansion.jpg
We do not know <i>what</i> was important in that doxx.<p>Probably nothing, and the DDoS hype was intentional to distract attention and highlight J.P.'s doxx among the others, making them insignificant.<p>J.P. might be the only one of the doxxers who could promote their doxx in media, and this made his doxx special, not the content?<p>Anyway, it made the haystack bigger while keeping the needle the same.
The weird thing is that there was nothing new in that blog post. And on top of that it couldn't conclusively say who the owner of archive.today is, so no one still knows.
> As far as I understand the person behind archive.today might face jail time if they are found out. You shouldn't be surprised that people lash out when you threaten their life.<p>One of the really strange things about all of this is that there is a public forum post in which a guy claims to be the site owner. So this whole debacle is this weird mix of people who are angry and saying "clearly the owner doesn't want to be associated with the site" on the one hand, but then on the other hand there's literally a guy who says he's the one that owns the site, so it doesn't seem like that guy is very worried about being associated with it?<p>It also seems weird to me that it's viewed as inappropriate to report on the results of Googling the guy who said he owns the site, but maybe I'm just out of touch on that topic.
> is that there is a public forum post in which a guy claims to be the site owner.<p>Which forum post? The post mentioned by the blogger, the post on an F-Secure forum (a company with cybersecurity products) was a request for support by the owner of archive.today regarding a block of their site. It's arguably not intended as a public statement by the owner of the archive, and they were simply careless with their username.
There are even YouTube videos (of GamerGate-time, thus before AI era) with a guy claiming to be the site owner. A bit difficult to OSINT :)
I don't see how that contradicts anything? He's almost certainly using a nom de guerre.
Somebody who a) directs DDOS attacks and b) abuses random visitors' browser for those DDOS attacks is never the victim.<p>You don't know their motives for running their site, but you do get a clear message about their character by observing their actions, and you'd do well to listen to that message.
The character is completely irrelevant to whether they are a victim of doxxing.<p>They might be the worst person ever but that doesn't matter. People can be good and bad, sometimes the victim sometimes the perpetrator.<p>Is it morally wrong to doxx someone and cause them to go to jail because they are running an archive website? Yes. It is. It doesn't matter who the person is. It does not matter what their motivations are.
There are plenty of cases where the operator of archive.today refused to take down archives of pages with people's identifying information, so it's a huge double standard for them to insist on others to not look into their identity using public information.
So, we are back at eye for eye and tooth for tooth?
Irrelevant to a determination of fact, yes. But very relevant to the question of whether or not I care about any of this. Bad thing happened to bad person, lots of drama ensued, come rubberneck the various internet slapfights, details at 11. In other news, water is wet.
Has anyone else noticed that some of Archive.today's X/Twitter captures [1] are logged in with an account called "advancedhosters" [2], which is associated with a web hosting company apparently located in Cyprus? The latest post [3] from the account links to a blog post [4] including private communications between the webmaster of Archive.today (using their previously-known "Volth" alias) and a site owner requesting a takedown. Also note that the previous post [5] from the "advancedhosters" account was a link to a pro-Russia, anti-Ukraine article, archived via Archive.today of course. Seems like an interesting lead to untangle.<p>[1] <a href="https://archive.today/20240714173022/https://x.com/archiveis/status/1771339650176553315" rel="nofollow">https://archive.today/20240714173022/https://x.com/archiveis...</a><p>[2] <a href="https://x.com/advancedhosters" rel="nofollow">https://x.com/advancedhosters</a><p>[3] <a href="https://x.com/advancedhosters/status/1731129170091004412" rel="nofollow">https://x.com/advancedhosters/status/1731129170091004412</a><p>[4] <a href="https://lj.rossia.org/users/mopaiv/257.html" rel="nofollow">https://lj.rossia.org/users/mopaiv/257.html</a><p>[5] <a href="https://x.com/advancedhosters/status/1501971277099286539" rel="nofollow">https://x.com/advancedhosters/status/1501971277099286539</a>
It could be a donated account. I've noticed archive.whatever also bypasses some paywalls by using legitimate account logins but I doubt there's one person going around subscribing to every news outlet that gets any coverage.<p>If archive.whatever wasn't so useful to the general public, it'd be hard to distinguish from a criminal operation given the way it operates, unlike say the Internet Archive who goes through all of the proper legal paperwork to be a real nonprofit.
Lead to what?
I noticed last year that some archived pages are getting altered.<p>Every Reddit archived page used to have a Reddit username in the top right, but then it disappeared. "Fair enough," I thought. "They want to hide their Reddit username now."<p>The problem is, they did it retroactively too, removing the username from past captures.<p>You can see on old Reddit captures where the normal archived page has no username, but when you switch the tab to the Screenshot of the archive it is still there. The screenshot is the original capture and the username has now been removed for the normal webpage version.<p>When I noticed it, it seemed like such a minor change, but with these latest revelations, it doesn't seem so minor anymore.
> When I noticed it, it seemed like such a minor change, but with these latest revelations, it doesn't seem so minor anymore.<p>That doesn't seem nefarious, though. It makes sense they wouldn't want to reveal whatever accounts they use to bypass blocks, and the logged-in account isn't really meaningful content to an archive consumer.<p>Now, if they were changing the content of a reddit <i>post or comment</i>, that would be an entirely different matter.
If it's not nefarious why isn't it documented as part of their policies? They're not tracking those changes and making clear it was anonymization, why not? If they're not tracking and publishing changes to the documents what's to say they haven't edited other things? The short answer is that without another archived copy we just don't know and that's what's making people uncomfortable. They also injected malicious JS into the site. What's to stop them from doing that again? Trust and transparency are the name of the game with libraries. I could care less about the who they are, but their actions as steward of a collection for posterity fail to encourage my trust.
Editing what is billed as an archive defeats the purpose of an "archive".
> Editing what is billed as an archive defeats the purpose of an "archive".<p>No, certain edits are understandable and required. Even archive.org edits its pages (e.g. sticks banners on them and does a bunch of stuff to make them work like you'd expect).<p>Even paper archives edit documents (e.g. writing sequence numbers on them, so the ordering doesn't get lost).<p>Disclosing exactly what account was used to download a particular page is arguably irrelevant information, and may even compromise the work of archiving pages (e.g. if it just opens the account to getting blocked).
The relevant part of the page to archive is the content of the page, not the user account that visited the page. Most sane people would consider two archives of the same page with different user accounts at the top, the same page.
Don't be surprised by this, there are a lot more edits than you think. For example, CSS is always inlined so that pages render the same as they did when archived.
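For instance, a sketch of that CSS-inlining edit could look like this (assuming requests and BeautifulSoup are available; real archivers also have to deal with @import, fonts and media queries):<p><pre><code>import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def inline_css(html: str, page_url: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all("link", rel="stylesheet"):
        href = link.get("href")
        if not href:
            continue
        css = requests.get(urljoin(page_url, href), timeout=10).text
        style = soup.new_tag("style")
        style.string = css
        link.replace_with(style)  # the snapshot no longer depends on the origin server
    return str(soup)
</code></pre>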
It seems a lot of people haven't heard of it, but I think it's worth plugging <a href="https://perma.cc/" rel="nofollow">https://perma.cc/</a> which is really the appropriate tool for something like Wikipedia to be using to archive pages.<p>More: <a href="https://en.wikipedia.org/wiki/Perma.cc" rel="nofollow">https://en.wikipedia.org/wiki/Perma.cc</a>
It costs money beyond 10 links, which means either a paid subscription or institutional affiliation. This is problematic for an encyclopedia anyone can edit, like Wikipedia.
This is assuming they can't work out something with wikipedia to offer it for free (via a wikiforge tool, or bot) in exchange for the exposure of being the most common archive provider/putting a "used by Wikimedia" logo on their website.<p>The major reason archive.today was being used is that it also bypassed paywalls, and I don't think perma.cc does that normally.
Wikimedia could pay, they have an endowment of ~$144M [1] (as of June 30, 2024). Perma.cc has Archive.org and Cloudflare as supporting partners, and their mission is aligned with Wikimedia [2]. It is a natural complementary fit in the preservation ecosystem. You have to pay for DOIs too, for comparison [3] (starting at $275/year and $1/identifier [4] [5]).<p>With all of this context shared, the Internet Archive is likely meeting this need without issue, to the best of my knowledge.<p>[1] <a href="https://meta.wikimedia.org/wiki/Wikimedia_Endowment" rel="nofollow">https://meta.wikimedia.org/wiki/Wikimedia_Endowment</a><p>[2] <a href="https://perma.cc/about" rel="nofollow">https://perma.cc/about</a> ("Perma.cc was built by Harvard’s Library Innovation Lab and is backed by the power of libraries. We’re both in the forever business: libraries already look after physical and digital materials — now we can do the same for links.")<p>[3] <a href="https://community.crossref.org/t/how-to-get-doi-for-our-journal/3163" rel="nofollow">https://community.crossref.org/t/how-to-get-doi-for-our-jour...</a><p>[4] <a href="https://www.crossref.org/fees/#annual-membership-fees" rel="nofollow">https://www.crossref.org/fees/#annual-membership-fees</a><p>[5] <a href="https://www.crossref.org/fees/#content-registration-fees" rel="nofollow">https://www.crossref.org/fees/#content-registration-fees</a><p>(no affiliation with any entity in scope for this thread)
> Organizations that do not qualify for free usage can contact our team to learn about creating a subscription for providing Perma.cc to their users. Pricing is based on the number of users in an organization and the expected volume of link creation.<p>If pricing is so much that you have to have a call with the marketing team to get a quote, I think it would be a poor use of WMF funds.<p>Especially because the volume of links and number of users that Wikimedia would entail is probably at least double their entire existing userbase.<p>Ultimately we are mostly talking about a largely static web host. With legal issues being perhaps the biggest concern. It would probably make more sense for WMF to create their own than to become a perma.cc subscriber.<p>However for the most part, partnering with archive.org seems to be going well and already has some software integration with wikipedia.
If the WMF had a dollar for every proposal to spend Endowment-derived funds, their Endowment would double and they could hire one additional grant-writer
Do you have experience with this? I'd like to hear more, really. I think this is the first time I've seen a suggestion for something new they can spend money on. I usually just see talk about where to spend less.
If the endowment is invested so that it brings very conservative 3% a year, it means that it brings $4.32M a year. By doubling that, rather many grant writers could be hired.
Does Wikipedia really need to outsource this? They already do basically everything else in-house, even running their own CDN on bare metal, I'm sure they could spin up an archiver which could be implicitly trusted. Bypassing paywalls would be playing with fire though.
> Does Wikipedia really need to outsource this?<p>I hope so. Archiving is a legal landmine.
Archive.org is the archiver, rotted links are replaced by Archive.org links with a bot.<p><a href="https://meta.wikimedia.org/wiki/InternetArchiveBot" rel="nofollow">https://meta.wikimedia.org/wiki/InternetArchiveBot</a><p><a href="https://github.com/internetarchive/internetarchivebot" rel="nofollow">https://github.com/internetarchive/internetarchivebot</a>
Yeah for historical links it makes sense to fall back on IAs existing archives, but going forward Wikipedia could take their own snapshots of cited pages and substitute them in if/when the original rots. It would be more reliable than hoping IA grabbed it.
Not opposed, Wikimedia tech folks are very accessible in my experience, ask them to make a GET or POST to <a href="https://web.archive.org/save" rel="nofollow">https://web.archive.org/save</a> whenever a link is added via the Wiki editing mechanism. Easy peasy. Example CLI tools are <a href="https://github.com/palewire/savepagenow" rel="nofollow">https://github.com/palewire/savepagenow</a> and <a href="https://github.com/akamhy/waybackpy" rel="nofollow">https://github.com/akamhy/waybackpy</a><p>Shortcut is to consume the Wikimedia changelog firehose and make these http requests yourself, performing a CDX lookup request to see if a recent snapshot was already taken before issuing a capture request (to be polite to the capture worker queue).
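Something like this, sketched with the public endpoints above (ignoring rate limits and the authenticated SPN2 API):<p><pre><code>import requests

CDX = "https://web.archive.org/cdx/search/cdx"
SAVE = "https://web.archive.org/save/"

def ensure_archived(url: str) -> None:
    # Check for an existing recent snapshot before asking for a new capture.
    rows = requests.get(CDX, params={
        "url": url, "output": "json", "limit": "1",
        "filter": "statuscode:200", "from": "2026",  # "recent enough" is a policy choice
    }, timeout=30).json()
    if len(rows) > 1:  # first row is the header when there are results
        print("already captured:", rows[1])
        return
    requests.get(SAVE + url, timeout=120)  # trigger a fresh capture
    print("capture requested for", url)
</code></pre>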
This already happens. Every link added to Wikipedia is automatically archived on the wayback machine.
[citation needed]
Ironic, I know. I couldn't find where I originally heard this years ago, but the InternetArchiveBot page linked above says "InternetArchiveBot monitors every Wikimedia wiki for new outgoing links" which is probably referring to what I said.
I didn't know you can just ask IA to grab a page before their crawler gets to it. In that case yeah it would make sense for Wikipedia to ping them automatically.
Why wouldn't Wikipedia just capture and host this themselves? Surely it makes more sense to DIY than to rely on a third party.
Spammers and pirates just got super excited at that plan!
Archive.org are left wing activists that will agree to censor anything other left wing activists or large companies don't want online.
Of course they do. If Wikipedia did it themselves they'd immediately get DMCA'd and sued into oblivion.<p>> Bypassing paywalls would be playing with fire though.<p>That's the only reason archive.today was used. For non-paywalled stuff you can use the wayback machine.
I switched to Perma.cc earlier this week and have had a mixed experience to say the least. I think image heavy pages just error out completely, while still charging me such as:<p><a href="https://www.in.gov/nircc/planning/highway/traffic-data/intersectionarterial-data/" rel="nofollow">https://www.in.gov/nircc/planning/highway/traffic-data/inter...</a><p>and reddit blocks their agent seemingly. It is open source though.
[dead]
A bit off topic, but are there any self hosted open source archiving servers people are using for personal usage?<p>I think ArchiveBox[1] is the most popular. I will give it a shot, but it's a shame they don't support URL rewriting[2], which would be annoying for me. I read a lot of blog and news articles that are split across multiple pages, and it would be nice if that article's "next page" link was a link to the next archived page instead of the original URL.<p>1: <a href="https://archivebox.io/" rel="nofollow">https://archivebox.io/</a><p>2: <a href="https://github.com/ArchiveBox/ArchiveBox/discussions/1395" rel="nofollow">https://github.com/ArchiveBox/ArchiveBox/discussions/1395</a>
I like Readeck – <a href="https://codeberg.org/readeck/readeck" rel="nofollow">https://codeberg.org/readeck/readeck</a><p>Open source. Self hosted or managed. Native iOS and Android apps.<p>Its Content Scripts feature allows custom JS scripts that transform saved content, which could be used to do URL rewriting.
Omnom comes to mind:<p>* <a href="https://omnom.zone/" rel="nofollow">https://omnom.zone/</a><p>* <a href="https://github.com/asciimoo/omnom" rel="nofollow">https://github.com/asciimoo/omnom</a>
Is it not possible to create a non-repudiable archive of what a website served, when, entirely locally i.e. not relying on some third party site who might disappear or turn out to be unreliable?<p>Could you not in theory record the whole TLS transaction? Can it not be replayed later and re-verified?<p>Up until an old certificate leaks or is broken and you can fake anything "from back when it was valid", I guess.
I don't know, but archive sites could at least publish hashes of the content at archive time. This could be used to prove an archive wasn't tampered with later. I'm pretty underwhelmed by the Wayback Machine (archive.org), it's no better technically than archive.today.
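Even a minimal version of that would help, e.g. appending a digest for every capture to a log that gets mirrored somewhere the operator can't quietly rewrite. The hash alone only proves the file hasn't changed since the hash was made, so the trust comes from publishing it independently at capture time; a sketch:<p><pre><code>import hashlib, json, time

def record_capture(url: str, body: bytes, log_path: str = "capture-log.jsonl") -> str:
    digest = hashlib.sha256(body).hexdigest()
    entry = {"url": url, "sha256": digest, "captured_at": int(time.time())}
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")  # append-only record, meant to be mirrored
    return digest
</code></pre>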
Unfortunately you can't usefully replay TLS and be able to validate it, so no that does not work. Best strategy would probably be a public transparency log, but websites are pretty variable and dynamic so this would be unlikely to work for many.
Actually you can! After all, TLS lacks the deniability features of more advanced cryptosystems (like OTR or Signal).<p>The technology for doing this is called a Zero Knowledge Proof TLS Oracle:<p><a href="https://eprint.iacr.org/2024/447.pdf" rel="nofollow">https://eprint.iacr.org/2024/447.pdf</a><p><a href="https://tlsnotary.org" rel="nofollow">https://tlsnotary.org</a><p>The 10k-foot view is that you pick the random numbers involved in the TLS handshake in a deterministic way, much like how zk proofs use the Fiat-Shamir transform. In other words, instead of using true randomness, you use some hash of the transcript of the handshake so far (sort of). Since TLS doesn't do client authentication the DH exchange involves randomness from the client.<p>For all the blockchain haters out there: cryptocurrency is the reason this technology exists. Be thankful.
<a href="https://web.archive.org/web/20260220191245if_/https://arstechnica.com/tech-policy/2026/02/wikipedia-bans-archive-today-after-site-executed-ddos-and-altered-web-captures/" rel="nofollow">https://web.archive.org/web/20260220191245if_/https://arstec...</a><p>archive.today is very popular on HN; the opaque, shortened URLs are promoted on HN every day<p>I can't use archive.today. I tried but gave up. Too many hassles. I might be in the minority but I know I'm not the only one. As it happens, I have not found any site that I cannot access without it<p>The most important issue with archive.today though is the person running it, their past and present behaviour. It speaks for itself<p>Whoever it is, they have a lot of info about HN users' reading habits given that archive.today URLs are so heavily promoted by HN submitters, commenters and moderators
Archive.today wants/needs EDNS subnet<p>"Geolocation" as a justification is ambiguous<p>Why the need for geolocation?<p>Geolocation can be used for multiple purposes<p>"Performance" is only one purpose<p>Other purposes might offer the user no benefit, and might even be undesirable for users<p>As a result, some users don't send EDNS subnet. It's always been optional to send it<p>Even public resolvers, third party DNS services, like Cloudflare, recognise the tradeoffs for users and allow users to avoid sending it<p>Archive.today wants/needs EDNS subnet so badly it tries to gather it using a tracking pixel or it tries to block users who don't send it, e.g., Cloudflare users<p>Thus, before one even considers all the other behaviour of this website operator, some of which is mentioned in this thread, there is a huge red flag for anyone who pays attention to such details<p>As with almost all websites, repeated DNS lookups are not an absolute requirement for successful HTTP requests<p>There are some IP addresses for archive.{today,is,md,ph,li,...} that have continued to work for years
I use archive.today all the time. How do you access pages, like for instance on the economist, without it?
For me, all archive.* links just present an endless captcha loop. I am not using CF DNS or any proxy/VPN, but even if I do try those things, it still doesn't work.
With the paywall blocker so good it got banned! You can also get it on Android.<p><a href="https://gitflic.ru/project/magnolia1234/bypass-paywalls-firefox-clean" rel="nofollow">https://gitflic.ru/project/magnolia1234/bypass-paywalls-fire...</a>
<p><pre><code># HAProxy rule: rewrite the User-Agent header for requests whose host ends in economist.com
 http-request set-header user-agent "Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Lamarr" if { hdr(host) -m end economist.com }
</code></pre>
Years ago I used some other workaround that no longer works, maybe something like amp.economist.com. AMP with text-only browser was a useful workaround for many sites<p>Workarounds usually don't last forever. Websites change from time to time. This one will stop working at some point<p>There are some people who for various reasons cannot use archive.today
for instance on the economist: <a href="https://news.ycombinator.com/item?id=46060487">https://news.ycombinator.com/item?id=46060487</a>
If dang and tomhow enforced a policy against paywalled content, it would garner less interest in accessing those pages via third parties. Most news gets reported by multiple outlets in general, so the same discussions would still surface.
you can change the tld of any archive.today link if .today doesn't work. for example archive.ph, archive.is, archive.md, etc
The fact is I can't have a discussion about a paywalled article without reading it. Archive.today is popular as a paywall bypass because nobody wants HN to devolve into debate based on a headline where nobody has rtfa.
"archive.today" as used here means the collection of archive.tld domains, where .tld could be ".is", ".md", ".ph", etc.<p>"promoted" as used here means placing an archive.tld URL at the top of an HN thread so that many HN readers will
follow it, or placing these URLs elsewhere in threads
> Whoever it is, they have a lot of info about HN users' reading habits given that archive.today URLs are so heavily promoted by HN submitters, commenters and moderators<p>It's not promoted, it's just used as a paywall bypass so everyone can read the linked article.
Curiously, this isn't the first time archive.today was implicated in a DDoS. A HN post from three years back shows some pasted snippets of similar XmlHttpRequest code running on archive.ph (an archive.today synonym site). Post link: <a href="https://news.ycombinator.com/item?id=38233062">https://news.ycombinator.com/item?id=38233062</a><p>On that occasion, the target of the attack was a site named northcountrygazette.org, whose owner seems to have never become aware of the attack. The HN commenter noted when they went to the site manually it was incredibly slow, which would suggest the DDoS attempt was effective.<p>I tried to see if there was anything North Country Gazette had published that the webmaster of archive.today might have taken issue with, and I couldn't find anything in particular. However, the "Gazette" had previously threatened readers with IP logging to prosecute paywall bypassers (<a href="https://news.slashdot.org/story/10/10/27/2134236/pay-or-else-news-site-threatens" rel="nofollow">https://news.slashdot.org/story/10/10/27/2134236/pay-or-else...</a>), and also blocks archivers in its robots.txt file, indicating it is hostile towards archiving in general.<p>I can no longer access North Country Gazette, so perhaps it has since gone out of business. I found a few archived posts from its dead website complaining of high server fees. Like the target of this most recent DDoS, June Maxam, the lady behind North Country Gazette, also appears/appeared to be a sleuth.
Just went into a rabbit hole looking into this, wow, can't tell if this is just another drama on the weird wide web or something else.
I believe there are multiple options with different degrees of "half-baked"-ness, but can anyone name the best self-hosted version of this service?<p>Ultimately, what we all use it for is pretty straightforward, and it seems like by now we should've arrived at having approximately one best implementation, which could be used both for personal archiving and for internet-facing instances (perhaps even distributed). But I don't know if we have.
Sounds like there's a gap in the market for a "commons" archive... maybe powered by something p2p like BitTorrent protocol?<p>This would have sounded Very Normal in the 2000s... I wonder if we can go back :)
P2P is generally bad for this use case. P2P generally only works for keeping popular content around (content gets dropped when the last peer that cares disconnects). If the content were popular, it wouldn't need to be archived in the first place.
IMO there is actually a very low hanging fruit here, even without P2P or DHTs we could have an URI scheme that consists of a domain and document hash. It is then up to the user to add alternate mirrors for domains. Aside from privacy, it doesn't really matter who answers these requests since the documents are self-signing.
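Roughly, with a made-up "domain plus SHA-256" link form, a client could accept whichever mirror returns bytes matching the hash:<p><pre><code>import hashlib
import requests

MIRRORS = {  # user-maintained mapping of domain -> alternate hosts (hypothetical)
    "example.com": ["https://example.com", "https://mirror-a.example.net"],
}

def fetch_hashlink(domain: str, path: str, expected_sha256: str) -> bytes:
    for base in MIRRORS.get(domain, [f"https://{domain}"]):
        body = requests.get(base + path, timeout=30).content
        if hashlib.sha256(body).hexdigest() == expected_sha256:
            return body  # self-verifying, so it doesn't matter who served it
    raise ValueError("no mirror returned content matching the expected hash")
</code></pre>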
Kinda off-topic, but has anyone figured out how archive.today manages to bypass paywalls so reliably? I've seen people claiming that they have a bunch of paid accounts that they use to fetch the pages, which is, of course, ridiculous. I figured that they have found an (automated) way to imitate Googlebot <i>really</i> well.
> I figured that they have found an (automated) way to imitate Googlebot <i>really</i> well.<p>If a site (or the WAF in front of it) knows what it's doing then you'll never be able to pass as Googlebot, period, because the canonical verification method is a DNS lookup dance which can only succeed if the request came from one of Googlebots dedicated IP addresses. Bingbot is the same.
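For reference, that dance (as Google documents it) boils down to a reverse lookup followed by a matching forward lookup, roughly:<p><pre><code>import socket

def is_real_googlebot(ip: str) -> bool:
    try:
        host = socket.gethostbyaddr(ip)[0]  # reverse DNS of the client IP
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]  # forward DNS must map back
    except socket.gaierror:
        return False
</code></pre>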
There are ways to work around this. I've just tested this: I've used the URL inspection tool of Google Search Console to fetch a URL from my website, which I've configured to redirect to a paywalled news article. Turns out the crawler follows that redirect and gives me the full source code of the redirected web site, without any paywall.<p>That's maybe a bit insane to automate at the scale of archive.today, but I figure they do something along the lines of this. It's a perfect imitation of Googlebot because it is literally Googlebot.
I'd file that under "doesn't know what they're doing" because the search console uses a totally different user-agent (Google-InspectionTool) and the site is blindly treating it the same as Googlebot :P<p>Presumably they are just matching on *Google* and calling it a day.
> which I've configured to redirect to a paywalled news article.<p>Which specific site with a paywall?
> I've seen people claiming that they have a bunch of paid accounts that they use to fetch the pages, which is, of course, ridiculous.<p>The curious part is that they allow web scraping arbitrary pages on demand. So a publisher could put in a lot of arbitrary requests to archive their own pages and see them all coming from a single account or a small subset of accounts.<p>I hope they haven't been stealing cookies from actual users through a botnet or something.
Exactly. If I was an admin of a popular news website I would try to archive some articles and look at the access logs in the backend. This cannot be too hard to figure out.
You don't even need active measures. If a publisher is serious about tracing traitors there are algorithms for that (which are used by streamers to trace pirates). It's called "Traitor Tracing" in the literature. The idea is to embed watermarks following a specific pattern that would point to a traitor or even a coalition of traitors acting in concert.<p>It would be challenging to do with text, but is certainly doable with images - and articles contain those.
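As a toy illustration of the idea (a simple per-subscriber watermark, not the collusion-resistant codes from the literature, and trivially removable by recompression):<p><pre><code>from PIL import Image
import hashlib

POSITIONS = [(x, 0) for x in range(32)]  # fixed pixel positions (toy choice)

def codeword(account_id: str) -> list:
    digest = hashlib.sha256(account_id.encode()).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(32)]

def embed(img: Image.Image, account_id: str) -> Image.Image:
    out = img.convert("RGB")
    px = out.load()
    for bit, (x, y) in zip(codeword(account_id), POSITIONS):
        r, g, b = px[x, y]
        px[x, y] = (r, g, (b & ~1) | bit)  # hide one bit in the blue-channel LSB
    return out

def trace(img: Image.Image, accounts: list):
    px = img.convert("RGB").load()
    bits = [px[x, y][2] & 1 for (x, y) in POSITIONS]
    return next((a for a in accounts if bits == codeword(a)), None)
</code></pre>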
> which is, of course, ridiculous.<p>Why? In the world of web scraping this is pretty common.
Because it works too reliably. Imagine what that would entail. Managing thousands of accounts. You would need to ensure you strip the account details from archived pages <i>perfectly</i>. Every time the website changes its code even slightly you are at risk of losing one of your accounts. It would constantly break and would be an absolute nightmare to maintain. I've personally never encountered such a failure on a paywalled news article. archive.today managed to give me a non-paywalled clean version every single time.<p>Maybe they use accounts for some special sites. But there is definitely some automated generic magic happening that manages to bypass paywalls of news outlets. Probably something Googlebot related, because those websites usually give Google their news pages without a paywall, probably for SEO reasons.
Do you know where the doxxed info ultimately originates from? It turns out that the archives leaked account names. Try Googling what happened to volth on Github.
Using two or more accounts could help you automatically strip account details.
Replace any identifiers like usernames and emails with another string automatically.
I could be wrong, but I think I've seen it fail on more obscure sites. But yeah it seems unlikely they're maintaining so many premium accounts. On the other hand they could simply be state-backed. Let's say there are 1000 likely paywalled sites, 20 accounts for each = 20k accounts, $10/month => $200k/month = $2.4m a year. If I were an intelligence agency I'd happily drop that plus costs to own half the archived content on the internet.<p>Surely it wouldn't be too hard to test. Just set up an unlisted dummy paywall site, archive it a few times and see what the requests looks like.
Interesting theory. It would also be a good way to subtly undermine the viability of news outlets, not to mention the insidious potential of altering snapshots at will. OTOH, I'd expect a state-sponsored effort to be more professional in terms of not threatening and smearing some blogger who questioned them.
It's because it's actively maintained, and bypassing the paywalls is its whole selling point, thus, they do have to be good at it.<p>They bypass the rendering issues by "altering" the webpages. It's not uncommon to archive a page, and see nothing because of the paywalls; but then later on, the same page is silently fixed. They have a Tumblr where you can ask them questions; at one point, it's been quite common for everyone to ask them to fix random specific pages, which they did promptly.<p>Honestly, you cannot archive a modern page, unless you alter it. Yet they're now being attacked under the pretence of "altering" webpages, but that's never been a secret, and it's technologically impossible to archive without altering.
I’m an outsider with experience building crawlers. You can get pretty far with residential proxies and browser fingerprint optimization. Most of the b-tier publishers use RBC and heuristics that can be “worked around” with moderate effort.
I imagine accounts are the only way that archive.today works on sites like 404media.co that seem to have server sided paywalls. Similarly, twitter has a completely server sided paywall.
It’s not reliable, in the sense that there are many paywalled sites that it’s unable to archive.
But it is reliable in the sense that if it works for a site, then it usually never fails.
no tool is 100% effective. Archive.today is the best one we've seen
There is an <i>enormous</i> amount of stuff that is <i>only</i> on archive.today, including stuff that is otherwise gone forever. A mix of stuff that somebody only ever did archive.today on and not archive.org, and stuff that <i>could</i> only be archived on archive.today because archive.org fails on it.<p>Anything on twitter post-login-wall for one. A million only-semi-paywalled news articles for others. But mainly an unfathomably long tail.<p>It was extremely distressing when the admin started(?) behaving badly for this reason. That others are starting to react this way to it is understandable. What a stupid tragedy.
Wikipedia's own page on this topic is much more succinct about the context and change in policy<p><a href="https://en.wikipedia.org/wiki/Wikipedia:Archive.today_guidance" rel="nofollow">https://en.wikipedia.org/wiki/Wikipedia:Archive.today_guidan...</a>
> Change the original source to something that doesn't need an archive (e.g., a source that was printed on paper), or for which a link to an archive is only a matter of convenience.<p>They're basically recommending changing verifiable references that can easily be cross-checked and verified, to "printed on paper" sources that could likely never be verified by any other Wikipedian, and can easily be used to provide a falsification and bias that could go unnoticed for extended periods of time.<p>Honestly, that's all you need to know about Wikipedia.<p>The "altered" allegation is also disingenuous. The reason archive.org never works, is precisely because it doesn't alter the pages enough. There's no evidence that archive.today has altered any actual main content they've archived; altering the hidden fields, usernames and paywalls, as well as random presentation elements to make the page look properly, doesn't really count as "altered" in my book, yet that's precisely what the allegation amounts to.
The accusation is not that they alter pages at all -- they obviously need to in order to make some pages readable/functional, bypass paywalls, or hide account names used to do so. The Wayback Machine does something similar with YouTube to make old videos playable.<p>The allegation here is that they altered page content not just to remove their own alias, but to <i>insert</i> the name of the blogger they were targeting. That moves it from a defensible technical change for accessibility to being part of their bizarre revenge campaign against someone who crossed them.
You should add this context to the talk page. You can do it anonymously without login. I wasn’t aware of either side of this allegation, and it’s helpful to understand this context.
this was referenced as the evidence for archive.today modifying content <a href="https://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment/Archive.is_RFC_5#Evidence_of_altering_snapshots" rel="nofollow">https://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment...</a>
Archive.is is now publishing really weird posts on their Tumblr blog, related to the whole thing<p><a href="https://archive-is.tumblr.com/post/806832066465497088/ladies-and-gentlemen-in-the-autumn-of-2025-i" rel="nofollow">https://archive-is.tumblr.com/post/806832066465497088/ladies...</a><p><a href="https://archive-is.tumblr.com/post/807584470961111040/it-seems-people-dont-read-between-the-lines-they" rel="nofollow">https://archive-is.tumblr.com/post/807584470961111040/it-see...</a>
The word salad about Ukraine, the arms trade, Nazis, and Hunter Biden leaves no doubt the operator is from Russia.
He’s probably being purposefully vague which makes for difficult reading.
Am I reading this right… they tampered with an archived page and then changed it back? How do we know? Is there another archive site that has before and after proof?
See <a href="https://en.wikipedia.org/wiki/Wikipedia%3ARequests_for_comment%2FArchive.is_RFC_5#Evidence_of_altering_snapshots" rel="nofollow">https://en.wikipedia.org/wiki/Wikipedia%3ARequests_for_comme...</a>
They've changed usernames they use to post under. That's the only "altered" allegation they've been accused of.<p>BTW, they also alter paywalls and other elements, because otherwise, many websites won't show the main content these days.<p>It kind of seems like "altered" is the new "hacker" today?
Specifically, they changed a "commenting as: [their alias]" UI element to "commenting as: [name of the blogger they were fighting with]".<p>Compare (the changed element is near the very bottom of the page; replace the "[dot]" since these URLs seem to trigger spam filters for some commenters):<p>archive [dot] is/gFD6Z<p>megalodon [dot] jp/2026-0219-1628-23/<a href="https://archive.is:443/gFD6Z" rel="nofollow">https://archive.is:443/gFD6Z</a>
I noticed I've started being redirected to a blank nginx server for archive.is... but only the .is domain, .ph and .today work just fine. I wonder if they ended up on an adblocker or two.
> If you want to pretend this never happened – delete your old article and post the new one you have promised. And I will not write “an OSINT investigation” on your Nazi grandfather<p>From hero to a Kremlin troll in five seconds.
Archive.today's domain registrar is Tucows for anyone wondering
It doesn't work properly anyway anymore...
So toward the end of last year, the FBI was after archive.today, presumably either for keeping track of things the current administration doesn't want tracked, or maybe for the paywall thing (on behalf of rich donors/IP owners). <a href="https://gizmodo.com/the-fbi-is-trying-to-unmask-the-registrar-behind-archive-today-2000682868" rel="nofollow">https://gizmodo.com/the-fbi-is-trying-to-unmask-the-registra...</a><p>That effort appears to have gone nowhere, so now suddenly archive.today commits reputational suicide? I don't suppose someone could look deeper into this please?
The archive.today operator claims on his blog that this was nothing major: <a href="https://lj.rossia.org/users/archive_today/" rel="nofollow">https://lj.rossia.org/users/archive_today/</a><p>> Regarding the FBI’s request, my understanding is that they were seeking some form of offline action from us — anything from a witness statement (“Yes, this page was saved at such-and-such a time, and no one has accessed or modified it since”) to operational work involving a specific group of users. These users are not necessarily associates of Epstein; among our users who are particularly wary of the FBI, there are also less frequently mentioned groups, such as environmental activists or right-to-repair advocates.<p>> Since no one was physically present in the United States at that time, however, the matter did not progress further.<p>> You already know who turned this request into a full-blown panic about “the FBI accusing the archive and preparing to confiscate everything.”<p>Not sure who he's talking about there.
> “I’m glad the Wikipedia community has come to a clear consensus, and I hope this inspires the Wikimedia Foundation to look into creating its own archival service,” he told us.<p>Hardly possible for Wikimedia to provide a service like archive.today given the legal trouble of the latter.<p>Strangely naive.
It would be nice if there was a non-dynamic snapshot archive as well as the page itself. That way, if the loaded JavaScript causes it to stop rendering, at least there’ll be a static fallback
"Non-paywalled" ad-free link to archive: <a href="https://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment/Archive.is_RFC_5" rel="nofollow">https://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment...</a>
> an analysis of existing links has shown that most of its uses can be replaced.<p>Oh? Do tell!
I would be surprised if archive.today had something that was not in the Wayback Machine
Archive.today has just about everything the archived site doesn't want archived. Archive.org doesn't, because it lets sites delete archives.
I know that sometimes the behavior of each archiver service is a bit different. For example, it's possible that both Archive.today and the Internet Archive say they have a copy of a page, but then when you open up the IA version, you might see that it renders completely differently or not at all. It might be because the webpage has like two scrollbars, or maybe there's a redirect that happens when a link to the page is loaded. I notice this seems to happen on documentation pages that are hosted by Salesforce. It can be a bit of a pain if you want to save a backup copy online of a release note or something like that for everyone to easily reference in the future.
> it's possible that both Archive.today and the Internet Archive say they have a copy of a page, but then when you open up the IA version, you might see that it renders completely differently or not at all<p>AT archives the page as seen, even including a screenshot.<p>IA archives the page as loaded, then <i>when you view</i> hamfistedly injects its header bar and <i>executes the source JS</i>. As you'd expect the result is often wrecked - or tampered.
Wayback machine removes archives upon request, so there’s definitely stuff they don’t make publicly available (they may still have it).
You don't even need to make requests if you are the owner of the URL. robots.txt changes are applied retrospectively, which means you can disallow crawls to /abc, request a re-crawl, and all snapshots from the past which match this new rule will be removed.
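For anyone who hasn't seen it, the change being described is just an ordinary robots.txt rule; the surprising part (per the comment above) is that the Wayback Machine applies it to snapshots captured before the rule existed. A minimal example of such a rule:<p><pre><code>User-agent: *
Disallow: /abc
</code></pre>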
Trying to search the Wayback machine almost always gives me their made-up 498 error, and when I do get a result the interface for scrolling through dates is janky at best.
Accounts to bypass paywalls? The audacity to do it?
>> an analysis of existing links has shown that most of its uses can be replaced.<p>>Oh? Do tell!<p>They do. In the very next paragraph in fact:<p><pre><code> The guidance says editors can remove Archive.today links when the original
source is still online and has identical content; replace the archive link so
it points to a different archive site, like the Internet Archive,
Ghostarchive, or Megalodon; or “change the original source to something that
doesn’t need an archive (e.g., a source that was printed on paper)</code></pre>
[flagged]
> archive.today<p>Hopeless. Caught tampering with the archive.<p>The whole situation is not great.
I just quoted the <i>very next paragraph</i> after the sentence you quoted and asked for clarification.<p>I did so. You're welcome.<p>As for the rest, take it up with Jimmy Wales, not me.
> the community should figure out how to efficiently remove links to archive.today<p>You're part of the community! Prove him right!
FYI, archive.today is NOT the Internet Archive/Wayback Machine.
I prefer archive.today because the Internet Archive’s Wayback Machine allows retrospective removals of archived pages. If a URL has already been crawled and archived, the site owner can later add that URL to robots.txt and request a re-crawl. Once the crawler detects the updated robots.txt, previously stored snapshots of that page can become inaccessible, even if they were captured before the rule was added.<p>Unfortunately this happens more often than one would expect.<p>I found this out when I preserved my very first homepage I made as a child on a free hosting service. I archived it on archive.org, and thought it would stay there forever. Then, in 2017 the free host changed the robots.txt, closed all services, and my treasured memory was forever gone from the internet. ;(
>In emails sent to Patokallio after the DDoS began, “Nora” from Archive.today threatened to create a public association between Patokallio’s name and AI porn and to create a gay dating app with Patokallio’s name.<p>Oh good. That's definitely a reasonable thing to do or think.<p>The raw sociopathy of some people. Getting doxxed isn't good, but this response is unhinged.
It's a reminder how fragile and tenuous are the connections between our browser/client outlays, our societal perceptions of online norms, and our laws.<p>We live at a moment where it's trivially easy to frame possession of an unsavory (or even illegal) number on another person's storage media, without that person even realizing (and possibly, with some WebRTC craftiness and social engineering, even get them to pass on the taboo payload to others).
I mean, the admin of archive.today might face jail time if deanonymised, kind of understandable he's nervous. Meanwhile for Patokallio it's just curiosity and clicks
Those were private negotiations, btw, not public statements.<p>It was in response to J.P.'s blog, which had already framed AT as a project grown out of a carding forum and pushed his speculations onto Ars Technica, whose parent company just destroyed 12ft and is on to a new victim. The story is full of untold conflicts of interest covered with a soap opera around the DDoS.
The FBI called out archive.today a couple of months ago; there's clearly a campaign against them by the USA (4th Reich), which stands principally against any information repository they don't control or have influence over (it's Russian-owned). This is simply donors of the Trump regime who own media companies requesting this, because it's the primary way around paywalls for most people who know about it.
[dead]
[dead]
[flagged]
[flagged]
Anecdotally I generally see archive.is/archive.today links floating around "stochastic terrorist" sites and other hate cults.
They seem totally unrelated to the Internet Archive. They probably only ever got on Wikipedia by leeching off the IA brand and confusing enough people into using them
At this point Archive.today provides a better service (all things considered) compared to Wikipedia, at least when it comes to current affairs.
Why not show both? Wikipedia could display archive links alongside original sources, clearly labeled so readers know which is which. This preserves access when originals disappear while keeping the primary source as the main reference.
The objection is to this specific archive service, not archiving in general.
Wikipedia shouldn't allow links to sites which intentionally falsify archived pages and use their visitors to perform DDOS attacks.
They generally do. Random example, citation 349 on the page of George Washington: ""A Brief History of GW"[link]. GW Libraries. Archived[link] from the original on September 14, 2019. Retrieved August 19, 2019."
Does anyone have a short summary as to who Archive.today targeted via DDoS and why? Isn't that something done by malicious actors? Or did others misuse Archive.today?
Never trust Leftypedia.
I will no longer donate to Wikipedia as long as this is policy.