33 comments

  • abhinai · 4 hours ago
    This is a very neat idea. I am not sure why the page needs to load 40 MB of data and make me wait 5 minutes before the first view. I'd probably also add some ranking criteria to surface good-quality articles that maximize the "I learnt something new today" factor. Overall, kudos to the developer for original thinking.
    • sampli · 1 hour ago
      Yeah. It should be able to load in the background once you start scrolling.
  • adifi000 · 3 minutes ago
    Nice loading indicator! People just don't know how to make those anymore. I think you mistitled your submission, though?
  • drivers99 · 2 hours ago
    I ran across a grammar mistake in one of the entries and clicked through to the actual Wikipedia entry to fix it. That was satisfying. Imagine being able to do that on social media.
  • pinkmuffinere · 5 hours ago
    Please fix the loading issue and I'll return! I think you don't need to pull all the data at initialization; you could lazily grab a couple from each category and just keep doing it as people scroll.
    • rebane2001 · 4 hours ago
      The loading issue is just a hug of death: the site's currently getting multiple visitors per second, and that requires more than a gigabit of bandwidth to handle.

      I sort of need to pull all the data at initialization because I need to map out how every post affects every other post; the links between posts are what take up the majority of the storage, not the text inside the posts. It's also kind of the only way to preserve privacy.
      • goodmythical · 4 hours ago
        I feel very strongly that you should be able to serve hundreds or thousands of requests at Gbps speeds.

        Why are you serving so much data personally instead of just reformatting theirs?

        Even if you're serving it locally... I mean, a regular 100 Mbit line should easily support tens or hundreds of text users...

        What am I missing?
        • rebane2001 · 4 hours ago
          > Why are you serving so much data personally instead of just reformatting theirs?

          Because then you only need to download 40 MB of data and do minimal processing. If you were to take the dumps off of Wikimedia, you would need to download 400 MB of data and do processing on it that would take minutes.

          It's also kind of rude to hotlink half a gig of data on someone else's site.

          > What am I missing?

          40 MB per second is 320 Mbps, so even 3 visitors per second maxes out a gigabit connection.
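The bandwidth figure in the comment above is just a unit conversion, and it can be checked directly. A minimal sketch (the 40 MB payload size comes from the thread; the function name is illustrative):

```javascript
// Convert a per-visitor payload into sustained bandwidth demand.
// 1 megabyte = 8 megabits.
function requiredMbps(payloadMB, visitorsPerSecond) {
  return payloadMB * 8 * visitorsPerSecond;
}

console.log(requiredMbps(40, 1)); // 320 Mbps for one visitor per second
console.log(requiredMbps(40, 3)); // 960 Mbps: three visitors per second nearly saturate a gigabit line
```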
          • goodmythical · 4 hours ago
            No, but... why are you passing 40 MB from your server to my device in a lump like that?

            All I'm getting from your server is a title, a sentence, and an image.

            Why not give me, say, the first 20 and start loading the next 20 when I reach the 10th?

            That way you're not getting hit with 40 MB for every single click, but only a couple of MB per click and a couple more per scroll from users who are actually using the service.

            Look at your logs. How many people only ever got the first 40 and clicked off because you're getting DDoSed? Every single time that's happened (which is more than a few times based on HN posts), you've not only lost a user but weakened the experience of someone who's chosen to wait, by increasing their load time and insisting that they wait for the entire 40 MB download.

            I am just having trouble understanding why you've decided to make me and your server sit through a 40 MB transfer for text and images...
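The prefetch scheme proposed above (serve 20 posts, start fetching the next 20 when the user reaches the 10th) is easy to express as a pure decision function. This is only a sketch of the suggestion, not code from the site; `pageSize` and `prefetchOffset` are illustrative names:

```javascript
const pageSize = 20;
const prefetchOffset = 10; // start fetching page N+1 when the user reaches item 10 of page N

// visibleIndex: 0-based index of the post currently on screen.
// loadedCount: how many posts are already downloaded.
function shouldPrefetch(visibleIndex, loadedCount) {
  const positionInPage = visibleIndex % pageSize;
  // Only prefetch when the user is scrolling through the last loaded page.
  const onLastLoadedPage = visibleIndex >= loadedCount - pageSize;
  return onLastLoadedPage && positionInPage + 1 >= prefetchOffset;
}
```

In a browser this predicate would typically be driven by a scroll handler or an IntersectionObserver, firing one small fetch per page instead of a single 40 MB transfer.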
            • rebane2001 · 4 hours ago
              > No, but... why are you passing 40 MB from your server to my device in a lump like that?

              Because you *need* all of the cross-article link data, which is the majority of the 40 MB, to run the algorithm. The algorithm does not run on the server, because I care about both user privacy and internet preservation.

              Once the 40 MB is downloaded, you can go offline and the algorithm will still work. If you save the index.html and the 40 MB file, you can run the entire thing locally.

              > actually using the service

              This is a fun website; it is not a "service".

              > you've not only lost a user but weakened the experience of someone who's chosen to wait by increasing their load time

              I make websites for fun. Losing a user doesn't particularly affect me; I don't plan on monetizing this, I just want people to have fun.

              Yes, it is annoying that people have to wait a bit for the page to load, but that is only because the project has hundreds of thousands more eyes on it than I expected within the first few hours. I expected this project to get a few hundred visits in that time, in which case the bandwidth wouldn't have been an issue whatsoever.

              > I am just having trouble understanding why you've decided to make me and your server sit through a 40 MB transfer for text and images...

              Running the algorithm locally, privacy, stability, preservation, the ability to look at and play with the code, the ability to go offline, ease of maintenance and hosting, etc.

              Besides, sites like Twitter use up a quarter of that for the JavaScript alone.
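The author only says the recommender runs client-side over cross-article link data; the actual algorithm is not described in the thread. Purely as an illustration of why the full link graph would be needed locally, here is one hypothetical shape such a recommender could take (`links`, `liked`, and the scoring rule are all assumptions, not the site's real format):

```javascript
// Hypothetical client-side recommender over a link graph.
// links: Map from article title to an array of titles it links to.
// liked: titles the user has engaged with.
// candidates: titles to rank for the feed.
function scoreCandidates(links, liked, candidates) {
  const likedSet = new Set(liked);
  return candidates
    .map((title) => {
      const outgoing = links.get(title) || [];
      // Score a candidate by how many of its links point at liked articles.
      const score = outgoing.filter((t) => likedSet.has(t)).length;
      return { title, score };
    })
    .sort((a, b) => b.score - a.score);
}
```

Note that any scheme like this needs the whole adjacency structure before it can score the first candidate, which is consistent with the author's claim that the links, not the article text, dominate the 40 MB.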
              • nickorlow · 3 hours ago
                I believe in privacy, but generally people are fine with recommendation algorithms running on a server if it's transparent enough / self-hostable. Mastodon, DuckDuckGo, HN, etc. all don't need to download a huge blob locally. (If you do want it to run locally, hosting the blob on a CDN, or packaging this as an app and letting someone else host it, would probably improve the experience a lot.)
                • rebane2001 · 2 hours ago
                  Mastodon and HN do not have a personalized weighted algorithm. On HN you see what everyone else sees, and on Mastodon the feed is chronological. DuckDuckGo offers some privacy, but still sends your search queries to Bing.

                  Also, all three of those examples are projects with years of dev effort and hosting infrastructure behind them. Xikipedia is a project I threw together in less than a day for fun; I don't want to put effort into server-side maintenance and upkeep for such a small project. I just want a static index.html I can throw in /var/www/ and forget.

                  And re: hosting, my bare-metal box is *fine*. It's just slow right now because it's getting a huge spike of attention. I don't want to pay for a CDN, and I doubt I could host a file getting multiple gigabits per second of traffic for free.
                  • vages · 11 minutes ago
                    I really like how you have done things. I didn't mind the waiting time.

                    Thank you for making my day a little brighter.
          • lazide · 4 hours ago
            Why not... load it on demand?
            • goodmythical · 4 hours ago
              That's my point. So confused. He's got a ton of users clicking off because of this.
              • fgfarben · 2 hours ago
                The point you're missing is that this website is actually a submarine ad for the domain, xikipedia.org, which the owner is probably trying to sell.
                • rebane2001 · 2 hours ago
                  That's a very silly claim considering I bought the domain the same day I released the project. I'm sure whoever would've been interested in buying the domain could've already swept it up for 10 bucks before me.
  • baxtr · 2 hours ago
    TIL:

    *The United States Virgin Islands are a group of islands in the Caribbean Sea. They are currently owned and under the authority of the United States Government. They used to be owned by Denmark (and called Danish West Indies). They were sold to the U.S. on January 17, 1917, because of fear that the Germans would capture them and use them as a submarine base in World War I.*

    https://simple.wikipedia.org/wiki/United_States_Virgin_Islands
  • glenstein · 3 hours ago
    Love the concept. Wikitok also exists [1], but the recommendation aspect that you're bringing to the table is a very intriguing original spin on it. I would be fascinated to see what a smart algorithm could discover on my behalf on Wikipedia given enough time.

    I think it would be nice if you could do a non-Simple-English version, but I'm nevertheless happy with what you've put together, and I've added a shortcut to my phone. Please don't let the negativity stop you from continuing to work on it.

    1. https://www.wikitok.io/
    • noonething · 2 hours ago
      First thing I see: https://en.wikipedia.org/wiki/Esophageal_cancer

      Thank you.
  • upstreamutopia · 32 minutes ago
    You know, I enjoyed this. It's nice to get some random, interesting stuff to browse on occasion.
  • hamburglar · 5 hours ago
    Took several minutes to load for me, and when my download got to 100%, the browser (Safari on iOS) refreshed the page and started at 0% again.
  • 0xE1337DAD · 2 hours ago
    I love the concept, but the long load at startup really kills it. Even clicking off the site and reloading makes me go through the download all over again.
  • eccentricdz · 2 hours ago
    Built something similar for research papers: https://www.producthunt.com/products/soch
  • orionfollett · 2 hours ago
    This is really cool. And doing it in only 500 lines of code is really impressive. I would have thought it was much more.
  • icameron · 2 hours ago
    The page crashed after downloading and extracting, on Safari on an iPhone that's a few years old, latest iOS. I was really interested in trying it, which is why I waited. Edit: tried again; it crashed at 66% loading (after reaching 100% loading).
  • joeyguerra · 4 hours ago
    I wonder if this would be a "better" way to build this thing: https://www.infoq.com/news/2026/01/duckdb-iceberg-browser-s3/

    DuckDB loaded in the browser via WebAssembly, with Parquet files in S3.
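The suggestion above would replace the single 40 MB blob with paged SQL queries that DuckDB-WASM can satisfy via HTTP range requests against a Parquet file. A hedged sketch of just the query side: the file URL, column names, and function name below are all hypothetical, not anything from the actual project.

```javascript
// Build a paged query over a (hypothetical) remote Parquet file.
// DuckDB's read_parquet can scan such a file over HTTP, fetching only
// the byte ranges a query touches rather than the whole file.
function pageQuery(parquetUrl, page, pageSize) {
  const offset = page * pageSize;
  return `SELECT title, summary, image_url FROM read_parquet('${parquetUrl}') LIMIT ${pageSize} OFFSET ${offset}`;
}
```

A client would hand strings like this to a DuckDB-WASM connection, one page at a time. The trade-off, per the author's replies upthread, is that the recommendation step still wants the whole link graph locally, so this would mainly help the article text and images.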
  • ryan_j_naughton · 3 hours ago
    How does it actually work? Can you add an "about" page that goes into the algorithm? Or can you add more info to the README on GitHub? I'd love to learn more.
    • rebane2001 · 2 hours ago
      I might add a proper explanation at some point, but for now you can view-source the page and read the code; there really isn't that much of it.
  • barcodehorse · 5 hours ago
    An issue I have with these apps that claim to be for doomscrolling is that you don't open apps like Instagram or Facebook to doomscroll; you open them to check messages or stories. The doomscrolling is an afterthought. These things assume you can realize you're doomscrolling and not only break out of it, but choose to hypnotize yourself in *their* app.
    • forgetbook · 4 hours ago
      This could be a product. I'd pay for an app that forwarded messages from other apps and gave me a Wikipedia feed to scroll in the elevator / other places where the phone is a social respite.
  • mathieudombrock · 6 hours ago
    This is unfortunately loading very, very slowly for me.
  • strich · 5 hours ago
    I was genuinely excited to try this, and it sounded in theory like a lot of fun! Unfortunately, yeah, too slow to load.
  • nxobject · 3 hours ago
    Reminds one of Sesame Street: let's put educational content in this new hypnotic medium!
  • with · 5 hours ago
    It's ironic that doomscrollable social media feeds are built for low attention spans, because this website is the opposite. I gave up after 20 seconds.
  • wraptile · 3 hours ago
    This would actually be really fun if built around social features like curators who could quote-repost the posts, popular/trending sorting, and a threaded comment system.
  • its_ubuntu · 4 hours ago
    So they took the worst aspect of Wikipedia (Wikipedia) and the worst aspect of "social" media (doomscrolling) and combined them? Brilliant concept. When can we expect the IPO?
  • sputknick · 4 hours ago
    If you load it in Chrome, it loads MUCH faster.
  • esperent · 6 hours ago
    > Xikipedia is loading... (3% of 40MB loaded)

    I gave up after about a minute.
  • Der_Einzige · 4 hours ago
    I am so lucky to be basically immune to short-form video garbage like TikTok, but I am not immune to Wikipedia's allure.

    I easily have over 100 tabs of Wikipedia open at any one time, reading about the most random stuff ever. I'm the guy who will unironically look up the food I'm eating on Wikipedia while I'm eating it.

    No need to try to make it "doomscrollable" when it's already got me by the balls.
  • singpolyma3 · 5 hours ago
    "Please only continue if you are an adult"? You realize Wikipedia has no age restrictions, right...
  • efilife · 1 hour ago
    All images overflow for me.
  • kurtis_reed · 3 hours ago
    It took forever to load.
  • kachapopopow · 5 hours ago
    Surprisingly... boring?
  • pedrozieg · 7 hours ago
    [dead]
  • RiceNBananas · 5 hours ago
    Cool story, bro.

    WP is already shit; why should anyone doomscroll it?
  • reader9274 · 5 hours ago
    Man, Wikipedia is full of trash.
  • closetkantian · 4 hours ago
    I like the concept, but I'm not going to be reading Simple English Wikipedia.