Yeah, I can't figure out whether this is something the author stands by or just a project to mess around with goroutines, and it would be unfair to criticise it if it isn't meant to be good.

> The server reads all the documents into memory at start-up. The corpus occupies about 600 MB, so this is reasonable, though it pushes the limits of what a cloud server with 1 GB of RAM can handle. With 2 GB, it's no problem.

1200 books per 1 GB server? Whole-internet search engines are older than 1 GB servers.

> queries that take 2,000 milliseconds from disk can be done in 800 milliseconds from memory. That's still too slow, though, which is why fast-concordance uses [lots of threads]

No query should ever take either of those amounts of time. And the "optimisation" is to *just use more threads*: threads that other consumers could have used to run their own searches, but now can't.

https://www.pingdom.com/blog/original-google-setup-at-stanford-university/
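For a sense of why those times look wrong: if you build an inverted index once at start-up, each query only touches the positions where the word actually occurs, not the whole 600 MB corpus, so lookups are typically sub-millisecond. A minimal sketch of that idea in Go (the Index type, function names, and toy corpus below are mine for illustration, not from the post):

    package main

    import (
        "fmt"
        "strings"
    )

    // Index maps each word to the token positions where it occurs.
    // Built once at start-up; queries then touch only the matching positions.
    type Index struct {
        words  []string         // corpus tokens, kept so we can show context
        byWord map[string][]int // word -> positions of that word
    }

    func BuildIndex(corpus string) *Index {
        idx := &Index{byWord: make(map[string][]int)}
        for i, w := range strings.Fields(strings.ToLower(corpus)) {
            idx.words = append(idx.words, w)
            idx.byWord[w] = append(idx.byWord[w], i)
        }
        return idx
    }

    // Concordance returns each occurrence of word with `width` tokens
    // of context on either side. Cost is proportional to the number of
    // hits, not the corpus size.
    func (idx *Index) Concordance(word string, width int) []string {
        var out []string
        for _, pos := range idx.byWord[strings.ToLower(word)] {
            lo, hi := pos-width, pos+width+1
            if lo < 0 {
                lo = 0
            }
            if hi > len(idx.words) {
                hi = len(idx.words)
            }
            out = append(out, strings.Join(idx.words[lo:hi], " "))
        }
        return out
    }

    func main() {
        // Toy corpus standing in for the 600 MB of books.
        idx := BuildIndex("it was the best of times it was the worst of times")
        for _, line := range idx.Concordance("times", 2) {
            fmt.Println(line)
        }
    }

Threads can still help, but they should be parallelising index lookups across queries, not brute-force scans within one.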