12 comments

  • forgotpwd16 11 hours ago
    Diff of extracted output:

        74910,74912c187768,187779
        < [Example 1: If you want to use the code conversion facetcodecvt_utf8to output tocouta UTF-8 multibyte sequence
        < corresponding to a wide string, but you don't want to alter the locale forcout, you can write something like:\237 D.27.21954 \251ISO/IECN4950wstring_convert<std::codecvt_utf8<wchar_t>> myconv;
        < std::string mbstring = myconv.to_bytes\050L"Hello\134n"\051;
        ---
        >
        > [Example 1: If you want to use the code conversion facet codecvt_utf8 to output to cout a UTF-8 multibyte sequence
        > corresponding to a wide string, but you don’t want to alter the locale for cout, you can write something like:
        >
        > § D.27.2
        > 1954
        >
        > © ISO/IEC
        > N4950
        >
        > wstring_convert<std::codecvt_utf8<wchar_t>> myconv;
        > std::string mbstring = myconv.to_bytes(L"Hello\n");

    It is indeed faster, but the output is messier. And it doesn't handle Unicode, in contrast to mutool, which does. (That probably also explains the big speed boost.)
    • TZubiri 10 hours ago
      In my experience with parsing PDFs, speed has never been an issue; it has always been a matter of quality.
      • DetroitThrow 10 hours ago
        I tried a small PDF and got a memory error. It's definitely much faster than MuPDF on that file.
        • littlestymaar 1 hour ago
          “The fastest PDF extractor is the one that crashes at the beginning of the file” or something.
    • lulzx 11 hours ago
      fixed.
      • forgotpwd16 10 hours ago
        Yeah, sorry for the confusion. When I said Unicode, I meant foreign text rather than (just) the unescaped symbols, e.g. Greek. For one random Greek textbook[0], zpdf's output is (extract | head -15):

            01F9020101FC020401F9020301FB02070205020800030209020701FF01F90203020901F9012D020A0201020101FF01FB01FE0208
            0200012E0219021802160218013202120222
            0209021D0212021D012E013202200222000301FA021A0220021C022002160213012E0222000F000301F90206012C
            020301FF02000205020101FC020901F90003020001F9020701F9020E020802000205020A
            01FC028C0213021B022002230221021800030200012E021902180216021201320221021A012E00030209021D0212021D012E013202200222000301FA021A0220021C022002160213012E0222000F000301F90206012C
            0200020D02030208020901F90203020901FF0203020502080003012B020001F9012B020001F901FA0205020A01FD01FE0208
            020201300132012E012F021A012F0210021B013202200221012E0222
            0209021D0212021D012E013202200222000301FA021A0220021C022002160213012E0222000F000301F90206012C

        It's like this for the entire book. Mutool extracts the text just fine.

        [0]: https://repository.kallipos.gr/handle/11419/15087
        • lulzx 3 hours ago
          works now!

          ΑΛΕΞΑΝΔΡΟΣ ΤΡΙΑΝΤΑΦΥΛΛΙΔΗΣ Καθηγητής Τμήματος Βιολογίας, ΑΠΘ

              ΝΙΚΟΛΕΤΑ ΚΑΡΑΪΣΚΟΥ Επίκουρη Καθηγήτρια Τμήματος Βιολογίας, ΑΠΘ
              ΚΩΝΣΤΑΝΤΙΝΟΣ ΓΚΑΓΚΑΒΟΥΖΗΣ Μεταδιδάκτορας Τμήματος Βιολογίας, ΑΠΘ
              Γονιδιώματα Δομή, Λειτουργία και Εφαρμογές
          • forgotpwd16 2 hours ago
            Nice! Speed wasn't even compromised. Still 5x when benching. Also saw now there's a page with the tool compiled to wasm. Cool.
            • lulzx 1 hour ago
              thanks! :)
        • lulzx 9 hours ago
          sorry, I haven't yet figured out non-Latin text with ToUnicode references.
      • TZubiri 10 hours ago
        Lol, but there are 100 competitors in the PDF text extraction space, and some are multi-million-dollar industries: AWS Textract, ABBYY FineReader, PDFBox. I think you may be underestimating the challenge here.
  • lulzx 14 hours ago
    I built a PDF text extraction library in Zig that's significantly faster than MuPDF for text extraction workloads.

    ~41K pages/sec peak throughput.

    Key choices: memory-mapped I/O, SIMD string search, parallel page extraction, streaming output. Handles CID fonts, incremental updates, all common compression filters.

    ~5,000 lines, no dependencies, compiles in <2s.

    Why it's fast:

        - Memory-mapped file I/O (no read syscalls)
        - Zero-copy parsing where possible
        - SIMD-accelerated string search for finding PDF structures
        - Parallel extraction across pages using Zig's thread pool
        - Streaming output (no intermediate allocations for extracted text)

    What it handles:

        - XRef tables and streams (PDF 1.5+)
        - Incremental PDF updates (/Prev chain)
        - FlateDecode, ASCII85, LZW, RunLength decompression
        - Font encodings: WinAnsi, MacRoman, ToUnicode CMap
        - CID fonts (Type0, Identity-H/V, UTF-16BE with surrogate pairs)
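
    For a sense of what "SIMD-accelerated string search for finding PDF structures" can look like, here is a minimal C sketch of a first-byte vector scan. It is illustrative only: the project itself is Zig, and the function name and marker choice here are made up.

        /* Hedged illustration: scan a buffer 16 bytes at a time for the first
         * byte of a marker such as "startxref", then verify full matches with
         * memcmp. SSE2 only, GCC/Clang builtins; not taken from zpdf. */
        #include <emmintrin.h>   /* SSE2 intrinsics */
        #include <stddef.h>
        #include <string.h>

        static const unsigned char *find_marker(const unsigned char *buf, size_t len,
                                                const char *marker)
        {
            size_t mlen = strlen(marker);
            if (mlen == 0 || len < mlen)
                return NULL;

            __m128i first = _mm_set1_epi8(marker[0]);        /* broadcast first byte */
            size_t i = 0;

            /* Vector loop: test 16 candidate positions per iteration. */
            while (i + 16 + mlen <= len) {
                __m128i chunk = _mm_loadu_si128((const __m128i *)(buf + i));
                int mask = _mm_movemask_epi8(_mm_cmpeq_epi8(chunk, first));
                while (mask) {
                    int bit = __builtin_ctz(mask);            /* offset of a candidate */
                    if (memcmp(buf + i + bit, marker, mlen) == 0)
                        return buf + i + bit;
                    mask &= mask - 1;                         /* clear it, try the next */
                }
                i += 16;
            }

            /* Scalar tail for the last few positions. */
            for (; i + mlen <= len; i++)
                if (memcmp(buf + i, marker, mlen) == 0)
                    return buf + i;
            return NULL;
        }

    Pointed at a memory-mapped file, a scan like this touches the mapped bytes directly, which is where the "no read syscalls, zero-copy" bullets above come from; the real parser presumably uses Zig's @Vector types rather than intrinsics.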
    • DannyBee 7 hours ago
      FWIW, MuPDF is simply not fast. I've done lots of PDF indexing apps, and MuPDF is by far the slowest and least able to open valid PDFs when it comes to text extraction. It also takes *tons* of memory.

      A better speed comparison would be either multi-process pdfium (since pdfium was forked from Foxit before multi-thread support, you can't thread it), multi-threaded Foxit, or something like Syncfusion (which is quite fast and supports multiple threads). Or even single-thread pdfium vs single-thread your-code.

      These were always the fastest/best options. I can (and do) achieve 41k pages/sec or better on these options.

      The other thing you don't appear to mention is whether you handle putting the words in reading order (i.e. how they appear on the page), or only stream order (which varies in its relation to appearance order).

      If it's only stream order, sure, that's really fast to do. But it's also not anywhere near as helpful as reading order, which is what other text-extraction engines do.

      Looking at the code, it looks like the code to do reading order exists, but is not what is being benchmarked or used by default?

      If so, this is really comparing apples and oranges.
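
      To make the stream-order vs. reading-order distinction concrete, here is a deliberately naive C sketch (not taken from any of the engines mentioned) that re-sorts extracted spans geometrically. Real reading-order code additionally has to segment columns, merge lines with a tolerance, and so on.

          /* Illustration only: stream order is whatever order spans appear in the
           * page's content stream; "reading order" re-sorts them top-to-bottom,
           * then left-to-right. The Span struct is invented for this sketch. */
          #include <stdlib.h>

          typedef struct {
              double x, y;       /* span origin in PDF user space (y grows upward) */
              const char *text;  /* decoded text of the span */
          } Span;

          static int cmp_reading_order(const void *a, const void *b)
          {
              const Span *s = a, *t = b;
              if (s->y != t->y) return (s->y > t->y) ? -1 : 1;  /* higher on page first */
              if (s->x != t->x) return (s->x < t->x) ? -1 : 1;  /* then left to right */
              return 0;
          }

          static void sort_into_reading_order(Span *spans, size_t n)
          {
              qsort(spans, n, sizeof *spans, cmp_reading_order);
          }

      Skipping this pass (and the column and line analysis around it) is part of why pure stream-order extraction benchmarks so much faster.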
    • tveita 12 hours ago
      What kind of performance are you seeing with/without SIMD enabled?

      From https://github.com/Lulzx/zpdf/blob/main/src/main.zig it looks like the help text cites an unimplemented "-j" option to enable multiple threads.

      There is a "--parallel" option, but that is only implemented for the "bench" command.
      • lulzx 12 hours ago
        I have now made it parallel by default and added an option to enable multiple threads.

        I haven't tested without SIMD.
    • cheshire_cat 12 hours ago
      You've released quite a few projects lately, very impressive.

      Are you using LLMs for parts of the coding?

      What's your workflow when approaching a new project like this?
      • lulzx 12 hours ago
        Claude Code.
      • littlestymaar 12 hours ago
        > Are you using LLMs for parts of the coding?

        I can't talk about the code, but the readme and commit messages are most likely LLM-generated.

        And when you take into account that the first commit happened just three hours ago, it feels like the entire project has been vibe coded.
        • Neywiny 11 hours ago
          Hard disagree. The initial commit was 6k LOC. The author could've spent years before committing. Ill-advised, but not impossible.
          • littlestymaar 11 hours ago
            Why would you make Claude write your commit message for a commit you've spent years working on though?
            • Neywiny 11 hours ago
              1. Be not good at or a fan of git when coding

              2. Be not good at or a fan of git when committing

              Not sure what the disconnect is.

              Now *if* it were vibecoded, I wouldn't be surprised. But benefit of the doubt.
              • Jach 9 hours ago
                We're well beyond benefit of the doubt these days. If it looks like a duck... For me there wasn't any doubt: the author's first top comment here was evidence enough, and then seeing the readme + random code + random commit message, it's all obvious LLM-speak to me.

                I don't particularly care, though, and I'm more positive about LLMs than negative even if I don't (yet?) use them very much. I think it's hilarious that a few people asked for Python bindings and then bam, done, and one person is like "..wha?" Yes, LLMs can do that sort of grunt work now! How cool, if kind of pointless. Couldn't the cycles have just been spent on trying to make MuPDF better? Though I see it's in C and AGPL, so I suppose either is motivation enough to do a rewrite instead. (This is MIT-licensed, though it's still unclear to me how 100% or even large-% vibe-coded code deserves any copyright protection; I think all such should generally be under the Unlicense/public domain.)

                If the intent of "benefit of the doubt" is to reduce people having a freak out over anyone who dares use these tools, I get that.
                • lulzx 8 hours ago
                  I have updated the licence to WTFPL.

                  I'll try my best to make it a really good one!
    • littlestymaar 1 hour ago
      > I built

      You didn't. Claude did. Like it did write this comment.

      And you didn't even bother testing it before submitting, which is insulting to everyone.
    • jeffbee 12 hours ago
      What's fast about mmap?
      • kennethallen 5 hours ago
        Two big advantages:

        You avoid an unnecessary copy. A normal read system call gets the data from the disk hardware into the kernel page cache and then copies it into the buffer you provide in your process memory. With mmap, the page cache is mapped directly into your process memory, no copy.

        All running processes share the mapped copy of the file.

        There are a lot of downsides to mmap: you lose explicit error handling and fine-grained control of when exactly I/O happens. Consult the classic article on why sophisticated systems like DBMSs do not use mmap: https://db.cs.cmu.edu/mmap-cidr2022/
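
        A minimal C sketch of that contrast, using plain POSIX calls with error handling mostly trimmed (illustrative, not zpdf's code):

            #include <fcntl.h>
            #include <stdlib.h>
            #include <sys/mman.h>
            #include <sys/stat.h>
            #include <unistd.h>

            /* read() path: allocate a buffer, have the kernel copy page-cache data into it. */
            static char *load_with_read(const char *path, size_t *out_len)
            {
                int fd = open(path, O_RDONLY);
                if (fd < 0) return NULL;
                struct stat st;
                fstat(fd, &st);
                char *buf = malloc(st.st_size);            /* our private copy */
                ssize_t n = read(fd, buf, st.st_size);     /* kernel-to-user copy happens here */
                close(fd);
                *out_len = n > 0 ? (size_t)n : 0;
                return buf;                                /* caller free()s */
            }

            /* mmap() path: map the page cache into our address space, no copy.
             * Pages are faulted in on first access and shared with other
             * processes that map the same file. */
            static const char *load_with_mmap(const char *path, size_t *out_len)
            {
                int fd = open(path, O_RDONLY);
                if (fd < 0) return NULL;
                struct stat st;
                fstat(fd, &st);
                void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
                close(fd);                                 /* the mapping stays valid */
                if (p == MAP_FAILED) return NULL;
                *out_len = st.st_size;
                return p;                                  /* caller munmap()s */
            }

        The trade mentioned above shows up in the failure mode: the read() path reports I/O errors through a return value, while a failed page fault on the mapped path arrives asynchronously as a signal (SIGBUS on most systems).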
        • commandersaki 1 hour ago
          *you lose explicit error handling*

          I've never had to use mmap, but this has always been the issue in my head. If you're treating I/O as memory pages, what happens when you read a page and it needs to "fault" by reading the backing storage, but the storage fails to deliver? What can be said at that point, or does the program crash?
        • saidinesh5 4 hours ago
          This is a very interesting link. I didn't expect mmap to be less performant than read() calls.

          I now wonder which use cases mmap would suit better, if any...

          > All running processes share the mapped copy of the file.

          So something like building linkers that deal with read-only shared libraries, "plugins", etc.?
      • rishabhaiover 10 hours ago
        It allows the program to reference memory without having to manage it in the heap space. It would make the program faster in a memory-managed language; otherwise it would reduce the memory footprint consumed by the program.
        • jeffbee 10 hours ago
          You mean it converts an expression like `buf[i]` into a baroque sequence of CPU exception paths, potentially involving a trap back into the kernel.
          • rishabhaiover 9 hours ago
            I don't fully understand the under-the-hood mechanics of mmap, but I can sense that you're trying to convey that mmap shouldn't be used as a blanket optimization technique, as there are tradeoffs in terms of page fault overheads (being at the mercy of OS page cache mechanics).
            • StilesCrisis 6 hours ago
              Tradeoffs such as "if an I/O error occurs, the program immediately segfaults." Also, I doubt you're I/O bound to the point where mmap is noticeably better than read, but I guess it's fine for an experiment.
            • jibal 8 hours ago
              I think he's conveying that he doesn't know what he's talking about. buf[i] generates the same code regardless of whether mmap is being used. The first access to a page will cause a trap that loads the page into memory, but this is also true of the buffer that a file is read into.
    • jonstewart 11 hours ago
      What's the fidelity like compared to Tika?
      • lulzx 11 hours ago
        The accuracy difference is marginal (1-2%) but the speed difference is massive.
  • xvilka 3 hours ago
    Test it on major PDF corpora[1]

    [1] https://github.com/pdf-association/pdf-corpora
  • manmal 2 hours ago
    Is there a possibility to hook in OCR for text blocks flattened into an image, maybe with some callback? That's my biggest gripe when dealing with PDFs.
  • mpeg 12 hours ago
    Very nice. It'd be good to see a feature comparison, as when I use MuPDF it's not really just about speed, but about the level of support for all kinds of obscure PDF features, and a good level of accuracy in the built-in algorithms for things like handling two-column pages, identifying paragraphs, etc.

    The licensing is a huge blocker for using MuPDF in non-OSS tools, so it's very nice to see this is MIT.

    Python bindings would be good too.
    • lulzx 12 hours ago
      added a comparison, will improve further: https://github.com/Lulzx/zpdf?tab=readme-ov-file#comparison-with-mupdf

      also, added python bindings.
      • mpeg 11 hours ago
        thanks, claude, I guess haha

        as others have commented, I think while this is a nice portfolio piece, I would worry about its longevity as a vibe coded project
        • chanbam 10 hours ago
          If he made something legitimately useful, who cares how?
          • littlestymaar 2 hours ago
            It seems that he didn't even test it before submitting, though…

            The author has created 30 new projects on GitHub, in half a dozen different programming languages, over the past month alone, and he also happens to have an LLM-generated blog. I think it's fair to say it's not “legitimately useful” except as a way for the author to fill his resume as he's looking for a job.

            This kind of behavior is toxic.
            • mpeg 1 hour ago
              Exactly this. I like to give people the benefit of the doubt, but pushing huge chunks of code this quickly shows the whole thing is vibe coded.

              I actually don't mind LLM-generated code when it's been manually reviewed, but this and a quick look through other submissions make me realise the author is simply trying to pad their resume with OSS projects. Respect the hustle, but it shows a lack of respect for others' time to then submit it to Show HN.
              • lulzx 13 minutes ago
                Fair point. I won't submit here again until I've put in the work to make something that respects people's time to evaluate it. Lesson learned. :)
  • amkharg26 6 hours ago
    Impressive performance gains! 5x faster than MuPDF is significant, especially for applications processing large volumes of PDFs. Zig's memory safety without garbage collection overhead makes it ideal for this kind of performance-critical work.

    I'm curious about the trade-offs mentioned in the comments regarding Unicode handling. For document analysis pipelines (like extracting text from technical documentation or research papers), robust Unicode support is often critical.

    Would be interesting to see benchmarks on different PDF types - academic papers with equations, scanned documents with OCR layers, and complex layouts with tables. Performance can vary wildly depending on the document structure.
    • polyaniline 5 hours ago
      What memory safety?
      • Retr0id 2 hours ago
        (the comment was written by an LLM bot)
  • fainpul 4 hours ago
    These vibe coded tests are terrible:

    https://github.com/Lulzx/zpdf/blob/main/python/tests/test_zpdf.py
    • lulzx 1 hour ago
      this is more like a quick test for the python bindings; the zig files have tests within them for a broad range of things.
  • agentifysh 12 hours ago
    excellent stuff, what makes zig so fast?
    • AndyKelley 12 hours ago
      It makes your development workflow smooth enough that you have the time and energy to do stuff like all the bullet points listed in https://news.ycombinator.com/item?id=46437289
      • forgotpwd16 11 hours ago
        > you have the time and energy to do stuff like all the bullet points listed

        Don't disagree, but in this specific case, per the author, the project was made via Claude Code. Although it could as well be that Zig is better as an LLM target. Noticed many new vibe projects decide to use Zig as a target.
    • observationist 12 hours ago
      Not being slow - they compile straight to bytecode, they aren't interpreted, and have aggressive, opinionated optimizations baked in by default, so it's even faster than compiled c (under default conditions.)

      Contrasted with python, which is interpreted, has a clunky runtime, minimal optimizations, and all sorts of choices that result in slow, redundant, and also slow, performance.

      The price for performance is safety checks, redundancy, how badly wrong things can go, and so on.

      A good compromise is luajit - you get some of the same aggressive optimizations, but in an interpreted language, with better-than-c performance but interpreted language convenience, access to low level things that can explode just as spectacularly as with zig or c, but also a beautiful language.
      • Zambyte 11 hours ago
        Zig is safer than C under default conditions, not faster. By default it does a lot of illegal-behavior safety checking, such as array and slice bounds checking, numeric overflow checking, and invalid union access checking. These features are disabled by certain (non-default) build modes, or explicitly disabled at a per-scope level.

        It may be easier to write code that runs faster in Zig than in C under similar build optimization levels, because writing high-performance C code looks a lot like writing idiomatic Zig code. The Zig standard library offers a lot of structures like hash maps, SIMD primitives, and allocators with different performance characteristics to better fit a given use-case. C application code often skips these things simply because they are a lot more friction to do in C than in Zig.
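
        Roughly, the checks that Zig toggles per build mode correspond to writing the bounds test by hand in C. A hedged sketch of the two flavours (conceptual only, not Zig's actual generated code):

            #include <stddef.h>
            #include <stdlib.h>

            /* What a safe-mode (Debug/ReleaseSafe) access behaves like:
             * out-of-range indices are caught before the load. */
            static int get_checked(const int *items, size_t len, size_t i)
            {
                if (i >= len)
                    abort();        /* Zig panics here in its safe build modes */
                return items[i];
            }

            /* What a ReleaseFast-style access behaves like: no check, and an
             * out-of-range i is undefined behaviour, exactly as in plain C. */
            static int get_unchecked(const int *items, size_t i)
            {
                return items[i];
            }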
      • jibal 8 hours ago
        > they compile straight to bytecode

        machine code, not https://en.wikipedia.org/wiki/Bytecode

        > The price for performance is safety checks

        In Zig, non-ReleaseFast build modes have significant safety checks.

        > luajit ... with better-than-c performance

        No.
      • agentifysh 11 hours ago
        will add this to the list; now learning new languages is less of a barrier with LLMs
  • odie5533 12 hours ago
    Now we just need Python bindings so I can use it in my trash language of choice.
    • lulzx 12 hours ago
      added python bindings!
      • hiq 11 hours ago
        Were you working on it already, or did it take you less than 17 minutes to commit https://github.com/Lulzx/zpdf/commit/9f5a7b70eb4b53672c0e4d80b7438e7504066e43 ?
  • pm2222 4 hours ago
    What's a format that's perhaps free and easy to parse and render? Build one, please.
  • littlestymaar 12 hours ago
    - First commit: 3 hours ago.

    - Commit message: LLM-generated.

    - README: LLM-generated.

    I'm not convinced that projects vibe coded over an evening deserve the HN front page…

    Edit: and of course the author's blog is also full of AI slop…

    2026 hasn't even started and I already hate it.
    • ncgl 10 hours ago
      Using AI isn't lazier than your regurgitated dismissal, to be fair.
    • dmytrish 11 hours ago
      ...and it does not work. I tried it on ~10 random PDFs, including very simple ones (e.g. a hello world from typst); it segfaults on every single one.
      • forgotpwd16 11 hours ago
        Tried a few and it works. Maybe you have an older or newer Zig version than whatever the project targets. (Mine is 0.15.2.)
        • dmytrish 10 hours ago
              ~/c/t/s/zpdf (main)> zig version
              0.15.2

          Sky is blue, water is wet, slop does not work.
    • kingkongjaffa 11 hours ago
      Wait, but why?

      If it's really better than what we had before, what does it matter how it was made? It's literally hacked together with the tools of the day (LLMs); isn't that the very hacker ethos? Patching stuff together that works in a new and useful way.

      5x speed improvements on PDF text extraction might be great for some applications I'm not aware of; I wouldn't just dismiss it out of hand because the author used $robot to write the code.

      Presumably the thought to make the thing in the first place, and to decide what features to add and not add, was more important than how the code is generated?
      • utopiah 2 hours ago
        > If it's really better than what we had before

        That's a very big if. The whole point is that what we had before was made slowly. This was made quickly. Being made slowly isn't better in itself, but it typically means hours and hours of testing. Going through painful problems that highlight idiosyncrasies of the problem space. Things that are really weird and specific to whatever the tool is trying to address.

        In such cases we can expect that, with very little time, very few things were tested and tested properly (a comment even mentioned how the tests were also generated). "We", the audience of potentially interested users, then have to do that work (as plenty did commenting on that post).

        IMHO what you bring forward is precisely that:

        - Can the new "solution" actually pass ALL the tests the previous one did? More?

        This should be brought to the top so the actual compromises can be understood; "we" can then decide if it's "better" for our context. In some cases faster with lossy output is actually better, in others absolutely not. The difference between the new and the old solutions isn't binary, and having no visibility on that is what makes such a process nothing more than yet another showcase that LLMs can indeed produce "something" that is absolutely boring while consuming a TON of resources, including our own attention.

        TL;DR: there should be a test "harness" made by 3rd parties (or taken from the well-known software it is closest to) that an LLM-generated piece of code should pass before being actually compared.
        • utopiah 2 hours ago
          Related: https://news.ycombinator.com/item?id=46437688
  • nullorempty 5 hours ago
    Tomorrow's headlines:

    fpdf
    jpdf
    cpdf
    cpppdf
    bfpdf
    ppdf
    ...
    opdf