22 comments

  • refibrillator5 hours ago
    One of the cooler and lesser-known features of JPEG XL is a mode to losslessly transcode from JPEG while achieving ~20% space reduction. It's reversible too, because the original entropy-coded bitstream is untouched.

    Notably, GCP is rolling this out to their DICOM store API, so you get the space savings of JXL but can transcode on the fly for applications that need to be served JPEG.

    Only know this because we have tens of PBs in their DICOM store and stand to save a substantial amount of $ on an absurdly large annual bill.

    Native browser support is on our wishlist, and our contacts indicate the Chrome team will get there eventually.
    • geokon3 hours ago
      If it's reversible, why not just store as JPEG XL and then convert back when it's served? Does it take a lot of processing time?
      • OneDeuxTriSeiGo3 hours ago
        You can do that, and that's one of the big appeals. You can serve both JXL and JPEG from the same source, and you can actually serve downscaled versions of the JXL image from the original bytestream.

        Also, OP did say "transcode on the fly" to serve JPEG, not actually storing as JPEG.
      • stingraycharles3 hours ago
        Isn’t that what the comment you’re replying to is suggesting?
  • zorgmonkey11 hours ago
    It looks very likely Chromium will be using the jxl-rs crate for this feature [0]. My personal suspicion is that they've just been waiting for it to be good enough to integrate, and they didn't want to promise anything until it was ready (hence the long silence).

    [0] https://issues.chromium.org/issues/40168998#comment507
    • goku1210 hours ago
      That was Mozilla's stance. Google was thoroughly hostile towards it. They closed the original issue citing a lack of interest among users, despite the users themselves complaining loudly against it. The only thing I'm not sure about is why they decided to reopen it. They may have decided that they didn't need this much bad PR. Or someone inside may have been annoyed by it just as much as we are.

      PS: I'm a bit too sleepy to search for the original discussion. Apologies for not linking it here.
      • drysart5 hours ago
        > The only thing I'm not sure about is why they decided to reopen it.

        It's almost certainly due to the PDF Association adding JPEG XL as a supported image format to the ISO standard for PDFs, considering that Google's 180 on JPEG XL support came just a few days after the PDF Association's announcement.
        • thayne2 hours ago
          That would make sense, since they would then need support for JXL for the embedded PDF viewer anyway. Unless they want it to choke on valid PDFs that include JXL images.
      • ksec7 hours ago
        It wasn't just the blatant lie about lack of interest; they also went out of their way to benchmark it and somehow present it as inferior to AVIF.
        • aidenn04 hours ago
          IIRC they benchmarked it as "not much better" than AVIF, not inferior.
    • bmicraft9 hours ago
      That library had a hiatus of over 1.5 years with zero commits until recently, IIRC.

      That this is working out is a combination of wishful thinking and getting lucky.
      • inejge8 hours ago
        "Code frequency" for jxl-rs shows no activity from Aug 2021 to Aug 2024, then steady work with a couple of spurts. That's both a longer hiatus and a longer period of subsequent activity (a year+ ago isn't "recently" in my book). What data have you based your observation on?
        • bmicraft7 hours ago
          my fallible memory of roughly the same sources
  • MutableLambda12 hours ago
    Have you seen JPEG XL source code? I like the format, but the reference implementation in C++ looked pretty bad at least 2 years ago. I hope they rewrote it, because it surely looked like a security issue waiting to happen.
    • jsheard12 hours ago
      That's why both Mozilla and Google have predicated their JXL support on a memory-safe implementation. There's a Rust one in the works.

      I think Google is aiming to replace all of Chromium's decoders with memory-safe ones anyway, even for relatively simple formats.
      • philistine10 hours ago
        If that's their plan, I predict another situation exactly like this one, where Google decides that removing support is the best move forward. Careful, BMP, Chrome is out to get you!
        • nine_k7 hours ago
          BMP decoding may seem easy and fun (I wrote a toy decoder back in the day), but the vulnerabilities are real: https://nvd.nist.gov/vuln/detail/CVE-2025-32468

          It's not the format, it's the unfortunate C/C++ baggage.
    • chimeracoder12 hours ago
      > Have you seen JPEG XL source code? I like the format, but the reference implementation in C++ looked pretty bad at least 2 years ago. I hope they rewrote it, because it surely looked like a security issue waiting to happen.

      At this point, in 2025, any substantial (non-degenerative) image processing written in C++ is a security issue waiting to happen. That's not specific to JPEG XL.
      • SoKamil12 hours ago
        > any substantial (non-degenerative)

        Why does this quality pose security issues?
      • spookie11 hours ago
        Well, the first public implementation dates to 2020. And the C++ choice is obvious: simpler integration with the majority of existing image-processing libs, tools, and utilities, not to mention GUI toolkits.

        Nonetheless, we should really bear in mind how entrenched C++ is. If you normalize CVEs by language popularity, Java looks downright dangerous!
      • izacus12 hours ago
        And yet the whole of HN is VERY VERY angry because Google won't ship that pile of C++ in the most popular software (and app framework) in the world.
        • usrnm12 hours ago
          The most popular software in question is also a giant pile of C++, btw.
          • izacus10 hours ago
            What are you saying here?
            • Lammy5 hours ago
              https://chromium.googlesource.com/chromium/src/+/refs/heads/main
        • mort966 hours ago
          Mozilla's position for some time now has been, "we aren't opposed to shipping JXL support, but we'd want to ship a decent implementation in a memory-safe language, not the reference C++ implementation". That position hasn't been met with very much criticism.

          Google's position, on the other hand, has been a flat-out "no, we will not ship JXL". *That's* what has been met with criticism. Not an imagined reluctance to shipping a C++ JXL implementation.
        • ux26647811 hours ago
          Who is saying Google should ship the reference implementation? It's a standard, and Google has the labor to write their own implementation.
          • jeffbee6 hours ago
            Google did write one. They wrote the bad one that we're discussing.
          • izacus10 hours ago
            That sounds like an even bigger request for someone to do for free, doesn't it?
            • ipdashc8 hours ago
              It's Google, one of the biggest tech companies in the world, making boatloads of money, in part off their browser. They're currently best known as one of the companies trying to create AI God. They really can't write an... image format parser?
  • CharlesW11 hours ago
    JXL's war is not with AVIF, which is already a de facto standard with near-universal browser support, is enshrined as an Apple image default, will only become more popular as AV1 video does, etc. It's not going anywhere.

    That's not to say that JXL is bad or going away. It currently has poor browser support, but it's now finding its footing in niche use cases (archival, prosumer photography, medical), and will eventually become ubiquitous enough to just *be* what the average person refers to as "JPEG" 10 years from now.

    To address selected claims made in the post:

    • *"AVIF is 'homegrown'"* – AVIF is an open, royalty-free AOMedia standard developed by the Alliance for Open Media (Google, Microsoft, Amazon, Netflix, Mozilla, etc.).

    • *"AVIF is 'inferior'"* – AVIF is significantly better than JPEG/WebP in compression efficiency at comparable quality, and comparable with JXL in many scenarios.

    • *"AVIF is ridiculous in this aspect, capping at 8,193×4,320."* – JXL's theoretical maximum image size is bigger. The author cites AVIF's Baseline profile (think embedded devices), but AVIF supports 16,384×8,704 per tile. Its HEIF container format supports a grid of up to 65,535 tiles (so logical image sizes up to 1,073,725,440 wide *or* 283,111,200 tall).

    So, JPEG XL is good. Yes, it's far behind AVIF in terms of adoption and ecosystem, but that will improve. AVIF is likely to erase any current JXL quality advantages with AV2, but both JXL and AV1/AV2 encoders will get better with time, so they're likely to be neck-and-neck in quality for the foreseeable future.
    • von_lohengramm53 minutes ago
      > JXL's theoretical maximum image size is bigger.

      This is all well and good until you actually try encoding such an image with libjxl. What an absolute garbage codebase. I'm sure it's gotten better since I last used it, but it's impressive how unoptimized, memory-hungry, and of course wildly unsafe/crashy it was. Many of the options just completely didn't work, whether due to exponential performance, crashes, or weird special-casing that breaks the moment you encode anything dissimilar from the sample images used in the sham benchmark made by the libjxl creators. I don't think a high-resolution image has ever been successfully encoded at the higher effort levels, since I doubt that anyone trying to do so had the terabytes of RAM required.

      I was genuinely flabbergasted when there was mass support for reviving it a couple of years ago. I don't think anyone advocating for it had actually used libjxl at all; they were just internet hypemen. That seems to happen all too often nowadays.

      All that said, I'm mildly optimistic for a retry with jxl-rs. However, seeing many of the same contributors from libjxl on jxl-rs does make me quite cautious.
    • wherenow45 hours ago
      I seem to recall that a large part of the stated rationale when the Chrome team decided to deprecate JXL support was that they had support for both AVIF and JXL, and AVIF was good enough.

      This might be the origin of the "competition" in the context of this Google decision/reversal.
  • m348e91213 hours ago
    A full-resolution, maximum-size JPEG XL image (1,073,741,823 × 1,073,741,824):

    Uncompressed: 3.5–7 exabytes. Realistically compressed: tens to hundreds of petabytes.

    That's a serious high-res image.
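    The arithmetic behind those figures is easy to check (the bytes-per-pixel sample formats below are illustrative assumptions, using the dimensions quoted above):

```python
# Maximum JPEG XL dimensions as quoted in the comment above.
W, H = 1_073_741_823, 1_073_741_824
pixels = W * H  # ~1.15e18 pixels

# Uncompressed size for a couple of plausible sample formats:
for name, bytes_per_px in [("8-bit RGB", 3), ("16-bit RGB", 6)]:
    exabytes = pixels * bytes_per_px / 1e18
    print(f"{name}: {exabytes:.2f} EB")  # 3.46 EB and 6.92 EB
```

    That matches the 3.5–7 EB range: roughly 3–6 bytes per pixel over ~1.15 quintillion pixels.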
    • xnorswap12 hours ago
      At 600 DPI that's over a marathon in each dimension.

      I do wonder if there are any DoS vectors that need to be considered if such a large image can be defined in a relatively small byte space.

      I was going to work out how many A4 pages that would be to print, but Google's magic calculator that worked really well has been replaced by Gemini, which produces this trash:

          Number of A4 pages = 0.0625 square meters per A4 page * 784 square miles = 13,200 A4 pages.

      No, Gemini, you can't equate meters and miles, even if they do both abbreviate to 'm' sometimes.
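      For what it's worth, the calculation Gemini botched is straightforward (assuming 210 mm × 297 mm A4 paper and the 600 DPI figure above):

```python
INCH_M = 0.0254                  # metres per inch
side_px = 1_073_741_823          # pixels along one side
side_m = side_px / 600 * INCH_M  # edge length at 600 DPI
area_m2 = side_m ** 2
a4_m2 = 0.210 * 0.297            # one A4 sheet in square metres
pages = area_m2 / a4_m2
print(f"{side_m / 1000:.1f} km per side, {pages:.3e} A4 pages")
# ~45.5 km per side (indeed over a marathon), about 3.3e10 A4 pages
```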
      • threeducks12 hours ago
        > I do wonder if there are any DoS vectors that need to be considered if such a large image can be defined in relatively small byte space.

        You can already DoS with SVG images. Usually, the browser tab crashes before worse things happen. Most sites therefore do not allow SVG uploads, except GitHub for some reason.
      • Intralexical8 hours ago
        "Google's magic calculator" was probably just a wrapper around GNU Units [0], which produces:

            $ units
            You have: (1073741823/(600/inch))**2 / A4paper
            You want:
                    Definition: 3.312752e+10

        Equivalent tools: Qalc, Numbat

        [0]: https://news.ycombinator.com/item?id=36994418
        • BenjiWiebe2 hours ago
          It couldn't have been a wrapper; it understood only a tiny, tiny fraction of the things that GNU Units does.
      • fwip12 hours ago
        Wolfram alpha is the better calculator for that sort of thing.
    • yread12 hours ago
      The only practical way to work with such large images is if they are tiled and pyramidal anyway
      • Magnap11 hours ago
        Which JXL supports, by the way. Tiling is mandatory for images bigger than 2048×2048, and you can construct images based on an 8× downscaled version, recursing up to 4 times for up to 4096× downscaling.
      • Akronymus12 hours ago
        what does pyramidal mean in this context?
        • scheme27112 hours ago
          Probably, multiple resolutions of the same thing. E.g. a lower res image of the entire scene and then higher resolution versions of sections. As you zoom in, the higher resolution versions get used so that you can see more detail while limiting memory consumption.
        • magicalhippo3 hours ago
          JPEG and friends transform the image data into the frequency domain. Regular old JPEG uses the discrete cosine transform [1] for this, on 8×8 blocks of pixels. This is why you can see blocky artifacts in heavily compressed JPEG images [2]. JPEG XL uses variable-block-size DCT.

          Let's stick to old JPEG, as it's easier to explain. The DCT takes the 8×8 pixels of a block and transforms them into 8×8 magnitudes of different frequency components. In one corner you have the DC component, i.e. zero frequency, which represents the average of all 8×8 pixels. Around it you have the lowest non-zero frequency components. You have three of those: one with a non-zero x frequency, one with a non-zero y frequency, and one where both x and y are non-zero. The elements next to those are the next-higher frequency components.

          To reconstruct the 8×8 pixels, you run the inverse discrete cosine transform, which is lossless (to within rounding errors).

          However, due to Nyquist [3], you don't need those higher-frequency components if you want a lower-resolution image. So if you instead strip away the highest-frequency components so you're left with a 7×7 block, you can run the inverse transform on that to get a 7×7 block of pixels which perfectly represents a 7/8 = 87.5% sized version of the original 8×8 block. And you can do this for each block in the image to get an 87.5%-sized image.

          Now, the pyramidal scheme takes advantage of this by rearranging how the elements in each transformed block are stored. First it stores the DC components of all the blocks in the image. If you used just those, you'd get an image which perfectly represents a 1/8th-sized image.

          Next it stores all the lowest-frequency components for all the blocks. Using the DC and those, you effectively have 2×2 blocks, and can perfectly reconstruct a quarter-sized image.

          Now, if the decoder knows the target size the image will be displayed at, it can just stop reading once it has sufficiently large blocks to reconstruct the image near the target size.

          Note that most good old JPEG decoders support this already; however, since the blocks are stored one after another, it still requires reading the entire file from disk. If you have a fast disk and not-too-large images, it can often be a win regardless. But if you have huge images which are often not used at their full resolution, then the pyramidal scheme is better.

          [1]: https://en.wikipedia.org/wiki/Discrete_cosine_transform

          [2]: https://eyy.co/tools/artifact-generator/ (artifact intensity 80 or above)

          [3]: https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
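          The frequency-truncation trick described above can be sketched in a few lines of NumPy (a toy orthonormal DCT for illustration, not real JPEG code):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: row k is the k-th cosine basis vector.
    m = np.zeros((n, n))
    for k in range(n):
        for j in range(n):
            scale = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
            m[k, j] = scale * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    return m

def downscale_block(block, k):
    """Reconstruct a k x k pixel block from only the k x k lowest-frequency
    DCT coefficients of an n x n block (the DCT-domain scaling trick)."""
    n = block.shape[0]
    C = dct_matrix(n)
    coeffs = C @ block @ C.T             # forward 2-D DCT
    low = coeffs[:k, :k]                 # keep only the low frequencies
    Ck = dct_matrix(k)
    return (Ck.T @ low @ Ck) * (k / n)   # inverse k x k DCT, renormalized

flat = np.full((8, 8), 100.0)     # a uniform 8x8 block
small = downscale_block(flat, 4)  # half-size version from low frequencies
print(small.round(2))             # every entry is still ~100
```

    Keeping only the 1×1 corner (the DC coefficient) gives the 1/8th-scale image; keeping 2×2 gives quarter scale, and so on.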
        • jjcob12 hours ago
          I think it means encoded in such a way that you first have a low-res version, then higher-res versions, then even higher-res versions, etc.
        • shadowgovt12 hours ago
          Replicated at different resolutions depending on your zoom level.

          One patch at low resolution is backed by four higher-resolution images, each of which is backed by four higher-resolution images, and so on, all on top of an index to fetch the right images for your zoom level and camera position.
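          That quadtree structure is easy to sketch. A minimal pyramid builder (2× block-mean downsampling per level, a stand-in for what real tiled formats do):

```python
import numpy as np

def build_pyramid(image):
    """Build an image pyramid: each level averages 2x2 patches of the
    previous level into single pixels (one low-res pixel summarizes a
    2x2 patch, which summarizes a 4x4 patch, and so on)."""
    levels = [image]
    while levels[-1].shape[0] > 1 and levels[-1].shape[0] % 2 == 0:
        prev = levels[-1]
        h, w = prev.shape
        # Average each 2x2 patch into one pixel.
        nxt = prev.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        levels.append(nxt)
    return levels

img = np.arange(64, dtype=float).reshape(8, 8)
pyr = build_pyramid(img)
print([lvl.shape for lvl in pyr])  # [(8, 8), (4, 4), (2, 2), (1, 1)]
```

    A viewer then picks the level closest to the on-screen resolution and fetches only the tiles in view.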
        • jjk712 hours ago
          Tiled at different zoom levels
      • wang_li12 hours ago
        We call those mipmaps.
    • flakes1 hour ago
      A selfie at that resolution would be some sort of super-resolution microscopy.
    • flir12 hours ago
      An image of Earth at very roughly 4cm×4cm resolution? (If I've knocked the zeros off correctly)
      • aidenn04 hours ago
        Each pixel would represent roughly 4.4 cm² using a cylindrical equal-area projection. Pixels would cover less distance E-W and more distance N-S as you move away from the equator.

        No projection of a sphere on a rectangle can preserve both direction and area.
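        A back-of-the-envelope version of that calculation (assuming Earth's surface area is roughly 5.1×10^8 km²):

```python
EARTH_M2 = 5.10e14                      # Earth's surface area in m^2 (assumed)
pixels = 1_073_741_823 * 1_073_741_824  # maximum JPEG XL pixel count
cm2_per_px = EARTH_M2 / pixels * 1e4    # convert m^2 to cm^2
print(f"~{cm2_per_px:.1f} cm^2 per pixel")  # ~4.4 cm^2 per pixel
```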
    • cubefox12 hours ago
      Yes, but unlike AVIF, JPEG XL supports progressive decoding, so you can see the picture in lower quality long before the download has finished. (Ordinary JPEG also supports progressive decoding, but in a much less efficient manner, which means you have to wait longer for previews with lower quality.)
      • tyre12 hours ago
        I don’t think the issue with the exabyte image is progressive decoding, though it would at least get you an image of what is bringing down your machine while you wait for the inevitable!
    • mcdonje12 hours ago
      [flagged]
      • westmeal12 hours ago
        They still down voted anyway lol
        • mcdonje11 hours ago
          At least I didn&#x27;t give Dang extra work.
          • westmeal10 hours ago
            Lol yeah Dang has a lot of flame wars to deal with
  • dweekly13 hours ago
    Prior HN posts/discussions:

    Chromium Team Re-Opens JPEG XL Feature Ticket https://news.ycombinator.com/item?id=46018994

    FSF Slams Google over Dropping JPEG-XL in Chrome https://news.ycombinator.com/item?id=35589179

    Google set to deprecate JPEG XL support in Chrome 110 https://news.ycombinator.com/item?id=33399940

    Chromium jpegxl issue closed as won't fix https://news.ycombinator.com/item?id=40407475
    • dang11 hours ago
      Lots more at https://news.ycombinator.com/item?id=36214955 and the links back from there, and I'm sure there are others between then and now. Too many to list!
    • ChrisArchitect13 hours ago
      [dupe]

      Main recent discussion:

      *Google Revisits JPEG XL in Chromium After Earlier Removal*

      https://news.ycombinator.com/item?id=46021179
      • ChrisArchitect10 hours ago
        Not to mention this other dupe with lots of discussion, also from last week: https://news.ycombinator.com/item?id=46033330
  • shevy-java12 hours ago
    "in favor of the homegrown and inferior AVIF"

    I am using .avif since some years; all my old .jpg and .png files have been pretty much replaced by .avif, in particular fotos. I am not saying .avif is perfect, but IMO it is much better than .jpg or .avif.

    I could have gone .webp or perhaps jpeg-xl, but at the end of the day I am quite happy with .avif as it is.

    As for JPEG XL - I think the problem here is ... Google. Google dictates de-facto web standards onto us. This is really bad. I don't want a commercial entity to control my digital life.
    • rottencupcakes9 hours ago
      > I am not saying .avif is perfect, but IMO it is much better than .jpg or .avif

      going crazy reading this sentence
      • mrbluecoat8 hours ago
        recursive logic is recursive logic is
    • aidenn03 hours ago
      For making compact high-quality JPEG files, consider trying jpegli [1]; it does an impressive job.

      More specifically, if I try a bunch of AVIF quantization options and manually pick the one that appears visually lossless, AVIF beats jpegli; but if I select a quantization option that always looks visually lossless with AVIF, jpegli wins on average size, because I need to leave some headroom for images that AVIF does less well on.

      [1]: https://github.com/google/jpegli
    • senbrow11 hours ago
      no one asked, but FYI in English it is more common to say "for several years" instead of "since some years" :)
      • phatfish10 hours ago
        German speakers usually have very good English, but this is one of their tells.
        • lsecondario9 hours ago
          Another one I've noticed is using "I've" as a contraction in e.g. "I've a meeting to attend". Seems totally reasonable, but for some reason native speakers just don't use it that way.
          • rottencupcakes9 hours ago
            "I've" is only used when a verb follows and the "have" is part of the verb's construction.

            As in "I've done it" or "I've seen it".

            It would not be used before a noun, in the context of ownership, as in "I have a meeting".
          • darrenf7 hours ago
            Wait, what? Englishman in my 50s here, and I use phrases like that all the time: "I'll be missing standup cos I've a GP appointment", "leaving at lunchtime as I've a train to catch", "gotta dash, I've chores to do". No one's ever said I sound German!
            • mpyne7 hours ago
              I think it's more fair to call it a distinguisher of American English vs. British English.

              Even just reading "I've a train to catch" gives a British accent in my mind.
          • jamiek885 hours ago
            Nah that’s just Americans. Brits and Aussies say it all the time. Not sure about Canadians.
        • bxparks4 hours ago
          Could also be French speakers. They would say "J'utilise le format .avif depuis quelques années." ("I have been using the .avif format for a few years.") I think the "depuis" throws off French speakers when they translate it literally as "since some years" instead of "for some years".

          Another common tell: I wake up in the morning in the US/Pacific time zone and see the European writers on HN using "I have ran" instead of "I have run".
        • Grosvenor8 hours ago
          German speakers usually have very good English, but this is already one of their tells.

          Fixed that for you.
  • ragall9 hours ago
    Quick reminder that it's not "Google" that killed JXL before; it was the Chrome team. JPEG XL was designed by a Google engineer (JyrkiAlakuijala here) who is not part of the Chrome team but of Google Research in the Zurich office, while the Chrome team, although it has offices all around the world, at its core is very insular and lives in the Mountain View bubble.
    • qingcharles7 hours ago
      Jyrki is highly talented. He is also the author of the incredible jpegli, which seemed to be a reaction to Google deep-sixing JPEG XL, and of Brotli, WebP lossless, and WOFF2, among other things.
    • jiggawatts7 hours ago
      https://en.wikipedia.org/wiki/Conway%27s_law
  • egorfine11 hours ago
    A little bit related: RAW files from iPhone 17 Pro are compressed using JPEG-XL.
  • EMM_38612 hours ago
    Isn't this due to the 100M+ line C++ multi-threaded dependency being a potential nightmare when you are dealing with images in browsers/emails/etc. as an attack surface?

    I think both Mozilla and Google are OK with this if it is written in Rust, in order to avoid that situation.

    I know the linked post mentions this, but isn't that the crux of the whole thing? The standard itself is clearly an improvement over what we've had since forever.
    • tensegrist12 hours ago
      100M+ is a bit more than I would expect for an image format. Have I not been paying attention?
      • aw162110712 hours ago
        According to tokei, the lib/ directory of the reference implementation [0] has 93,821 lines of C++ code and 22,164 lines of "C header" (which seems to be a mix of C++ headers, C headers, and headers that are compatible with both C and C++). The tools/ directory adds 16,314 lines of C++ code and 1,952 lines of "C header".

        So at least if GP was talking about libjxl, "100K+" would be more accurate.

        [0]: https://github.com/libjxl/libjxl
        • jiggawatts7 hours ago
          One of the best ways to measure code complexity is to zip up the source code. This eliminates a lot of the redundancy and is a more direct measure of entropy/complexity than almost anything else.

          By that metric, JPEG XL is about 4x the size of the JPEG or PNG codebase.
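          The idea is easy to try with zlib (a crude sketch on synthetic strings; a real comparison would compress the actual codebases):

```python
import zlib

def compressed_size(text: str) -> int:
    """Proxy for complexity: length of the DEFLATE-compressed text."""
    return len(zlib.compress(text.encode(), 9))

# Highly redundant code compresses far better than varied code,
# so compressed size tracks information content, not raw line count.
boilerplate = "int x = x + 1;\n" * 200
varied = "".join(f"int x{i} = {i * 7 % 97};\n" for i in range(200))

print(compressed_size(boilerplate), compressed_size(varied))
```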
          • tkfoss7 hours ago
            Interesting approach
            • jiggawatts3 hours ago
              It comes from the "intelligence is a form of compression" hypothesis that has been floating around in the ML space. Also, with a good compression algorithm it is a fairly direct measure of entropy, which is quite well correlated with what a developer might consider code size and/or complexity.
        • palmotea11 hours ago
          >> 100M+ is a bit more than i would expect for an image format. have i not been paying attention

          > So at least if GP was talking about libjxl "100K+" would be more accurate.

          M can mean thousands, and I think it's common to use it that way in finance and finance-adjacent areas: https://www.chicagomanualofstyle.org/qanda/data/faq/topics/Abbreviations/faq0094.html:

          > A. You've identified two commonly used conventions in finance, one derived from Greek and the other from Latin, but neither one is standard. Starting with the second convention, M is used for amounts in the thousands and MM for amounts in the millions (usually without a space between the number and the abbreviation—e.g., $150M for $150,000 and $150MM for $150 million). This convention overlaps with the conventions for writing roman numerals, according to which a thousand is represented by M (from mille, the Latin word for "thousand"). Any similarity with roman numerals ends there, however, because MM in roman numerals means two thousand, not a thousand thousands, or one million, as in financial contexts...

          https://www.accountingcoach.com/blog/what-does-m-and-mm-stand-for:

          > An expense of $60,000 could be written as $60M. Internet advertisers are familiar with CPM, which is the cost per thousand impressions.

          > The letter k is also used to represent one thousand. For example, an annual salary of $60,000 might appear as $60k instead of $60M.
          • WheatMillington11 hours ago
            I assume this is regional... I work in accounting and finance in New Zealand (generally following ordinary Western&#x2F;Commonwealth standards) and I&#x27;ve never heard of using M for thousands. If I used that I would confuse the hell out of everyone around me.
            • mkaic10 hours ago
              "It's... a regional dialect."

              "What region?"

              "Er, upstate New York."

              "Really. Well, I'm from Utica and I've never heard anyone use the phrase '100M' to mean '100 thousand'."

              "Oh, no, not in Utica. It's an Albany expression."
            • qingcharles7 hours ago
              In some areas M is *mille*, as in the Latin/French/Italian word for thousand, e.g.

              https://en.wikipedia.org/wiki/Cost_per_mille
          • dataflow10 hours ago
            Okay, but this is... not finance? And the article itself wrote 100K. Rewriting that as 100M does nobody a favor.
          • sealeck9 hours ago
            I don't think many (if any) programmers would imagine 100M lines of code to mean 100,000 lines of code and not 100,000,000...
          • uselesswords9 hours ago
            Technically right is the worst kind of right
      • munificent11 hours ago
        The article says 100K, not 100M. I'm guessing that's what the parent comment meant.

        100 MLOC for an image format would be bananas. You could fit the entire codebases of a couple of modern operating systems and a handful of AAA video games in that, and still have room for several web apps and command-line utilities.
        • JyrkiAlakuijala11 hours ago
          The article includes test code and encoder code; that is not the way we compute the decoder size.

          The decoder is something around 30 kloc.
      • crooked-v12 hours ago
        It's a container format that does about a bajillion things: lossy, lossless, multiple modes optimized for different image types (photography vs. digital design), modern encode/decode algorithms, perceptual color space, adaptive quantization, efficient ultra-high-resolution decoding and display, partial and complete animation, tile handling, everything JPEG does, and a bunch more.
        • furyofantares12 hours ago
          The Linux kernel is 40M lines of code after 34 years of development.

          OP might as well have said "infinite lines of code" for JPEG XL and wouldn't have been much less accurate. Although I'm guessing they meant 100K.
      • GaggiX11 hours ago
        They meant to say 100K instead of 100M.
    • dataflow12 hours ago
      You mean 100K+? A large chunk of which they say is testing code?
    • JyrkiAlakuijala11 hours ago
      This is some strange misinformation.

      The C++ JPEG XL decoder is ~30,000 lines, i.e., 3,000x smaller than you claim. Non-multithreaded, non-SIMDified code would be much simpler, around 8,000 to 10,000 lines of code.

      It is not difficult to measure from the repository. The compiled, compressed binary for an APK is 5x smaller than that of full AVIF. The complete specification, at under 100 pages, is ~13x more compact than that of full AVIF.
      • charleslmunger1 hour ago
        &gt;The compiled compressed binary for an APK<p>This doesn&#x27;t undermine your argument at all, but we should not be compressing native libs in APKs.<p><a href="https:&#x2F;&#x2F;developer.android.com&#x2F;guide&#x2F;topics&#x2F;manifest&#x2F;application-element#extractNativeLibs" rel="nofollow">https:&#x2F;&#x2F;developer.android.com&#x2F;guide&#x2F;topics&#x2F;manifest&#x2F;applicat...</a>
    • ajcp12 hours ago
      -&gt; They were concerned about the increased attack surface resulting from including the current 100K+ lines C++ libjxl reference decoder, even though most of those lines are testing code.<p>Seems like Google has created a memory-safe decoder for it in Rust or something.
    • bmicraft9 hours ago
Google is one of the parties involved in the creation of jxl. It&#x27;s their own fault they didn&#x27;t write a decoder in a memory-safe language sooner.
    • cornstalks12 hours ago
libjxl is &lt;112,888 lines of code, about 3 orders of magnitude less than your 100M+ claim.
      • sunaookami10 hours ago
        Do people really not know what a hyperbole is?
        • cornstalks10 hours ago
          100M+ lines of code isn&#x27;t a hyperbole for some codebases, though. google3 is estimated at about 2 billion lines of code, for example.<p>Maybe it was hyperbole. But if it was it wasn&#x27;t obvious to me, unfortunately.
    • theoldgreybeard11 hours ago
      because memory safety is the only attack vector, as we all know
      • UltraSane9 hours ago
        It is a very big one and eliminating it is a huge improvement in security. You can then spend more time fixing all the other sources of security problems.
    • otabdeveloper411 hours ago
      &gt; ...but now in le Rust!!1<p>I look forward to the next generation of rubes rewriting this all in some newer &quot;&quot;safe&quot;&quot; language in three decades.
      • UltraSane3 hours ago
        Because a language happily letting you try to access an array index far past its end isn&#x27;t stupid at all.
    • MaxBarraclough11 hours ago
      &gt; I think both Mozilla and Google are OK with this - if it is written in Rust in order to avoid that situation.<p>It would need to be written in the <i>Safe Rust</i> subset to give safety assurances. It&#x27;s an important distinction.
      • dgacmu11 hours ago
        99% safe with 1% unsafe mixed in is far, far better than 100k loc of c++ -- look at Google&#x27;s experience with rust in Android. It&#x27;s not perfect and they had one &quot;almost vulnerability&quot; but the rate of vulnerabilities is much, much lower even with a bit of unsafe mixed in.
        • MaxBarraclough7 hours ago
          Agreed, and Google developers can probably be trusted to &#x27;act responsibly&#x27;, but too often people forget the distinction. Some Rust codebases are wildly unsafe, and too often people see <i>written in Rust</i> and falsely conclude it&#x27;s a memory-safe codebase.
  • binary13212 hours ago
    Starting to feel like this whole &quot;standards&quot; thing is a giant farce
    • criddell12 hours ago
      Well, there are <i>de jure</i> standards (what the w3c says a browser should do) and <i>de facto</i> standards (what Chrome does).
      • shadowgovt12 hours ago
        As it ever was. Standards are a three-edged sword: spec, intent of spec, and implementations of spec.
    • izacus12 hours ago
      Which standard requires support of JXL?
      • scheme27112 hours ago
        The PDF association apparently recently added jpeg xl to the pdf spec and indicated that it&#x27;s the preferred solution for HDR content.
        • jsheard11 hours ago
          Then again PDF also technically supports embedded audio, video, 3D graphics, and arbitrary Javascript. If Flash hadn&#x27;t died it would probably still support that too. It&#x27;s a clown car format where everyone besides Adobe just tacitly agrees to ignore huge chunks of the spec.
          • josefx11 hours ago
&gt; It&#x27;s a clown car format<p>As is the destiny of any document format in widespread use: PDF had Flash, doc had ActiveX.<p>Also this text is formatted using a markdown language fully capable of embedding entire applications.
          • kmeisthax2 hours ago
            PDF had Flash support? I thought the Flash Xtra for Shockwave was nuts...
        • izacus10 hours ago
          Web standard I meant. The OP didn&#x27;t talk about PDFs from context.
    • lgl12 hours ago
      Obligatory xkcd: <a href="https:&#x2F;&#x2F;xkcd.com&#x2F;927" rel="nofollow">https:&#x2F;&#x2F;xkcd.com&#x2F;927</a>
  • moffkalast12 hours ago
    &gt; Yes, right, “not enough interest from the entire ecosystem”. Sure.<p>Well tbf, the only time I ever hear about JPEG XL is when people complain about Chrome not having it. I think that might be its only actual use case.
    • CharlesW11 hours ago
The biggest &quot;win&quot; for JPEG XL so far was last year&#x27;s adoption by Apple for ProRAW, and prosumer photography will likely be JPEG XL&#x27;s primary mainstream use case. Pros will continue to shoot in &quot;actual RAW&quot;, and consumers will (and this is not an insult) continue to have no interest in the technical details of the compressed media formats being used.<p><a href="https:&#x2F;&#x2F;petapixel.com&#x2F;2024&#x2F;09&#x2F;18&#x2F;why-apple-uses-jpeg-xl-in-the-iphone-16-and-what-it-means-for-your-photos&#x2F;" rel="nofollow">https:&#x2F;&#x2F;petapixel.com&#x2F;2024&#x2F;09&#x2F;18&#x2F;why-apple-uses-jpeg-xl-in-t...</a>
      • hmbfcvib36 minutes ago
        Don’t conflate non-linear and linear image formats.
  • shmerl4 hours ago
Good, but mass adoption is a lot slower in sites than in browsers, it seems. It&#x27;s like pulling teeth getting sites to actually support even AVIF, which is already widely supported in browsers. A ton of inertia even on sites like GitHub and GitLab. Try using AVIF on Wikipedia? Tough luck.<p>Imagine how long it will take for JPEG XL, which hasn&#x27;t even reached wide browser support yet.<p>Side note: comparing JPEG XL and AVIF feature-wise is sort of pointless if AVIF will continue to evolve based on AV2 and so on.
    • hxtk3 hours ago
      There’s also the issue of non-browser support. I recently advocated for replacing some GIFs with WEBM because WEBM was faster to encode and took up 3% as much space. Technically it sounded great. Then we talked to users.<p>It turns out some users wanted to embed moving pictures in Word documents, which you can only do with a GIF because it’s an image format that happens to move, so Word treats it as an image (by rendering it to the page). If it’s a video format, Word treats it as an attachment that you have to click on so it’ll open Media Player and show you.
  • Finnucane13 hours ago
    Cool, that means it&#x27;ll appear in ebook reading systems in five to ten years.
    • PaulHoule13 hours ago
      It&#x27;ll be in PDF sooner, and my experience is that PDF &gt;&gt; any other system for ebooks. I liked the <i>idea</i> of EPUB but when I recently installed an EPUB reader to read some files I was shocked at how awful it looked whereas for 15 years I&#x27;ve been reading PDF files on tablets with relish.
      • mubou213 hours ago
        Have you ever tried reading a PDF ebook on a phone? Small font size, doesn&#x27;t fill the entire screen (phones are taller), margins make it appear even smaller... even if you have good eyesight it&#x27;s a pain. The whole point of PDF is to preserve a page layout as authored. EPUB is meant to adapt to your device.
      • kace9112 hours ago
        &gt;and my experience is that PDF &gt;&gt; any other system for ebooks.<p>Are you speaking just about technical books?<p>Because I can’t imagine anyone trying to read a novel in epub vs pdf on a phone or epub reader and going with the latter.
        • PaulHoule11 hours ago
I am mostly reading on a tablet, not a phone. I think if you are reading on a phone you are already screwed; if people are “reading” on phones I think 80% of it is that you just read less.
          • kace9110 hours ago
            That’s a pretty judgemental statement out of nowhere - and completely ignored the ebook readers part, which are devices literally created for this purpose.<p>As for phones, screens nowadays are almost the same size as readers and with more resolution. E-ink is more comfortable for longer sessions, but if you find such a size unusable you might just have poor eyesight.
          • klempner2 hours ago
As someone who is super nearsighted, the smaller screen on a phone is great for reading, especially in contexts like bedtime reading where I want to have my glasses off.<p>I have read many hundreds of books this way.<p>The problem with a tablet is that most tablets, especially the sort that are good for seeing entire as-printed pages at once, are too big for me to keep the entire screen in focus without wearing glasses. (With that said, foldables improve things here, since the aspect ratio bottleneck is typically width, so being able to double the width on the fly makes such things more readable.)
      • NoMoreNicksLeft12 hours ago
        The worst epubs are bad because some jackass took some poorly OCRed text and dumped it into the format. The best (retail) epubs are on par with the best PDFs except you don&#x27;t have to pan-and-scan to read a fucking page. It just reflows.<p>For novels I want and prefer epubs, but also non-novels if they were released in the last 5 years or so. PDF isn&#x27;t magic, and there are bad pdfs out there too, scans of photo-copied books and other nonsense.
        • PaulHoule11 hours ago
There is a mode for PDF files that reflows and is logically similar to EPUB in that there is an HTML-derived data model and you have images embedded in the PDF much as they are embedded in the EPUB. Of course, if you hate how complex PDF is, this gives you more to hate.
        • Finnucane11 hours ago
          I oversee ebook production for a uni press so I am familiar with how the proverbial sausage is made. Which is why I still mainly prefer print books.
          • NoMoreNicksLeft9 hours ago
            There might be something said for academic texts with their tables of figures and diagrams and so forth. But even then, PDF can be nasty.
      • majora200712 hours ago
That&#x27;s interesting, I absolutely hate PDF. Lack of metadata for collecting, the format is difficult to support, it doesn&#x27;t lay out well on mobile, and very limited customization (like dark mode, changing text size, etc).<p>Only benefit is browsers have built-in support for the format.
        • leosanchez12 hours ago
          One thing I like about PDF is the annotations (notes &amp; highlights) are embedded in the PDF itself. That is not the case for EPUB files, each EPUB reader stores annotations in its own proprietary format.
          • Zardoz8410 hours ago
EPUB is a glorified HTML page in a zip file.
        • swiftcoder12 hours ago
          &gt; Lack of metadata for collecting<p>PDFs have pretty excellent support for metadata. If the collection software doesn&#x27;t support at least Dublin Core, that may be kind of their own fault...
    • IshKebab13 hours ago
      That seems optimistic...
  • ballpug10 hours ago
The 100k+ lines of C++ are in the libjxl repository, which contains the JPEG XL reference implementation.<p>Decoding a JPEG XL file is just: djxl input.jxl output.png (the companion encoder is cjxl).
  • pornel12 hours ago
    AV2 is in the works, so I guess we&#x27;ll have AVIF2 soon, and another AVIF2 vs JPEG XL battle.
    • dralley12 hours ago
      There&#x27;s no particular reason for an image format based on video codec keyframes to ever support a lot of the advanced features that JPEG XL supports. It might compress better than AVIF 1, but I doubt it would resolve the other issues.
  • rootnod312 hours ago
    Do we now need <a href="https:&#x2F;&#x2F;unkilledbygoogle.com" rel="nofollow">https:&#x2F;&#x2F;unkilledbygoogle.com</a>?
  • IncreasePosts11 hours ago
    I believe the appropriate term is ununaliving. Please communicate with care.
    • cpburns200911 hours ago
      That would make more sense than &quot;unkill&quot;.
  • bmacho11 hours ago
    Great now unkill xhtml&#x2F;xml+xstl
  • theturtle9 hours ago
    [dead]
  • ocdtrekkie13 hours ago
As a monopoly, Google should be barred from having standards positions and be legally required to build and support the web standards as determined by other parties.<p>The situation where the web platform is just &quot;whatever Google&#x27;s whims are&quot; remains insane and mercurial. The web platform should not be as inconsistent as Google&#x27;s own product strategies; wonder if XSLT will get unkilled in a few months.
    • simonw13 hours ago
Having key browser implementers not involved in the standards processes is what led us to the W3C wasting several years chasing XHTML 2.0.
      • dpark12 hours ago
I kind of liked xhtml, though clearly it was not necessary for the web to be successful. I think the bigger issue is that W3C pursued this to the detriment of more important investments.<p>Reading over the minutes for the last W3C WG session before WHATWG was announced, the end result seems obvious. The eventual WHATWG folks were pushing for investment in web-as-an-app-platform and everyone else was focused on what was, in retrospect, very unimportant stuff.<p>“Hey, we need to be able to build applications.”<p>“Ok, but first we need <i>compound documents</i>.”<p>There was one group who thought they needed to build the web as Microsoft Word and another that wanted to create the platform on which Microsoft Word could be built.
        • josefx10 hours ago
          &gt; and another that wanted to create the platform on which Microsoft Word could be built.<p>Apparently they failed. The web version of Word is still far from having feature parity. Of course doc is one of those everything and the kitchen sink formats, so implementing it on top of a platform that was originally intended to share static documents is kind of a tall order.
          • arccy8 hours ago
            that&#x27;s just microsoft not being good. Google Docs exists and is pretty good.
            • circuit105 hours ago
              OnlyOffice is HTML5-based too
      • xg1512 hours ago
        There is a difference between having them &quot;involved&quot; and them being the only authority in the entire process.
      • ocdtrekkie12 hours ago
        There are other key browser implementers. Google should not have more than an advisory role in any standards organization.
        • dpark12 hours ago
          The other key browser implementers are also part of WHATWG.<p>Who do you suppose should be in charge of web standards? I can’t imagine the train wreck of incompetence if standards were driven by bureaucrats instead of stakeholders.
          • xg1512 hours ago
            How about the users and web authors?
            • dpark12 hours ago
              Saying web users should define web standards is like saying laptop users should design CPUs. They lack the expertise to do this meaningfully.<p>Web authors? Maybe. WHATWG was created specifically because W3C wasn’t really listening to web authors though.<p>I don’t think there are a lot of scenarios where standards <i>aren’t</i> driven by implementers, though. USB, DRAM, WiFi, all this stuff is defined by implementers.
              • aleph_minus_one11 hours ago
                &gt; WHATWG was created specifically because W3C wasn’t really listening to web authors though.<p>Rather: WHATWG was founded because the companies developing browsers (in particular Google) believed that what the W3C was working on for XHTML 2.0 was too academic, and went into a different direction than their (i.e. in particular Google&#x27;s) vision for the web.
                • dpark11 hours ago
                  Literally the WHATWG founders wanted to focus on web applications, which they said web authors were asking for, and they got voted down.<p>Google was not involved in the founding of WHATWG, though certainly the WHATWG vision was better aligned with Google than with what the W3C was doing.
                  • xg1510 hours ago
                    They only paid the salary of its chief editor (Ian Hickson) for a significant amount of time...<p>But that&#x27;s not very relevant actually. The WHATWG is more like a private arbitrator, not like a court or parliament.<p>Their mission is to document browser features and coordinate them in such a way that implementation between browsers doesn&#x27;t diverge too much. It&#x27;s NOT their mission to decide which features will or will not be implemented or even to design new features. That&#x27;s left to the browser vendors.<p>And the most powerful browser vendor is Google.
                    • dpark8 hours ago
                      This is such a bizarre response to me saying Google was not part of the founding WHATWG group. It’s like you want to have an argument but don’t have anything to argue about.<p>“Oh, yeah? Well they paid Hickson’s salary. And the WHATWG doesn’t matter anyway. And also Google is really powerful.”<p>Um, ok.<p>WHATWG was founded in 2004 by Mozilla, Opera, and Apple. Google had no browser at that point and didn’t hire Ian Hickson until 2005.<p>Google is currently a WHATWG member and clearly wields a great deal of influence there. And yeah, the 4 trillion dollar internet giant is powerful. No argument there.
                • magicalist11 hours ago
                  &gt; <i>Rather: WHATWG was founded because the companies developing browsers (in particular Google) believed that what the W3C was working on for XHTML 2.0 was too academic, and went into a different direction than their (i.e. in particular Google&#x27;s) vision for the web.</i><p>Mozilla, Opera and Apple. Google didn&#x27;t have a browser then, hadn&#x27;t even made the main hires who would start developing Chrome yet and hixie was still at Opera.
            • shadowgovt12 hours ago
              Ask users what they want and they say &quot;faster horses,&quot; not cars.<p>Users are a key information source but they don&#x27;t know how to build a web engine, they don&#x27;t know networks, and they don&#x27;t know security; and therefore can&#x27;t dictate the feature set.
            • jpadkins12 hours ago
              web users make their choice via choice in browsers.
          • ocdtrekkie12 hours ago
And those implementers should make decisions; Google should be bound by the FTC to support their recommendations.<p>Honestly, what&#x27;s really funny here is how absolutely horrified people are by the suggestion that a single company with a monopoly shouldn&#x27;t also define the web platform. I really think anyone who has any sort of confusion about what I commented here should take a long, hard look at their worldview.
            • dpark12 hours ago
              &gt; And those implementers should make decisions, Google should be bound by the FTC to supporting their recommendations.<p>Is your proposal essentially that Mozilla defines web standards Google is legally bound to implement them?<p>&gt; what&#x27;s really funny here is how absolutely horrified people are by the suggestion<p>Not horrified, but asking what the alternative is. I don’t think you’ve actually got a sensible proposal.<p>Cooperation in the WHATWG is voluntary. Even <i>if</i> there were some workable proposal for how to drive web standards without Google having any decision making power, they could (and presumably would) decline to participate in any structure that mandated what they have to build in Chrome. Absent legal force, no one can make Google cede their investment in web standards.
              • ocdtrekkie12 hours ago
We have the legal force to do this. Google has already been determined to be abusing the illegal monopoly they have with Chrome. The penalty phase is ongoing, but consider that even forcing Google to sell Chrome was originally considered as a possible penalty.<p>Requiring Google to implement the standards as agreed by Apple, Mozilla, and Microsoft is not remotely outside the realm of the legal force that could be applied.
                • dpark11 hours ago
                  There’s something not quite right about saying one member of an oligopoly should be forced to follow the dictates of the other members of an oligopoly. I don’t feel like this actually solves anything.<p>I feel like Mozilla would end up being a Google proxy in this case as they fear losing their funding and Apple and Microsoft would be incentivized to abuse their position to force Google not to do the best thing for the public but the best thing for Apple and Microsoft.
                  • moron4hire10 hours ago
Yeah, that feels like state-sponsored formalizing of oligopolies into cartels. We&#x27;d like it if they went in the complete opposite direction: less power, not more.
                  • ocdtrekkie11 hours ago
I agree there&#x27;s already a significant proxy risk with Mozilla (though Mozilla does consider many Google web proposals harmful today), but that is also no less true today, and in fact, today that means Google holds two votes not one.<p>I would again agree Microsoft and Apple will heavily endorse their own interests, Microsoft much more so in terms of enterprise requirements and Apple much more so in terms of privacy-concerned consumers. The advertising firm influence will be significantly diminished and that is a <i>darn shame</i>.
            • fngjdflmdflg12 hours ago
              &gt;what&#x27;s really funny here is how absolutely horrified people are by the suggestion a single company which has a monopoly shouldn&#x27;t also define the web platform<p>They don&#x27;t. In general browser specs are defined via various standards groups like WHATWG. As far as I know there is no standard for what image formats must be supported on a web browser,[0] which is why in this one case any browser can decide to support an image format or not.<p>[0] <a href="https:&#x2F;&#x2F;developer.mozilla.org&#x2F;en-US&#x2F;docs&#x2F;Web&#x2F;HTML&#x2F;Reference&#x2F;Elements&#x2F;img#supported_image_formats" rel="nofollow">https:&#x2F;&#x2F;developer.mozilla.org&#x2F;en-US&#x2F;docs&#x2F;Web&#x2F;HTML&#x2F;Reference&#x2F;...</a>
    • SquareWheel13 hours ago
      Which other parties? Because Mozilla&#x27;s stance on JPEG XL and XSLT are identical to Google&#x27;s. They don&#x27;t want to create a maintenance burden for features that offer little benefit over existing options.
      • mubou213 hours ago
        Didn&#x27;t Mozilla basically say they would support it if Google does? Mozilla doesn&#x27;t have the resources to maintain a feature that no one can actually use; they&#x27;re barely managing to keep up with the latest standards as it is.
        • philipallstar13 hours ago
          They have many millions to spend on engineers. They should do that.
          • DrewADesign12 hours ago
            Just come up with some way to make it a huge win for Pocket integration or the like.
        • jamesnorden12 hours ago
          Yeah, they need those resources to pay the CEO!
        • josefx10 hours ago
          &gt; maintain a feature that no one can actually use;<p>If only there was a way to detect which features a browser supports. Something maybe in the html, the css, javascript or the user agent. If only there was a way to do that, we would not be stuck in a world pretending that everything runs on IE6. &#x2F;s
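Sarcasm aside, plain HTML does have exactly this: the &lt;picture&gt; element lets a page offer several encodings and the browser renders the first type it can decode. A minimal sketch (file names here are placeholders):

```html
<!-- The browser walks the <source> list in order and uses the first
     MIME type it supports; the plain <img> is the universal fallback. -->
<picture>
  <source srcset="photo.jxl" type="image/jxl">
  <source srcset="photo.avif" type="image/avif">
  <img src="photo.jpg" alt="photo with JPEG fallback">
</picture>
```

So a site can ship JPEG XL today and degrade gracefully in browsers that lack support, without any user-agent sniffing.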
      • jfindper13 hours ago
        &gt;<i>Because Mozilla&#x27;s stance on JPEG XL and XSLT are identical to Google&#x27;s.</i><p>Okay, and do they align on every other web standard too?
        • johncolanduoni13 hours ago
          Usually it’s Mozilla not wanting to implement something Google wants to implement, not the other way around.
          • jfindper9 hours ago
            Indeed, you&#x27;re making my point.<p>SquareWheel implied that Mozilla doesn&#x27;t count as an &quot;other party&quot; because they are aligned with Google on this specific topic.<p>My comment was pointing out that just because they are aligned on this doesn&#x27;t mean they are aligned on everything, so Mozilla <i>is</i> an &quot;other party&quot;.<p>And, as you have reinforced, Google and Mozilla are not always in alignment.
      • Fileformat12 hours ago
        Which is why Firefox is steadily losing market share.<p>If Mozilla wanted Firefox to succeed, they would stop playing &quot;copy Chrome&quot; and support all sorts of things that the community wants, like JpegXL, XSLT, RSS&#x2F;Atom, Gemini (protocol, not AI), ActivityPub, etc.<p>Not to mention a built-in ad-blocker...
        • dralley12 hours ago
With all due respect, this is a completely HN-brained take.<p>No significant number of users chooses their browser based on support for <i>image codecs</i>. Especially not when no relevant website will ever use them until Safari and Chrome support them.<p>And websites which already do not bother supporting Firefox very much will bother even less if said browser by-default refuses to allow them to make revenue. They may in fact go even further and put more effort into trying to block said users unless they use a different browser.<p>Despite whatever HN thinks, Firefox lost marketshare on the basis of:<p>A) heavy marketing campaigns by Google including backdoor auto-installations via crapware installers like free antivirus, Java and Adobe, and targeted popups on the largest websites on the planet (which are primarily google properties). The Chrome marketing budget alone nearly surpasses Mozilla&#x27;s entire budget and that&#x27;s not even accounting for the value of the aforementioned self-advertising.<p>B) being a slower, heavier browser at the time, largely because the extension model that HN loved so much and fought the removal of was an architectural anchor, and beyond that, XUL&#x2F;XPCOM extensions were frequently the cause of the most egregious examples of bad performance, bloat and brokenness in the first place.<p>C) being &quot;what their cellphone uses&quot; and Google being otherwise synonymous with the internet, like IE was in the late 1990s and early 2000s. Their competitors (Apple, Microsoft, Google) all own their own OS platforms and can squeeze alternative browsers out by merely being good enough or integrated enough not to switch for the average person.
          • Fileformat10 hours ago
            I don&#x27;t disagree with you, but given (A) how will Firefox ever compete?<p>One possible way is doing things that Google and Chrome don&#x27;t (can&#x27;t).<p>Catering to niche audiences (and winning those niches) gives people a reason to use it. Maybe one of the niches takes off. Catering to advanced users not necessarily a bad way to compete.<p>Being a feature-for-feature copy of Chrome is not a winning strategy (IMHO).
            • dralley9 hours ago
              &gt;Being a feature-for-feature copy of Chrome is not a winning strategy (IMHO).<p>Good thing they aren&#x27;t? Firefox&#x27;s detached video player feature is far superior to anything Chrome has that I&#x27;m aware of. Likewise for container tabs, Manifest V2 and anti-fingerprinting mode. And there are AI integrations that do make sense, like local-only AI translation &amp; summaries, which could be a &quot;niche feature&quot; that people care about. But people complain about that stuff too.
              • Fileformat7 hours ago
                And these aren&#x27;t niche&#x2F;advanced features? I&#x27;m using Firefox now, and did not know about them. If I&#x27;m using them, it is only accidentally or because they are the defaults.<p>But I&#x27;m agreeing with you! These features are important to you, an advanced user. The more advanced users for Firefox, the better.
        • dpark12 hours ago
          &gt; all sorts of things that the community wants, like JpegXL, XSLT, RSS&#x2F;Atom, Gemini (protocol, not AI), ActivityPub, etc.<p>What “community” is this? The typical consumer has no idea what any of this is.
          • Fileformat10 hours ago
            I agree with you. But a typical consumer will already be using Chrome, and has no reason to use Firefox.<p>If one of these advanced&#x2F;niche technologies takes off, suddenly they will have a reason to use Firefox.
            • dpark9 hours ago
              For Firefox to win back significant share, they need to do more than embrace fringe scenarios that normal people don’t care about. They need some compelling reason to switch.<p>IE lost the lead to Firefox when IE basically just stopped development and stagnated. Firefox lost to Chrome when Firefox became too bloated and slow. Firefox simply will not win back that market until either Chrome screws up majorly or Firefox delivers some significant value that Google cannot immediately copy.
    • m-schuetz10 hours ago
Nah, google paved the way forward with vital developments like WebGPU and import maps. I stopped using and supporting Firefox because they refused to improve the internet.
      • josefx10 hours ago
        Not everyone is using their browser to mine dogecoin.
        • m-schuetz10 hours ago
I&#x27;m using mine to develop 3D apps, which became way too cumbersome and eventually impossible since Firefox dragged its feet on implementing important stuff.
    • nilamo13 hours ago
      Barred by who? There is no governing body who can do such a thing, currently. As it is, nothing stops any random person or organization from creating any new format.
      • xg1512 hours ago
        And this will land in Chrome how?
    • jeffbee12 hours ago
      Nobody is stopping you from using jpegxl.
      • dpark11 hours ago
        This is a vacuous statement. No one is stopping me from using JPEG XL in the same sense that no one is stopping me from using DIMG10K, a format I just invented. But if I attempt to use either of these in my website today, Chrome will not render them.<p>In a very real sense Google is currently stopping web authors from using JPEG XL.
        • jeffbee11 hours ago
          The web was designed from the start to solve this problem and you can serve alternate formats to user agents which will select the one they support.
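On the server side, this negotiation can key off the request&#x27;s Accept header, which browsers use to advertise the image formats they can decode. A minimal sketch (the function name and format list are illustrative; real headers vary per browser, and this ignores q-values):

```python
def pick_image_format(accept_header: str,
                      available=("jxl", "avif", "webp")) -> str:
    """Return the best available image format the client advertises.

    Browsers send headers like 'image/avif,image/webp,image/apng,*/*'
    on image requests. This sketch does a simple membership check,
    ignores quality (q=) weights, and falls back to JPEG, which every
    browser can decode.
    """
    # Strip parameters like ';q=0.8' and collect the bare media types.
    accepted = {part.split(";")[0].strip()
                for part in accept_header.split(",")}
    for fmt in available:
        if f"image/{fmt}" in accepted:
            return fmt
    return "jpeg"  # universally supported fallback
```

A server using something like this can serve the same JPEG XL asset to browsers that support it and a JPEG to everyone else, which is exactly how sites rolled out WebP and AVIF.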
          • dpark11 hours ago
            Your statement here amounts to “you can serve JPEG XL to other browsers, just not Chrome”.<p>Yeah, that’s what I said.
            • jeffbee10 hours ago
              This is the way of web. Sites don&#x27;t get to dictate what the user agent does. The clue is in the name: user agent.
              • dpark10 hours ago
                Okay. So putting it together…<p>If the user agent does not support JPEG XL, then you cannot use it.<p>“Nobody is stopping you from using jpegxl” except Google.
      • xg1512 hours ago
        Then what is this article about?
        • jeffbee12 hours ago
          It&#x27;s a meta-commentary about the death of critical thinking and the ease with which mindless mobs can be whipped.<p>From the jump, the article commits a logical error, suggesting that Google killed jpegxl because it favors avif, which is &quot;homegrown&quot;. jpegxl, of course, was <i>also</i> written by Google, so this sentence isn&#x27;t even internally consistent.
    • bigbuppo13 hours ago
      Well, they said they would unkill xslt if someone would rewrite and maintain it so that it&#x27;s not the abandonware horrorshow it was.<p>As for JPEG XL, of course they unkilled it. WEBP has been deprecated in favor of JPEG XL.
      • dpark12 hours ago
        I don’t think they actually said that about xslt at all. From what I saw they basically said usage is low enough that they do not care about it.<p>Can you point to somewhere that Google or anyone else indicated that they would support xslt once there’s a secure, supported version?
      • LegionMammal97812 hours ago
        &gt; Well, they said they would unkill xslt if someone would rewrite and maintain it so that it&#x27;s not the abandonware horrorshow it was.<p>Who said this? I was never able to find any support among the browser devs for &quot;keep XSLT with some more secure non-libxslt implementation&quot;.
      • lloydatkinson13 hours ago
        Webp deprecated? According to what?
        • bigbuppo10 hours ago
          It&#x27;s all arbitrary. WEBP is deprecated, just like GIF is deprecated.
        • lern_too_spel12 hours ago
          VP8 is in all major browsers due to WebRTC, and webp uses little more code than the VP8 keyframe decoder, so it also has baseline support and is unlikely to be deprecated any time soon. <a href="https:&#x2F;&#x2F;caniuse.com&#x2F;?search=vp8" rel="nofollow">https:&#x2F;&#x2F;caniuse.com&#x2F;?search=vp8</a><p>Similarly, AVIF uses little more code than the AV1 keyframe decoder, so since every browser supports AV1, every browser also supports AVIF.
      • ryanmcbride13 hours ago
        honestly hate webp so happy about this
        • excusable12 hours ago
          I don&#x27;t know much about webp. I just checked the wiki, and it looks nice. Why do you hate it?
          • majora200712 hours ago
            I don&#x27;t know much about webp other than that you get about 50% better compression vs png&#x2F;jpeg, but it does have hard limits on image dimensions (16383x16383 pixels), so it doesn&#x27;t work well for webtoon-style long-strip formats.<p>Otherwise, I love webp and use it for all my comics&#x2F;manga.
            • objectcode6 minutes ago
              Even nowadays, webp&#x27;s lossless mode seems to be its strongest feature: it produces files substantially smaller than even advanced png encoders. Since comics should use png rather than jpeg anyway, webp is likely a real upgrade for them, compatibility aside.<p>For photographs, jpeg encoding has been heavily optimized without sacrificing compatibility, and also in a less compatible variant (old viewers still decode the file without erroring out, but render the colors wrong); the jpegli encoder in the JPEG XL repo does both.
          • ryanmcbride9 hours ago
            It was mostly about compatibility, but it looks like Photoshop supports it now, so I guess I can officially say I don&#x27;t really care one way or the other.