15 comments

  • Animats 20 minutes ago
    I've wanted something like this for level-of-detail processing.

    This is a render from Second Life in which all the texture images were shrunk down to one pixel, the lowest possible level of detail, producing a monocolor image. For distant objects, or for objects where the texture is still coming in from the net, there needs to be some default color. The existing system used grey for everything. I tried using an average of all the pixels and, as the original poster points out, the result looks murky. [1] This new approach has real promise for big-world rendering.

    [1] https://media.invisioncic.com/Mseclife/monthly_2023_05/monocolor1.jpg.002a5e610b7ac93e282a7e916dc46c38.jpg
  • iamcalledrob 7 hours ago
    As a designer, I've built variants of this several times throughout my career.

    The author's approach is really good, and he hits on pretty much all the problems that arise from more naive approaches: in particular, using a perceptual colorspace, and the fact that the most representative colour may not be the one that appears most often.

    However, image processing makes my neck tingle because there are a lot of footguns. PNG bombs, anyone? I feel like any library needs to be either defensively programmed or explicit in its documentation.

    The README says "Finding main colors of a reasonably sized image takes about 100ms" -- that's way too slow. I bet the operation takes a few hundred MB of RAM too.

    For anyone who uses this: scale down your images substantially first, or only sample every N pixels, and avoid loading the whole thing into memory if possible, unless this is handled serially by a job queue of some sort.

    You can run this kind of algorithm much faster, and with far less RAM, on a small thumbnail than on a large input image. That makes performance concerns less of an issue, and prevents a whole class of OOM DoS vulnerabilities!

    As a defensive step, I'd add something like this https://github.com/iamcalledrob/saferimg/blob/master/asset/pngbomb.png to your test suite and see what happens.
    • jaen 6 hours ago
      I really wish people would read the article; the library does exactly this:

      > Okmain downsamples the image by a power of two until the total number of pixels is below 250,000.
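      A minimal sketch of what such a power-of-two cap could look like (illustrative only; `downsample_factor` is a hypothetical name, and okmain's actual implementation is in Rust):

```python
def downsample_factor(width: int, height: int, max_pixels: int = 250_000) -> int:
    """Smallest power-of-two factor that brings width*height under max_pixels."""
    factor = 1
    while (width // factor) * (height // factor) > max_pixels:
        factor *= 2
    return factor

# A 12-megapixel photo needs factor 8: (4000 // 8) * (3000 // 8) = 187,500 pixels.
```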
      • iamcalledrob 6 hours ago
        Somehow I missed that, oops. I see that the library samples a maximum of 250K pixels from the input buffer (I jumped over to the project README).

        That being said, this is sampling the fixed-size input buffer for the purposes of determining the right colour. You still have to load the bitmap into memory, with all the associated footguns that arise there. The library just isn't making it worse :) I suppose you could memmap it.

        Makes me wonder if the sub-sampling is actually a bit of a red herring, as ideally you'd want to be operating on a small input buffer anyway. Or some sort of interface on top of the raw pixel data, so you can load what's needed on demand.
      • vasco 6 hours ago
        That's 500x500. I'm sure you can get good results at 32x32 or 64x64, but then part of your color choice is also being made by the downsampling algorithm. I wonder if you could get away with just downsampling to 1x1 and using that as the main color.
        • PaulHoule 6 hours ago
          That last one is talked about in the article -- it sucks!

          I think if you were going to "downsample" for the purpose of creating a color set, you could just scan through the picture, randomly select 10% (or whatever) of the pixels, and apply k-means to that, with no averaging, which costs resources and makes your colors muddy.
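          A rough sketch of that idea, with a hypothetical name (`sample_and_cluster`) and a deliberately naive pure-Python k-means; a real implementation would vectorise and work in a perceptual colour space:

```python
import random

def sample_and_cluster(pixels, fraction=0.1, k=4, iters=10, seed=0):
    """Randomly keep a fraction of pixels (no averaging, so no muddying),
    then run a naive k-means over the kept (r, g, b) tuples."""
    rng = random.Random(seed)
    kept = rng.sample(pixels, max(k, int(len(pixels) * fraction)))
    centers = rng.sample(kept, k)
    for _ in range(iters):
        # Assign every kept pixel to its nearest centre (squared RGB distance).
        clusters = [[] for _ in range(k)]
        for p in kept:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Move each centre to the mean of its cluster (keep it if the cluster is empty).
        centers = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers
```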
          • dgroshev 5 hours ago
            Random sampling makes a lot of intuitive sense, but unfortunately doesn't work well. I just answered over at Lobsters: https://lobste.rs/s/t43mh5/okmain_you_have_image_you_want_colour#c_he3k1t

            I should probably add this nuance to the post itself.

            Edit: added a footnote
            • SigmundurM 3 hours ago
              How about sampling more from the more "prominent" areas of the image [0] and less from the less "prominent" areas?

              [0]: https://dgroshev.com/blog/okmain/img/distance_mask.png?hash=a921b80a
    • chrisweekly 4 hours ago
      Your GH link returned 404.

      EDIT: then (when the URL was refreshed) it triggered a redirect loop culminating in a different error ("problem occurred repeatedly")...

      Ah, ofc, your intent was to demonstrate a problematic asset.
      • TheJoeMan 2 hours ago
        Realizing I intentionally opened a png bomb made me chuckle, like what did I think was going to happen?
    • latexr 7 hours ago
      > I've built variants of this several times throughout my career.

      Got any to share? A self-contained command-line tool to get a good palette from an image is something I’d have a use for.
      • dylan604 5 hours ago
        Fred's dominantcolor script for ImageMagick might work for you:

        https://www.fmwconcepts.com/imagemagick/dominantcolor/index.php
      • PaulHoule 6 hours ago
        Back in the late 1980s, people thought about color quantization a lot, because many computers of the time had 16 or 256 colors you could choose out of a larger palette, and if you chose well you could do pretty well with photographic images.
    • dgroshev 4 hours ago
      Author here: the library just accepts RGB8 bitmaps, probably coming either from Rust's image crate [1] or Python's Pillow [2], which are both mature and widely used. Dealing with codecs is way out of scope.

      As for loading into memory at once: I suppose I could integrate with something like libvips and stream strips out of the decoded image without holding the entire bitmap, but that'd require substantially more glue and complexity. The current approach works fine for extracting dominant colours once to save in a database.

      You're right that pre-resizing the images makes everything faster, but keep in mind that k-means still requires a pretty nontrivial amount of computation.

      [1]: https://crates.io/crates/image
      [2]: https://pypi.org/project/pillow/
      • hedgehog 2 hours ago
        If you ever did want to wrap this in code processing untrusted images, there's a library called "glycin" designed for that purpose (it's used by Loupe, the default Gnome image viewer).

        https://gnome.pages.gitlab.gnome.org/glycin/
  • llimllib 6 hours ago
    OKPalette by David Aerne is my favorite tool for this; it chooses points sensibly but then also lets you drag them around or change the number of colors you want: https://okpalette.color.pizza/
  • kristjan 3 hours ago
    I've been doing something similar! I've got a Home Assistant dashboard on my desk and wanted the media controls to match the current album art. I need three colors: background, foreground, and something vibrant to set my desk lamp to [1].

    The SpotifyPlus HA integration [2] was near at hand and does a reasonably good job clustering with a version of ColorThief [3] under the hood. It has the same two problems you started with, though: muddying when there's lots of gradation, even within a cluster; and no semantic understanding when the cover has something resembling a frame. A bit swapped from okmain's goal, but I can invert with the best of them and will give it a shot next time I fiddle. Thanks for posting!

    [1] https://gist.github.com/kristjan/b305b83b0eb4455ee8455be108a0f703
    [2] https://github.com/thlucas1/homeassistantcomponent_spotifyplus
    [3] https://github.com/thlucas1/SpotifyWebApiPython/blob/master/spotifywebapipython/vibrant/colorthieffast.py
  • slazaro 3 hours ago
    It reminds me a bit of this post from the Facebook engineering blog (2015) [1], where they discuss embedding a very tiny preview of images into the HTML itself so they show immediately while the page loads, especially on very slow connections.

    [1] https://engineering.fb.com/2015/08/06/android/the-technology-behind-preview-photos/
  • bee_rider 6 hours ago
    I’m surprised the baseline to compare against is shrinking the image to one pixel. That seems extremely hacky and very dependent on what your image editor happens to do (and also quite wasteful… the rescaling operation must be doing a lot of extra pointless work keeping track of the positions of pixels that are all ultimately going to be collapsed to one point).

    So, making a library that provides an alternative is a great service to the world, haha.

    An additional feature that might be nice: the most prominent colors seem like they might be a bad pick in some cases, if you want the important part of the image to stand out. Maybe a color that is close (in the color space) to the edges of your image, but far away (in the color space) from the center of your image, could be interesting?
    • mungoman2 5 hours ago
      Tbh shrinking the image is probably the cheapest operation you can do that still lets every pixel influence the result. It’s just the average of all pixels, after suitable color conversion.
      • bombcar 5 hours ago
        It might work decently well, but I wonder if it makes it "visually" match - sometimes the perfect average is not what our eyes see as the color.
      • LoganDark 4 hours ago
        The author of the article seems to assume there is no color conversion (e.g., the resizing of the image is done with sRGB-encoded values rather than converting them to linear first). Which is a stupid way to do it, but I'd believe most handwritten routines are just that.
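        The difference is easy to demonstrate with a sketch (illustrative helper names, not anyone's actual resize code): averaging sRGB-encoded bytes directly gives a darker result than converting to linear light, averaging there, and converting back.

```python
def srgb_to_linear(c: float) -> float:
    """sRGB-encoded value in [0, 1] -> linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    """Linear light in [0, 1] -> sRGB-encoded value."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def average_8bit(values, linearize=True):
    """Average 8-bit channel values, optionally doing the math in linear light."""
    if not linearize:  # the naive way: average the encoded bytes directly
        return sum(values) / len(values)
    mean = sum(srgb_to_linear(v / 255) for v in values) / len(values)
    return linear_to_srgb(mean) * 255

# Average of pure black and pure white:
#   naive sRGB average   -> 127.5 (darker than the true mid-blend)
#   linear-light average -> ~188
```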
  • lemonad 6 hours ago
    This is nice! I looked into this quite a lot some years back when I was trying to summarize IKEA catalogs using color, and eventually wrote an R package, in case you want an alternative to e.g. k-means: https://github.com/lemonad/colorhull (download https://github.com/lemonad/ikea-colors-through-time/blob/master/Report/report.html for more details on how it works)
  • eloisius 4 hours ago
    Really interesting read. Thanks for sharing. Is the performance bottleneck around the resizing to 250k pixels? Would it still work if you sampled 15,625 4x4 patches evenly around the image to gather those pixels instead of resizing?
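    A sketch of that patch-sampling idea (hypothetical `patch_sample` helper, not okmain's API): pick an evenly spaced grid of patch origins and copy out the 4x4 blocks, gathering 250,000 real pixels without any resampling.

```python
def patch_sample(pixels, width, height, patch=4, per_side=125):
    """Sample per_side x per_side patches of patch x patch pixels, evenly spaced.
    `pixels` is a flat, row-major list of (r, g, b) tuples; width and height
    must be at least `patch`. 125*125 patches of 4x4 = 250,000 sampled pixels."""
    step = max(per_side - 1, 1)
    out = []
    for gy in range(per_side):
        y0 = gy * (height - patch) // step  # spread patch origins over the image
        for gx in range(per_side):
            x0 = gx * (width - patch) // step
            for dy in range(patch):
                row = (y0 + dy) * width
                out.extend(pixels[row + x0 : row + x0 + patch])
    return out
```

Note that on small images the patches may overlap, so a pixel can be sampled more than once; for the stated 500x500-equivalent budget they tile the image almost exactly.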
  • bawolff 4 hours ago
    In the past, when I tried just using ImageMagick's built-in -kmeans for this, I found that choosing the second most prominent colour often looked really good. The primary was too much of the same thing.
  • latexr 8 hours ago
    I’d be interested in trying this out as a command-line tool. It would be useful on its own and the fastest way to evaluate results.
    • jcynix 5 hours ago
      ImageMagick is a wonderful command line tool, IMO. You could use it to extract various information, e.g. the 5 most-used colors of an image, as in:

          convert $IMG -colors 5 -depth 8 -format "%c" histogram:info: | sort -nr

      If needed, you can easily remove colored borders first (the trim subcommand with the fuzz option) or sample only xy% from the image's center, or where the main subject might be.
    • woodrowbarlow 7 hours ago
      looks like it's a rust lib with a python wrapper. making a CLI tool should be just a few lines of code.
      • latexr 6 hours ago
        Yeah, but then I’d have to be working with Python (which I don’t enjoy) and be pulling in dependencies (which I avoid) to have a custom system with moving parts (Python interpreter, library, script) (which I don’t want).

        A Rust CLI would make a lot of sense here. Single binary.
        • sorenjan 2 hours ago
              > uvx --with pillow --with okmain python -c "from PIL import Image; import okmain; print(okmain.colors(Image.open('bluemarble.jpg')))"
              [RGB(r=79, g=87, b=120), RGB(r=27, g=33, b=66), RGB(r=152, g=155, b=175), RGB(r=0, g=0, b=0)]

          It would make sense to add an entrypoint in the pyproject.toml so you can use uvx okmain directly.
        • blipvert 6 hours ago
          This sounds like a job for <ta-ta-ta-taaaa> contrib-directory-man!
          • latexr 4 hours ago
            So your solution to “I’d be interested in having a small ready-made tool to try this out” is “spend a bunch of time getting acquainted with the code base of something you may not even like, create a separate tool, and submit it without even knowing if it’ll be accepted”?

            That’s like someone looking at a display of ice cream in a supermarket saying “I’d be interested in trying a few samples before committing” and getting the reply “here are the recipes for all the ice creams, you can try to make them at home and taste them for yourself”.

            I know I could theoretically spend my weekend working on a CLI tool for this, or on making ice cream. Every developer knows that; there’s no reason to point it out except snark. But you know who might do it faster and better, and perhaps even enjoy it? The author.

            Look, the maintainer owes me nothing. I owe them nothing. This project was shared to HN by the author, and I’m making a simple, sensible suggestion for something I would like to see and believe would be an improvement overall, and I explained why. The author is free to agree or disagree, reply or ignore. Every one of those options is fine.
            • cvwright 2 hours ago
              You’re not wrong, but you probably could have built the thing with Claude in the time it took you to write this comment.
    • dgroshev 4 hours ago
      Good idea, I'll add a CLI tool over the weekend.
  • vova_hn2 3 hours ago
    > simple 1x1 resize

    How is it "simple"? There are like a ton of different downscaling algorithms, and each of them might produce a different result.

    Cool article otherwise.
    • gzread 3 hours ago
      At 1x1 I don't expect any difference. It would be the average of all the pixels in the image, as long as you don't weight them unevenly (which you might choose to do when picking a main color, but which no downscaling algorithm would do); the only real difference is whether you remembered to gamma-correct.
      • vova_hn2 3 hours ago
        Nearest-neighbor interpolation may pick just one pixel closest to the center.
  • GauntletWizard 2 hours ago
    I really like this approach. I worked on this problem (create a nice background for an image) for a couple of weeks many years ago while organizing my desktop wallpaper collection, and never came up with a good answer. Unfortunately, I think it's been "solved" in the TikTok era: an enlarged and blurred version of the image is used to fill the background space.

    The blurred mirror is inoffensive to almost everyone, and yet it always strikes me as gauche. Easy to ignore, and yet I feel it adds a lot of useless visual noise.
  • airstrike 4 hours ago
    See also https://github.com/material-foundation/material-color-utilities/blob/main/dev_guide%2Fextracting_colors.md
  • useftmly 4 hours ago
    [flagged]
    • dgroshev 4 hours ago
      Picking the best colour is a difficult problem; I don't think there's a good general answer short of ML magic. I assembled an adversarial collection of images:

      https://github.com/si14/okmain/blob/main/test_images/IMG_1347.jpeg
      https://github.com/si14/okmain/blob/main/test_images/pendant_unicorn.jpg
      https://github.com/si14/okmain/blob/main/test_images/pendant_virgin.jpg
      https://github.com/si14/okmain/blob/main/test_images/red_moon.jpg
      https://github.com/si14/okmain/blob/main/test_images/supremus_55.jpg

      For every heuristic, I can think of an image that breaks it. On the other hand, I just wanted to do better than the 1x1 trick, and I think the library clears that bar.