First I saw that it's written in Perl. Then I realized that the last release was 11 years ago and that the repository domains are hardcoded in the one-file script.
Does it still work, though?<p>Where else would you put the repository domains?
Are you asking if this tool can find something on Ubuntu 26.04 when the URLs it has were hardcoded 11 years ago?
The URL for searching Ubuntu packages, for example, hasn't changed to my knowledge. Are you assuming it only looks for packages in releases that were current at the time?
The site it hardcodes is <a href="https://packages.ubuntu.com" rel="nofollow">https://packages.ubuntu.com</a>, so yes I would expect it to work fine
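You can check that from the shell. This is just a sketch: the query parameters (keywords, searchon, suite, section) are my assumptions based on the site's current search form, not something taken from the script itself.

```shell
# Hypothetical helper: query packages.ubuntu.com's search page by
# package name. The parameter names are assumptions based on the
# site's search form and may change without notice.
ubuntu_search() {
  curl -sL "https://packages.ubuntu.com/search?keywords=$1&searchon=names&suite=all&section=all"
}

# Usage (hits the network): ubuntu_search ripgrep | grep -i ripgrep
```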
In about a hundred or so separate microservices, of course…
The last commit was four years ago.
Who has?<p>Nixpkgs has. :)<p>Nowadays the only search like this I need to run is<p><pre><code> nix-locate -r 'bin/foo$'
</code></pre>
It would be nice to have a CLI alternative to Repology, though.
Another great tool, built on top of nix-locate, is comma. So for any program foo, if you have foo installed, you can run it like this:<p><pre><code> foo
</code></pre>
And if you don't have it installed, you can run it (without installing!) like this:<p><pre><code> , foo
</code></pre>
And if multiple packages provide a program named bin/foo, then comma lets you interactively choose the one you want, and it remembers your choice so you don't have to specify it again unless you choose to via the -d flag.
I've been using <a href="https://search.nixos.org/" rel="nofollow">https://search.nixos.org/</a> this whole time to find packages. Thanks for dropping this!
....<p><pre><code> function repology() {
   curl -sL --user-agent 'hackernews' \
     "https://repology.org/api/v1/project/$1"
 }</code></pre>
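A small usage sketch on top of that helper, assuming Repology's documented JSON shape (a list of objects with "repo" and "version" fields) and jq on PATH; the function name here is my own:

```shell
# Sketch: list which repos carry a project, and at what version.
# repology() is restated here so the snippet is self-contained; the
# JSON shape assumed is Repology's /api/v1/project/<name> response.
repology() {
  curl -sL --user-agent 'hackernews' "https://repology.org/api/v1/project/$1"
}

repology_versions() {
  repology "$1" | jq -r '.[] | "\(.repo)\t\(.version)"' | sort -u
}

# Usage (hits the network): repology_versions jq
```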
Latest release: May 19, 2015<p>Abandoned, but forkable (since FOSS), and a decent idea.<p>Probably nowadays this gets done in Node, parsing the package search websites. Preferably, this would be done via an API though.
> Probably nowadays this gets done in Node, parsing the package search websites. Preferably, this would be done via an API though.<p>Repology provides an API but it's unstable: <a href="https://repology.org/api/v1" rel="nofollow">https://repology.org/api/v1</a>
Yes, agreed. The idea and concept are cool! IMO it's worth keeping an eye on and playing with.<p>The first thought that came to my mind was a security use case: getting it to the point where it could handle SBOM tracking, particularly given all the recent package vulnerabilities.
I've been working on a GUI task manager for Linux, and I've been wanting to put "Funding" or ownership metadata next to the process or process group in the view, so people can know where the upstream code lives, how to support the project, and what organizational unit "owns" that process.<p>So I actually vibe coded a script that does this against a SQLite DB I've been considering bundling with my task manager so it can know this stuff on the fly.<p>But yeah, this is a key missing component in Linux user space. Windows lets you encode organizational info into an exe, but Linux binaries don't really have that.
Shame Homebrew for Linux is getting no love from any of the tools / lists mentioned here.<p>Since switching to that and flatpak my distro choice is "what sticks closest to the upstream of [my preferred DE]"
There is also <a href="https://pkgs.org" rel="nofollow">https://pkgs.org</a>.
Oh nice, I just implemented something like this for installing from any package manager uv-style <a href="https://abxpkg.archivebox.io/" rel="nofollow">https://abxpkg.archivebox.io/</a>, but I haven't added a "search" command yet, I should add that!
This would pair nicely with distrobox or Bedrock Linux:)
Related:<p>A list of Linux package search databases:<p><a href="https://github.com/sxiii/awesome-package-search" rel="nofollow">https://github.com/sxiii/awesome-package-search</a>
"Just gimme the thing, I don't care where from" is a great way to get supply chain vulns
This is exactly the kind of boring CLI tool that earns its keep. Package names and availability differ just enough across distros to waste time in tiny annoying increments.
This kind of busy work should suit an AI agent:<p>Go and find me all the repolists and package/software metadata for any distro and OS ever released. Write the results to a local SQLite. Incrementally update, but don't hammer the sources to death. Provide a web UI and CLI.
Or, you know, you could do that with a ~100-line script. You don't have to use LLMs for everything, especially when you're not dealing with freeform text at all; use data types and data structures, we've created those concepts for a reason.
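To make that concrete, here's a minimal sketch of such a script, assuming Repology's public /api/v1/project/&lt;name&gt; endpoint as the metadata source (the table layout and helper names are mine, not from any existing tool):

```python
# Sketch: pull per-repo package metadata into a local SQLite DB.
# Assumes Repology's /api/v1/project/<name> JSON shape: a list of
# objects with "repo" and "version" fields.
import json
import sqlite3
import urllib.request


def fetch_project(name: str) -> list[dict]:
    """Fetch one project's package list from Repology (network call)."""
    url = f"https://repology.org/api/v1/project/{name}"
    req = urllib.request.Request(url, headers={"User-Agent": "pkg-sketch"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def store(db: sqlite3.Connection, project: str, entries: list[dict]) -> None:
    """Upsert (project, repo) rows so repeated runs update in place."""
    db.execute(
        """CREATE TABLE IF NOT EXISTS packages (
               project TEXT, repo TEXT, version TEXT,
               PRIMARY KEY (project, repo))"""
    )
    for e in entries:
        db.execute(
            "INSERT OR REPLACE INTO packages VALUES (?, ?, ?)",
            (project, e.get("repo"), e.get("version")),
        )
    db.commit()

# Usage (hits the network):
#   db = sqlite3.connect("packages.db")
#   store(db, "jq", fetch_project("jq"))
```

The INSERT OR REPLACE keyed on (project, repo) is what makes re-runs incremental rather than duplicating rows.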
Sure. But then I would have to use my brain to actually write code. I thought we were past that already. Also, if it's an agent that keeps scouring the net autonomously for more distros, then I wouldn't have to update the sources manually in my 100-line script.