Days since last ffmpeg CLI wrapper: 0<p>It's incredible what lengths people go to to avoid memorizing basic ffmpeg usage. It's really not that hard, and the (F.) manual explains the basic concepts fairly well.<p>Now, granted, ffmpeg's defaults (reencoding by default and only keeping one stream of each type unless otherwise specified) aren't great, which can create some footguns, but as long as you remember to pass `-c copy` by default you should be fine.<p>Also, <i>hiding</i> those footguns is likely to cause more harm than it prevents. Case in point: "ff convert video.mkv to mp4" (an extremely common usecase) maps to `ffmpeg -i video.mkv -y video.mp4` here, which does a full reencode (losing quality and wasting time) for what can usually just be a simple remux.<p>Similarly, "ff extract audio from video.mp4" will unconditionally reencode the audio to mp3, again losing quality. The quality settings are also hardcoded and hidden from the user.<p>I can sympathize with ffmpeg syntax looking complicated at first glance, but the main reason for this is just that multimedia is <i>really complicated</i> and that some of this complexity is <i>necessary</i> in order to not make stupid mistakes that lose quality or waste CPU resources. I truly believe that these ffmpeg wrappers that try to make it seem overly simple (at least when it's <i>this</i> simple, i.e. not even exposing quality settings or differentiating between reencoding and remuxing) are more hurtful than helpful. Not only can they give worse results, but by hiding this complexity from users they also give them the wrong ideas about how multimedia works. "Abstractions" like this are exactly how beliefs like "resolution and quality are the same thing" come to be. I believe the way to go should be <i>educating</i> users about video formats and proper ffmpeg usage (e.g. with good cheat sheets), not hiding complexity that really should not be hidden.<p>Edit: Reading through my comment again, I have to apologize for the slightly facetious opening statement, even if I qualify it later on. The fact that so many ffmpeg wrappers exist <i>is</i> saying something about its apparent difficulty, but as I argue above, a) there are reasons for this (namely, multimedia itself just being complicated), and b) I believe there are good and bad ways to "fix" this, with oversimplified wrappers being more on the "bad" side.
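To make the cheat-sheet point concrete, here is roughly what the lossless equivalents of those two examples look like (a sketch, assuming the existing streams are already compatible with the target container, e.g. H.264/AAC going into mp4):<p><pre><code>  # remux mkv -> mp4 without reencoding; errors out if the codecs don't fit the container
  ffmpeg -i video.mkv -c copy video.mp4
  # extract the audio track as-is instead of reencoding it to mp3
  ffmpeg -i video.mp4 -vn -c:a copy audio.m4a
</code></pre>
The extension of the extracted audio has to match whatever codec is actually in the container (.m4a for AAC, .opus for Opus, etc.), which is exactly the kind of thing a one-line cheat-sheet entry can teach.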
> It's really not that hard,<p>I've learned not to say this. Different things are easy/hard for each of us.<p>Reminds me of a discussion where someone argued, "why don't all the poor/homeless people just go get good jobs?"<p>Edit: I know your comment was meant to inspire/motivate us to try harder. Maybe it's easier than it appears.
Empathy is really not that hard.
I would agree with this statement before LLMs. Reading manuals can take time, be messy, and sometimes be hard to understand.<p>Now, I can simply ask any LLM to write the command, and understand any following issues or questions.<p>For example, my OS records videos as WEBM. Using the default settings for transforming to MP4 usually fails due to a resolution ratio issue. I would be deadlocked using this library.<p>It really isn't that hard anymore.
ChatGPT is pretty good at generating commands
Yes, I use ffmpeg about once a year; in about 350 years I really ought to have all the syntax figured out.
> It's really not that hard,<p>if you are doing it often that's true. But for people like me who do it once every month or two it really is hard to memorize, especially if it's not exactly the same task.<p>What I would love would be an interactive script that asked me what I was trying to do and constructed a command line for me while explaining what it would do and the meaning of each argument. And of course it should favour commands that do not re-encode where possible.
I also use ffmpeg once a month. My new plan: build my own scripts like the ones in the OP. But self-built, only for that operation or three that I do.
I swear I want this as a general tool for all command-line tools.<p>Start the tool, and just list all of the options in order of usage popularity to toggle on as desired, with a brief explanation, and a field to paste in arguments like filenames or values when needed. If an option is commonly used with another (or requires it), provide those hints (or automatically add the necessary values). If a value itself has structure (e.g. is itself a shell command), drill down recursively. Ensure that quotes and spaces and special characters always get escaped correctly.<p>In other words, a general-purpose command-line builder. And while we're at it, be able to save particular "templates" for fast re-use, identifying which values should be editable in the future.<p>I can't be the first person to think of this, but I've never come across anything like it and don't understand why not. It doesn't require AI or anything. Maybe it's the difficulty involved in creating the metadata for each tool, since man pages aren't machine-readable. But maybe that's where AI can help -- not in the tool itself, but to create the initial database of tool options, that can then be maintained by hand?<p>(Navi [1] does the templating part, but not the "interactive builder" part.)<p>[1] <a href="https://github.com/denisidoro/navi" rel="nofollow">https://github.com/denisidoro/navi</a>
I’m trying to understand the “In order of usage popularity” thing — this implies telemetry in CLIs, doesn’t it? Wouldn’t the order of options change/fluctuate over time?<p>Or if no telemetry but based on local usage, it would promote/reinforce the options you already can recall and do use, hiding the ones you can’t/don’t?
You could make it opt-in telemetry in the tool itself, that would probably be good enough.<p>But also, you could probably be just as accurate by asking an LLM to order the options by popularity based on their best guess based on all the tutorials they've trained on.<p>Or just scrape Stack Overflow for every instance of a command-line invocation for each tool and count how many times each option is used.<p>Ranking options by usage is the least complicated part of this, I think. (And it only matters for the popular options anyways -- below a certain threshold they can just be alphabetical.)
> But also, you could probably be just as accurate by asking an LLM to order the options by popularity based on their best guess based on all the tutorials they've trained on.<p>> Or just scrape Stack Overflow for every instance of a command-line invocation for each tool and count how many times each option is used.<p>Even trusting the developer's intuition is better than nothing, at least if you make sure the developer is prompted to <i>think</i> about it. (For major projects, devs might also be aware that certain features are associated with a large fraction of issue reports, for example.)
Just do a best-guess list. Or do a survey. Or just scrape the most common features used across Github repos.
Indeed, why not have a --tui option and some basic menu? Even simplified scripting with a reasonable API would be better.<p>I find myself bothering exactly zero times to memorise this obnoxiously long command line. Claude fills it in, and I can explore features better. What's not to like? That I'm getting dumber for not memorising pages of CLI args?<p>Love the project, but as with every Swiss Army knife this conversation is a thing and relevant. We had a similar one regarding JQ syntax, and I'm truly convinced JQ is a wonderful and useful tool. But I'm not gonna bother learning more DSLs…
And they change quite frequently, from our POV.<p>That said, some time ago I started writing scripts whenever I use ffmpeg. At least then I have a non-zero starting point next time.
>It's really not that hard<p>It is only a couple of thousand options[0], just memorize them! It's super simple, barely an inconvenience!<p>[0]<a href="https://gist.github.com/tayvano/6e2d456a9897f55025e25035478a3a50" rel="nofollow">https://gist.github.com/tayvano/6e2d456a9897f55025e25035478a...</a>
> It's incredible what lengths people go to to avoid memorizing basic ffmpeg usage. It's really not that hard, and the (F.) manual explains the basic concepts fairly well.<p>I'm usually the one telling everyone else that various Python packaging ecosystem concepts (and possibly some other things) are "really not that hard". Many FFMpeg command lines I've encountered come across to me like examples of their own esoteric programming language.<p>> Case in point: "ff convert video.mkv to mp4" (an extremely common usecase) maps to `ffmpeg -i video.mkv -y video.mp4` here, which does a full reencode (losing quality and wasting time) for what can usually just be a simple remux.... Similarly, "ff extract audio from video.mp4" will unconditionally reencode the audio to mp3, again losing quality.<p>That sounds like a bug report / feature request rather than a problem with the approach.<p>> The quality settings are also hardcoded and hidden from the user.<p>This is intentional: the tool picks a sensible default so that users don't have to understand what quality settings are available.<p>> and that some of this complexity is necessary in order to not make stupid mistakes<p>For example, the case of avoiding re-encodes to switch between container formats could be handled by just maintaining a mapping.<p>In fact, I've felt the lack of that mapping recently when I wanted to extract audio from some videos and apply a thumbnail to them, because different audio formats have different rules for how that works (or you might be forced to use some particular container format, and have to research which one is appropriate).
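As a rough illustration of how those rules differ (a sketch with hypothetical filenames; the first command assumes the video's audio track is AAC so it can be copied straight into an .m4a):<p><pre><code>  # mp4/m4a: the cover is an extra video stream marked as attached_pic
  ffmpeg -i video.mp4 -i cover.jpg -map 0:a -map 1 -c copy -disposition:v:0 attached_pic audio.m4a
  # mp3: the cover goes into an ID3 tag instead
  ffmpeg -i audio.mp3 -i cover.jpg -map 0:a -map 1 -c copy -id3v2_version 3 -metadata:s:v comment="Cover (front)" audio_with_cover.mp3
</code></pre>
Same intent, two different incantations depending on the container, which is exactly the kind of mapping I wish existed somewhere.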
> It's incredible what lengths people go to to avoid memorizing basic ffmpeg usage. It's really not that hard<p>It's not hard - just not a good use of our time. For 99% of HN users, ffmpeg is not a vital tool.<p>I have to use it less than twice a year. Now I just go and get an LLM to tell me the command I need.<p>And BTW, I spend a lot of time memorizing things (using spaced repetition). So I'm not averse to memorizing. ffmpeg simply doesn't warrant a place in my head.
If you only use it from time to time, it would be very challenging to remember the million different options ffmpeg has.
“It's really not that hard”<p>I’m going to guess your job does not involve much UX design?
I'm not saying it couldn't be better (and I even gave examples), my point is that the drawbacks of such a wrapper outweigh the benefits, at least when it's such an oversimplified one. I've said in other replies how I'd be very interested in e.g. an alternative libav* frontend with better defaults and more consistent argument syntax, but I don't think that this invalidates my criticism of the linked project.
[dead]
You're getting a lot of flak due to how you started off your comment, but I mostly agree with you.<p>In my opinion there are two kinds of users:
1. Users who use FFmpeg regularly enough to know/understand the parameters.
2. Users who only use FFmpeg once in a while to do something specific.<p>This wrapper is superfluous for users in group number 1.
But group number 2 does not really get much out of it either, for the reasons you've mentioned.<p>As a member of group 2, I usually want to do something very specific (e.g. remove an audio track, convert only the video, remux to a different container, etc.).
A simple English wrapper does not help me here because it is not powerful enough; the defaults are usually not what I want.
What I need is a tool that will take a more detailed English statement of what I want to achieve and spit out the FFmpeg command with explanations for what each parameter does and how it achieves my goal.
We have this today: AI; and it mostly works (once you've gone through several iterations of it hallucinating options that do not exist...).
Totally disagree, I have a wrapper I wrote myself for converting things, often for sharing the odd little clip online or such. It produces a <i>complex</i> command that is not easy to just type out, that does multiple things to maximise compatibility like<p>- making sure pixel are square while resizing if the video resolution is too large<p><pre><code> ("scale=w=if(gt(iw*sar\\,ih)\\,min(ceil(iw*sar/2)*2\\,{})\\,ceil(iw*sar*min(ih\\,{})/ih/2)*2):h=if(gt(ih\\,iw*sar)\\,min(ceil(ih/2)*2\\,{})\\,ceil(ih*min(iw*sar\\,{})/iw/sar/2)\*2):out_range=limited,zscale,setsar=1")
</code></pre>
- dealing with some HDR or high gamut thing I can't really remember that can result from screen recording on macos using some method I was using at some point<p>- setting this one tag on hevc files that macos needs for them to be recognised as hevc but isn't set by default<p>- calculating the target bitrate if I need a specific filesize and verifying the encode actually hit that size and retrying if not (doesn't always work first time with certain hardware encoders even if they have a target or max bitrate parameter)<p>- dealing with 2-pass encoding which is fiddly and requires two separate commands and the parameters are codec specific<p>- correctly activating hardware encoding for various codecs<p>- etc<p>And this is just for the basic task of "make this into a simple mp4"
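To give a flavour of two of those items in isolation (generic sketches, not my actual script; the filenames and the 4000k bitrate are made up, and I assume the tag in question is hvc1):<p><pre><code>  # the hvc1 tag QuickTime wants before it will recognise HEVC in an mp4
  ffmpeg -i in.mov -c:v libx265 -tag:v hvc1 -c:a aac out.mp4
  # classic two-pass encode towards a bitrate derived from a target filesize
  ffmpeg -y -i in.mov -c:v libx264 -b:v 4000k -pass 1 -an -f null /dev/null
  ffmpeg -i in.mov -c:v libx264 -b:v 4000k -pass 2 -c:a aac out.mp4
</code></pre>
And that is before hardware encoders and the HDR/gamut handling enter the picture.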
Yes, absolutely. Multimedia is complicated.<p>But my issue with the linked tool is that it does <i>none</i> of the things you mentioned. All it does is make already very easy things even easier. Is it really that much harder to remember `ffmpeg -i inputfile outputfile.ext` than `ff convert inputfile to ext`?<p>I've explained this in other replies here but I am neither saying that ffmpeg wrappers are automatically bad, nor that ffmpeg cannot be complicated. I am only saying that <i>this specific tool</i> does not really help much.
“It's really not that hard” - well, a lot of people have better things to do than remember parameters to commands they barely use.
Some people just want to use an intuitive tool with better QoL, even if it leads to compromises, to do a job swiftly without going over documentation/learning a ton of new things. Not everything has to be an educational experience. ffmpeg exists in its original form like you prefer, but some folks want to use lossless cut. Nothing wrong with that IMO.<p>Personally I think it’s great that it’s such a universally useful tool that it has been deployed in so many different variations.
> Some people just want to use a tool to do a job swiftly. Not everything has to be educational.<p>> some folks want to use lossless cut<p>In that case I would encourage you to ruminate on what the following in the post you're replying to means and what the implications are:<p>> "ff convert video.mkv to mp4" (an extremely common usecase) maps to `ffmpeg -i video.mkv -y video.mp4` here, which does a full reencode (losing quality and wasting time) for what can usually just be a simple remux<p>Depending on the size of the video, the time it would take you to "do the job swiftly" (i.e. not caring about how the tools you are using actually work) might be more than just reading the ffmpeg manual, or at the very least searching for some command examples.
> > some folks want to use lossless cut
> In that case I would encourage you to ruminate on what the following in the post you're replying to means and what the implications are:<p>You may have misunderstood the comment: "lossless cut" is the name of an ffmpeg GUI front end. They're not discussing which exact command line gives lossless results.
The thing is that when a video is being re-encoded, so long as I'm not trying to play games on my computer at the same time, I'm free to go do something else. It does not command any of my attention while its working, whereas sitting and reading the man pages commands my attention absolutely.
As the other person said (and this is my mistake for not capitalizing), Lossless Cut is a popular GUI wrapper for ffmpeg with a (somewhat) intuitive interface. Someone is going to be able to pick it up and use it a lot faster than raw ffmpeg. I think a lot of folks forget how daunting most people find using a terminal, yet a lot of those people still want something to do a simple lossless trim of an existing video or some other little tweak. It’s good that they have both options (and more).
Yes, I am not opposed to ffmpeg wrappers in and of themselves. Some decent ffmpeg wrappers definitely exist. But I argue in my comment above that this <i>specific</i> tool does <i>not</i> have better QoL - again, since it reencodes unconditionally with quality settings that are usually not configurable.
> Days since last ffmpeg CLI wrapper: 0<p>>It's incredible what lengths people go to to avoid memorizing basic ffmpeg usage. It's really not that hard, and the (F.) manual explains the basic concepts fairly well.<p>Not really sure how else I was supposed to interpret your comment but clarification taken.<p>> But I argue in my comment above that this specific tool does not have better QoL<p>For some folks it may be better/more intuitive. It doesn’t hurt anybody by existing.<p>We all compromise with different tools in our lives in different ways. It just reads to me like an odd axe to grind.<p>Simply put: What is so bad about the existence of this project?
> Not really sure how else I was supposed to interpret your comment<p>Yes, that was a bit facetious of me, I apologize for that.<p>> What is so bad about the existence of this project?<p>Being very blunt: The fact that it reinforces the <i>extremely common</i> misconception that a) converting between containers like mkv and mp4 will always require reencoding and that b) there is a single way to reencode a video (hence suggesting that there is no "bad" way to reencode a video), seeing as next to no encoding settings are exposed.
You are overthinking this way too much, to the point that it is sounding like you are purposefully creating out-of-context problems to justify your way too long rant.<p>As the kids these days say: just take the L, man.
I get what you’re saying but at the end of the day you just need to think about how most people use a tool like this. They’re looking for a simple solution to some specific problem and then they’re likely never using it again. They don’t want to deal with a full-on NLE, and iMovie or whatever they have stocked is not cutting it. It’s not worth getting bent out of shape about it ultimately. There are tons of people who use ffmpeg as intended in its original form and more or less understand everything that is going on. The reason we have so many wrappers and variations all centered around ffmpeg is because of how useful it is, so it’s clearly here to stay.<p>I personally use Lossless Cut more than ffmpeg in the terminal just because I don’t have to really think about it and it can do most of what I need, which is simply removing or attaching things together without re-encoding. I use it maybe once every month or two, because it’s just not something I need to use a ton, so it doesn’t make sense for me to get down and dirty with the original. Ultimately I get what I need and I’m happy!
You know, writing code that doesn't leak memory is really not that hard.<p>There. I've debunked Java, Python, PHP, Perl, and Rust.<p>(Or maybe, just maybe, tools should make our lives easier.)
sure here's a command that a program I wrote to record my practicing and produce different mixes uses<p><pre><code> /usr/bin/ffmpeg -i "/path/to/musicfile.mp3" -i "/path/to/covertune.mp3" \
-filter_complex "[1:a]volume=1[track1];[0:a][track1]amix=normalize=false[output]" \
-map "[output]" -b:a 192k -metadata title=15:17:01 -metadata "artist=Me, 2025" \
-metadata album=2025-12-23 "/path/to/file.mix.mp3"
</code></pre>
chance of my coming up with that without deep poring over docs and tons of trial and error, or using claude (which is pretty much what I do nowadays): zero
But the chances of you being able to achieve the same with the linked tool are also zero. That's all I am really saying. I'm not disputing that ffmpeg can get very complex (I was talking about "basic" ffmpeg usage in my original comment), just that `ff convert inputfile to ext` is not really simpler than `ffmpeg -i inputfile outputfile.ext`, which is all that this (<i>this specific</i>) tool is really doing.
so you know how to swap audio with -map without having to look it up?
[dead]
[dead]
When converting video to gif, I always use palettegen, e.g.<p><pre><code> ffmpeg -i input.mp4 -filter_complex "fps=15,scale=640:-2:flags=lanczos,split[a][b];[a]palettegen=reserve_transparent=off[p];[b][p]paletteuse=dither=sierra2_4a" -loop 0 output.gif
</code></pre>
See also: this blog post from 10 years ago [1]<p>[1] <a href="https://blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html" rel="nofollow">https://blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html</a>
In many cases today “gif” is a misnomer anyway and mp4 is a better choice. Not always - not everywhere supports actual video.<p>But one case I see often: if you’re making a website with an animated gif that’s actually a .gif file, try it as an mp4 - smaller, smoother, proper colors, can still autoplay fine.
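The usual conversion is a one-liner (a sketch with hypothetical filenames; yuv420p and even dimensions are what most browsers and players expect from H.264):<p><pre><code>  ffmpeg -i anim.gif -movflags +faststart -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" anim.mp4
</code></pre>
On the page you then drop the mp4 into a muted, looping, autoplaying video element instead of an img tag.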
I've been thinking of integrating pngquant as an ffmpeg filter, it would make it possible to generate even better palettes. That would get ffmpeg on par with gifski.
Does ffmpeg's gif processing support palette-per-frame yet? Last time I compared them (years ago, maybe not long after that blog post), this was a key benefit of gifski allowing it to get better results for the same filesize in many cases (not all, particularly small images, as the total size of the palette information can be significant).
Gifski (<a href="https://gif.ski/" rel="nofollow">https://gif.ski/</a>) might be a good alternative to look to that's gif-palette aware.
It’s a shame this isn’t the default.
I use `split[s0][s1];[s0]palettegen=max_colors=64[p];[s1][p]paletteuse=dither=bayer` personally, limiting the number of colors is a great way to transparently (to a certain point, try with different values) improve compression, as is bayer (ordered) dithering which is almost mandatory to not explode output filesizes.
Those command flags just roll off the tongue like two old friends catching up!<p>/s
I like it and would like to see an entire Linux OS being done in a similar manner. Or shell / wrapper / whatever.<p>A sane homogeneous cli for once, that treats its user as a human instead of forcing them to remember the incompatible invocation options of `tar` and `dd` for absolutely no reason.<p><pre><code> zip my-folder into my-zip.tar with compression level 9
write my-iso ./zip.zip onto external hard drive
git delete commit 1a4db4c
convert ./video.mp4 and ./audio.mp3 into ./out.mp4
merge ./video.mp4 and ./audio.mp3 to ./out.mp4 without re-encoding
</code></pre>
And add amazing autocomplete, while allowing as many wordings as possible. No need for LLMs.<p>One can dream.
I think you may enjoy [Nushell](<a href="https://www.nushell.sh" rel="nofollow">https://www.nushell.sh</a>)
> write my-iso ./zip.zip onto external hard drive<p>Dang! not <i>that one</i>, the other one!<p>> zip my-folder into my-zip.tar with compression level 9<p>What do you mean, I don't have write permissions in the current working directory? I meant for you to put the output in $HOME, i mean /tmp, i mean /var/tmp, i mean on the external hard drive, no other other one.<p>> git delete commit 1a4db4c<p>What did you do? I didn't mean delete it and erase it from the reflog and run gc! I just mean "delete it" the way any one would ever mean that! I can never get it back now!
Why not use Windows or macOS then? You don't need to use shells there.<p>I would prefer not to change the technical aspects of Linux. I actually cherish it.
See my more generalized CLI helper which does exactly this:<p><a href="https://github.com/dheera/scripts/blob/master/helpme" rel="nofollow">https://github.com/dheera/scripts/blob/master/helpme</a><p>Example usage:<p><pre><code> helpme ffmpeg assemble all the .jpg files into an .mp4 timelapse video at 8fps
helpme zip my-folder into my-zip.tar with compression level 9
helpme git delete commit 1a4db4c
...
</code></pre>
This originated from an ffmpeg wrapper I wrote but then realized it could be used for all commands:<p><a href="https://news.ycombinator.com/item?id=40410637">https://news.ycombinator.com/item?id=40410637</a>
The one good usecase I've found for AI chatbots, is writing ffmpeg commands. You can just keep chatting with it until you have the command you need. Some of them I save as an executable .command, or in my .txt note.
LLMs are an amazing advance in natural language parsing.<p>The problem is someone decided that this, plus the contents of Wikipedia, was all something needs to be intelligent haha
The confusion was thinking that language is the same thing as intelligence.
You and me are great examples of that. We are both extremely stupid and yet we can speak.
This seems like a glib one liner but I do think it is profoundly insightful as to how some people approach thinking about LLMs.<p>It is almost like there is hardwiring in our brains that makes us instinctively correlate language generation with intelligence and people cannot separate the two.<p>It would be like if the first calculators ever produced, instead of responding with 8 to the input 4 + 4 =, had printed out "Great question! The answer to your question is 7.98", and that had resulted in a slew of people proclaiming the arrival of AGI (or, more seriously, the ELIZA Effect is a thing).
And reddit, that bastion of human achievement.
As pessimistic about it as I am, I do think LLMs have a place in helping people turn their text description into formal directives. (Search terms, command-line, SQL, etc.)<p>... <i>Provided that</i> the user sees what's being made for them and can confirm it and (hopefully) <i>learn</i> the target "language."<p>Tutor, not a do-for-you assistant.
I agree apart from the learning part. The thing is, unless you have some very specific needs where you need to use ffmpeg a lot, there’s just no need to learn this stuff. If I have to touch it once a year I have much better things to spend my time learning than ffmpeg commands.
Agreed. I have a bunch of little command-line apps that I use 0.3 to 3 times a year* and I'm never going to memorize the commands or syntax for those. I'll be happy to remember the names of these tools, so I can actually find them on my own computer.<p>* - Just a few days ago I used ImageMagick for the first time in at least three years. I downloaded it just to find that I already had it installed.
There is no universe where I would like to spend brain power on learning ffmpeg commands by heart.
No one learns those. What people do learn is the UX of the CLI and the terminology (codec, Opus, bitrate, sampling, …).
The thing about ffmpeg is there's no substitute for learning. It's pretty common that something simple like "ff convert" simply doesn't work and you have to learn about resolution, color space, profiles, or container formats. An LLM can help but earlier this year I spent a lot of time looking at these sorts of edge cases, and I can easily make any LLM wildly hallucinate by asking questions about how to use ffmpeg to handle particular files.
Do most devs even look at the source code for packages they install? Or the compiled machine code? I think of this as just a higher level of abstraction. Confirm it works and don't worry about the details of how it works.
For the kinds of things you’d need to reach for an LLM, there’s no way to trust that it actually generated what you actually asked for. You could ask it to write a bunch of tests, but you still need to read the tests.<p>It isn’t fair to say “since I don’t read the source of the libraries I install that are written by humans, I don’t need to read the output of an llm; it’s a higher level of abstraction” for two reasons:<p>1. Most libraries worth using have already been proven by being used in actual projects. If you can see that a project has lots of bug fixes, you know it’s better than raw code. Most bugs don’t show up unless code gets put through its paces.<p>2. Actual humans have actual problems that they’re willing to solve to a high degree of fidelity. This is essentially saying that humans have both a massive context window and an even more massive ability to prioritize important things that are implicit. LLMs can’t prioritize like humans because they don’t have experiences.
I don’t, because I trust the process that produced the artifacts. Why? Because it’s easy to replicate and verify, just like how proof works in math.<p>You can’t verify an LLM’s output. And thus, any form of trust is faith, not rational logic.
I don't install 3rd party dependencies if I can avoid them. Why? Because although someone could have verified them, there's no guarantee that anybody actually did, and this difference has been exploited by attackers often enough to get its own name, a "supply-chain attack".<p>With an LLM’s output, it is short enough that I can* put in the effort to make sure it's not obviously malicious. Then I save the output as an artefact.<p>* and I do put in this effort, unless I'm deliberately experimenting with vibe coding to see what the SOTA is.
> Because although someone could have verified them, there's no guarantee that anybody actually did<p>In the case of npm and the like, I don't trust them because they are demonstrably using insecure procedures, and the vectors of attack are well known. But I do trust Debian and the binaries they provide, as the risks there are the Debian infrastructure being compromised, malicious code in the original source, and cryptographic failures. All three are possible, but there's more risk of bodily harm to myself than of them happening.
If you stretch it a little further, those formal directives also include the language and vocabulary of a particular domain (legalese, etc…).
The "provided" isn't provided, of course, especially the learning part; that's not what you'd turn to AI for versus more reliable tutoring alternatives.
One that older AI struggled with was the "bounce" effect: play from 0:00 to 0:03, then backwards from 0:03 to 0:00, then repeat 5 times.
Just tried it and got this, is it correct?<p>> Write an ffmpeg command that implements the "bounce" effect: play from 0:00 to 0:03, then backwards from 0:03 to 0:00, then repeat 5 times.<p><pre><code> ffmpeg -i input.mp4 \
-filter_complex "
[0:v]trim=0:3,setpts=PTS-STARTPTS[f];
[f]reverse[r];
[f][r]concat=n=2:v=1:a=0[b];
[b]loop=loop=4:size=150:start=0
" \
output.mp4</code></pre>
Thanks, but no luck. I tested it on a 3 second video, and got a 6 second video. I.e. it bounced 1 time, not 5 times.<p>Maybe this should be an AI reasoning test.<p>Here is what eventually worked, iirc (10 bounces):<p><pre><code> ffmpeg -i input.mkv -filter_complex "split=2[fwd][rev_in]; [rev_in]reverse[rev]; [fwd][rev]concat=n=2,split=10[p1][p2][p3][p4][p5][p6][p7][p8][p9][p10]; [p1][p2][p3][p4][p5][p6][p7][p8][p9][p10]concat=n=10[outv]" -map "[outv]" -an output.mkv</code></pre>
But doesn't something like this interface kind of show the inefficiency of this? Like we can all agree ffmpeg is somewhat esoteric and LLMs are probably really great at it, but at the end of the day if you can get 90% of what you need with just some good porcelain, why waste the energy spinning up the GPU?
Requiring the installation of a massive kraken like node.js and npm to run a commandline executable hardly screams efficiency...
Because FFmpeg is a swiss army knife with a million blades and I don't think any easy interface is really going to do the job well. It's a great LLM use case.
But you only need to find the correct tool once and mark it in some way. Aka write a wrapper script, jot down some notes. You are acting like you’re forced to use the cli each time.
I know everybody uses a subscription for these things, but doesn't it at least <i>feel</i> expensive to use an LLM like this? Like turning on the oven to heat up a single slice of pizza.
No, LLMs are extremely useful for dealing with ffmpeg. Also I don't think they're sufficient, they get confused too easily and ffmpeg is extremely confusing.
ChatGPTs free tier is just fine for me.
Because getting 90% might not be good enough, and the effort you need to expend to reach 97% costs much more than the energy the GPU uses.
Because the porcelain is purpose built for a specific use case. If you need something outside of what its author intended, you'll need to get your hands dirty.<p>And, realistically, compute and power is cheap for getting help with one-off CLI commands.
Can't access the github repo <a href="https://github.com/josharsh/ezff" rel="nofollow">https://github.com/josharsh/ezff</a>
Same here, I get a 404 from GitHub. That link is at the bottom of the submitted npmjs page.
yeah me too but npm has the code tab <a href="https://www.npmjs.com/package/ezff?activeTab=code" rel="nofollow">https://www.npmjs.com/package/ezff?activeTab=code</a>
I would definitely use an LLM, to see what the suggested options do and tweak them.<p>Using a different package name could be helpful. I searched for ezff docs and found a completely different Python library. Also ez-ffmpeg turns up a Rust lib which looks great if calling from Rust.
LLMs are a great interface for ffmpeg. Sometimes it takes 2-3 attempts/fixes ("The subtitles in the video your command generated are offset: i see the subtitles from the beginning of the movie but the video is cut from the middle of the movie as requested, fix the command") but it generally creates complex commands much more quickly than manual work (reading the man page, crafting the command, debugging it) would.
> it handles 20 common patterns ... that cover 90%<p>Could you elaborate on this? I see a lot of AI-use and I'm wondering if this is claude speaking or you
npm? Have we learned nothing from the weekly node/npm security breaches? Not putting that hot mess anywhere near my dev box, thanks.
The total upheaval of the current computing paradigm that AI will bring, if nothing else, is<p>"Hey computer, can you convert that funny kitchen cooking scene in this movie to a .gif I can share online?"<p>You're wasting your time on a dead man walking paradigm doing anything else. "Plain English" <i>actually</i> means plain English now.
No-AI is appealing, but there is the cliff problem. If there is one small thing the mini-language can't handle, the user would have no chance of solving it themselves. They might as well start with an LLM solution first.<p>One workaround: when there is a syntax error, let the user optionally switch to an LLM?
That's the problem ideally solved by typed data, i.e., some UI where instead of trying to memorize whether it's thumb/s/nails you can read the closed list of alternatives, read contextual help and pick one
Somehow it seems ffmpeg has become the "Can it run crysis" of UX design
This looks handy... along with the odd gist of "convert mkv to mp4" that I have to use every other week.<p>Quite telling that these tools need to exist to make ffmpeg actually usable by humans (including very experienced developers).
I figure out the niche ffmpeg commands, various chain filters, etc.,
then expose them from my Python CLI tool with words similar to what this gentleman above has done.<p>If one has fewer such commands, it's as simple as bash aliases added to ~/.bashrc:<p>alias convertmkvtomp4='ffmpeg command'<p>then just run it anytime with that alias phrase.
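A filled-in version of that idea might look something like this (just a sketch; the name and the copy-only behaviour are examples, and it's written as a shell function rather than an alias because the filename needs to be an argument):<p><pre><code>  # in ~/.bashrc; usage: mkv2mp4 video.mkv
  mkv2mp4() { ffmpeg -i "$1" -c copy "${1%.mkv}.mp4"; }
</code></pre>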
I use ffmpeg a lot, so I have my own dedicated CLI snippet tool to quickly build out complex pipelines in easier language.<p>The best part is I have a --dry-run that exposes the flow plus the explicit commands being used at each step, if I need details on what's happening and verbose output at each step.
I have a text file with some common commands, so no tools needed.<p>But yeah, ffmpeg is awesome software, one of the great OSS projects IMO. Working with video is hellish, and it makes it possible.
I can only speak to my experience, but I spent a long time being puzzled by video editor user interfaces, until I ran into ScreenFlow about ten years ago. For whatever reason, the UI clicked, and I've used it ever since. It's a single purchase, not monthly, and relatively affordable. <a href="https://www.telestream.net/screenflow/overview.htm" rel="nofollow">https://www.telestream.net/screenflow/overview.htm</a>
I like the idea, but a CLI utility dependent on Node.js is not a good thing frankly.
I agree. Apart from having to use npm (and its package repository being susceptible to security issues), I’d prefer something a lot simpler. Could’ve been a Rust program or a Go program (a single executable) that could be built locally or installed (using several different methods and offering a choice).
That ship sailed some time ago.
I actually just use Claude code. “Stabilize the video x.mp4 and keep my daughter Astra as the subject. Convert it to a GIF that is under a megabyte”. It does a great job.<p>It will sample images from the video then go crop the video to that, stabilize if required, and then make me an optimized GIF that I can put in my weekly journal.
Very cool idea since ffmpeg is one of those tools that has a few common tasks but most users would need to look up the syntax every time to implement them (or make an alias). In line with the ease of use motivation, you might consider supporting tab completion.
Inspiring! I just asked Cursor to make llmwrap inspired by this, it's like rlwrap (readline wrap) but with LLMs!<p><a href="https://github.com/sirodoht/llmwrap" rel="nofollow">https://github.com/sirodoht/llmwrap</a>
GitHub repo link returns 404.
I have a little script that I use on the CLI to do this kind of stuff (calls an LLM to figure out how to do CLI stuff) but you can just as easily now use any of the coding agents.
That's beautiful! I see a .claude folder in your code; I am curious if you've "vibecoded" the whole project or just had Claude there for some tasks! Not that it matters or takes away from your work, just pure curiosity from someone who enjoys betting on the LLM output XD
This is very nice. When I use ffmpeg recently I usually ask an LLM first but it often takes a few tries to get the exact incantation right.<p>On a side note (I’m not a web developer), why would a command line tool like this be written and distributed using node.js? Seems like an unnecessary risk to use JavaScript for a basic (local) command line tool. Couldn’t this be written more simply in like Rust or something?
Small English nitpick:<p>> ff slow down video.mp4 by 2x<p>How do you slow something down by 2x? x is a multiplier. 2 is a number greater than 1. Multiplying by a number greater than 1 makes the result LARGER.<p>If you’re talking about “stretch movie duration to 2x”, <i>say that instead</i>.<p>Saying something is 2x smaller or 2x shorter or 2x cheaper doesn’t make sense. 2x what? What is the unit of “cheap” or “short” or “small”?<p>How much is “1 slow down”? How many “slow down” are in the movie where you want twice as many of them? Doesn’t make sense does it? So how can something be slowed by 2x? That also doesn’t make sense.<p>I know what is trying to be said. I know what is meant. Please just say it right. Things like this throw us autistic people for a freaking loop, man. This really rustles our jimmies.<p>Language is for communicating. If we aren’t all on the same page about how to say stuff, you spend time typing and talking and writing and reading and your message doesn’t make it across the interpersonal language barrier.<p>I don’t want to see people wasting their time trying to communicate good ideas with bad phrasing. I want people to be able to say what they mean and move on.<p>I also don’t want to nitpick things like this, but I don’t want phrases like “slow down by 2x” to be considered normal English, either, because they aren’t.
Isn’t it somewhat common to say something like “slow this down by a factor of 2”?
Reminds me of a thing Steve Mould mentioned in a video about a claim in a book "The temperature outside an aeroplane is six times colder than the temperature inside a freezer."<p><a href="https://www.youtube.com/watch?v=C91gKuxutTU" rel="nofollow">https://www.youtube.com/watch?v=C91gKuxutTU</a> - Stand-up comedy routine about bad science
I was surprised that macOS (QuickTime/Preview, iMovie) can't read .mp4 files. Not sure if it was due to H.265 or the audio codec. I tried using ffmpeg to convert to .mov but that also failed to open, since I guess MOV is just another container format.<p>Is there an easier way?
MP4 is a container, not a format, so if you have an unsupported format packed into the MP4 container it won’t be played. An example is trying to play the AV1 video codec on devices with an M2 chip or older: it won’t play there, but it will play on devices with an M3 chip and newer. The easiest solution is to use another player, so that you can watch any MP4 file but with software decoding where hardware decoding is not available. Examples of such players are MPV and VLC.
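If you want to check what is actually inside the file before picking a player (a sketch; input.mp4 stands for whatever file refuses to open):<p><pre><code>  ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of default=noprint_wrappers=1:nokey=1 input.mp4
</code></pre>
That prints e.g. hevc or av1 and tells you whether you are fighting the container or the codec.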
Yes, VLC works fine for playing. The user wanted to edit some mp4 videos with iMovie (vs ffmpeg).<p>I think it was an M4 Mac. Does iMovie need a codec pack? I know some PC OEMs don't ship an h.265 codec, pointing users to a $0.99 download. Thought Mac would include it, being aimed at content creators. Hoping for a cheaper solution than Adobe Premiere.
IMHO the de-facto video player for macOS is [IINA](<a href="https://iina.io/" rel="nofollow">https://iina.io/</a>).
Try something like: ffmpeg -i in.mp4 -c:v h264 -c:a aac out.mp4<p>To re-encode the content into H.264+AAC, rather than simply "muxing" the encoded bitstreams from the MP4 container into a new MOV container.
vlc
I would love to see something like this for OpenSSL
Interesting approach. I solved a similar problem by creating a visual tool to generate ffmpeg commands, but it's not the same (it can't do conversion etc.).<p>I like that you took a no-AI approach; I am looking for something like this, i.e. understanding intent and generating a command without using AI, but so far regex-based approaches have proved to be inadequate. I also tried indexing keywords and creating an index of keywords with similar meanings; that improved the situation a bit, but without something heavy like BERT it's always a subpar experience.
Thanks, will definitely check this out.<p>Has anyone else been avoiding typing FFmpeg commands by using file:// URLs with yt-dlp?
Sometimes an idea comes along that's so obvious it makes me angry. I have been struggling with ffmpeg commands for well over a decade. All the time I wasted googling and creating scripts so I wouldn't have to re-google, and this could have existed literally from day one.
There is no need for a wrapper or for memorizing syntax in our new LLM world.
claude cli for ffmpeg is op lol
Uhm... Millibit, Millibyte, Megabit, Megabyte?
See also:<p><a href="https://github.com/dheera/scripts/blob/master/helpme" rel="nofollow">https://github.com/dheera/scripts/blob/master/helpme</a><p><pre><code> helpme ffmpeg assemble all the .jpg files into an .mp4 timelapse video at 8fps
</code></pre>
This evolved from an ffmpeg wrapper I wrote before:<p><a href="https://news.ycombinator.com/item?id=40410637">https://news.ycombinator.com/item?id=40410637</a>
[dead]
[flagged]