Destructors are vastly superior to the finally keyword because they only require us to remember to release resources in one place (the destructor) as opposed to in every finally clause. For example, a file always closes itself when it goes out of scope instead of having to be explicitly closed by whoever opened it. The syntax is also less cluttered, with less indentation, especially when multiple objects are created that would otherwise require nested try... finally blocks. Not to mention how branching and conditional initialization complicate things. You can often pair up constructors with destructors in the code so that it becomes very obvious when resource acquisition and release do not match up.
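For concreteness, a minimal sketch with std::lock_guard (nothing project-specific assumed): the destructor releases the mutex on every exit path, so there is no per-call-site finally clause to forget.<p><pre><code> #include &lt;mutex&gt;
 #include &lt;stdexcept&gt;

 std::mutex g_mutex;

 void update_state(bool fail) {
     std::lock_guard&lt;std::mutex&gt; guard(g_mutex); // acquired here
     if (fail) {
         throw std::runtime_error("oops");        // guard's destructor still unlocks
     }
     // ... normal work ...
 }                                                // unlocked here on the happy path too
</code></pre>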
I couldn't agree more. And in the rare cases where destructors do need to be created inline, it's not hard to combine destructors with closures into library types.<p>To point at one example: we recently added `std::mem::DropGuard` [1] to Rust nightly. This makes it easy to quickly create (and dismiss) destructors inline, without the need for any extra keywords or language support.<p>[1]: <a href="https://doc.rust-lang.org/nightly/std/mem/struct.DropGuard.html" rel="nofollow">https://doc.rust-lang.org/nightly/std/mem/struct.DropGuard.h...</a>
The scope guard statement is even better!<p><a href="https://dlang.org/articles/exception-safe.html" rel="nofollow">https://dlang.org/articles/exception-safe.html</a><p><a href="https://dlang.org/spec/statement.html#ScopeGuardStatement" rel="nofollow">https://dlang.org/spec/statement.html#ScopeGuardStatement</a><p>Yes, D also has destructors.
I use this library a lot for scope guards in C++ <a href="https://github.com/Neargye/scope_guard" rel="nofollow">https://github.com/Neargye/scope_guard</a>, especially for rolling back state on errors, e.g.<p>In a function that inserts into 4 separate maps, and might fail between each insert, I'll add a scope exit after each insert with the corresponding erase.<p>Before returning on success, I'll dismiss all the scopes.<p>I suppose the tradeoff vs RAII in the mutex example is that with the guard you still need to actually call it every time you lock a mutex, so you can still forget it and end up with the unreleased mutex, whereas with RAII that is not possible.
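Roughly the shape of that rollback pattern, sketched here with a small hand-rolled guard and two maps instead of four for brevity (RollbackGuard and insert_both are made-up names for illustration, not the library's exact API):<p><pre><code> #include &lt;functional&gt;
 #include &lt;map&gt;
 #include &lt;string&gt;

 // Minimal hand-rolled guard for illustration; the library version is more polished.
 class RollbackGuard {
 public:
     explicit RollbackGuard(std::function&lt;void()&gt; f) : f_(std::move(f)) {}
     ~RollbackGuard() { if (f_) f_(); }
     void dismiss() { f_ = nullptr; }
 private:
     std::function&lt;void()&gt; f_;
 };

 bool insert_both(std::map&lt;int, std::string&gt;&amp; a, std::map&lt;int, std::string&gt;&amp; b,
                  int key, const std::string&amp; value) {
     if (!a.emplace(key, value).second) return false;
     RollbackGuard undo_a([&amp;] { a.erase(key); });     // roll back the first insert on failure

     if (!b.emplace(key, value).second) return false;  // undo_a erases from 'a' here

     undo_a.dismiss();                                 // success: keep both inserts
     return true;
 }
</code></pre>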
Scope guards are neat, particularly since D has had them since 2006! (<a href="https://forum.dlang.org/thread/dtr2fg$2vqr$4@digitaldaemon.com" rel="nofollow">https://forum.dlang.org/thread/dtr2fg$2vqr$4@digitaldaemon.c...</a>) But they are syntactically confusing since they look like function invocations with some kind of aliased magic value passed in.
A writable file closing itself when it goes out of scope is usually not great, since errors can occur when closing the file, especially when using networked file systems.<p><a href="https://github.com/isocpp/CppCoreGuidelines/issues/2203" rel="nofollow">https://github.com/isocpp/CppCoreGuidelines/issues/2203</a>
You need to close it and check for errors as part of the happy path. But it's great that in the error path (be that using an early return or throwing an exception), you can just forget about the file and you will never leak a file descriptor.<p>You may need to unlink the file in the error path, but that's best handled in the destructor of a class which encapsulates the whole "write to a temp file, rename into place, unlink on error" flow.
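A rough sketch of what such a class could look like (AtomicFileWriter is a made-up name; real code would want richer error handling): commit() closes and renames explicitly on the happy path, while the destructor drops the temp file if we never got that far.<p><pre><code> #include &lt;cstdio&gt;
 #include &lt;stdexcept&gt;
 #include &lt;string&gt;

 class AtomicFileWriter {
 public:
     explicit AtomicFileWriter(std::string target)
         : target_(std::move(target)), temp_(target_ + ".tmp"),
           file_(std::fopen(temp_.c_str(), "wb")) {
         if (!file_) throw std::runtime_error("cannot open " + temp_);
     }

     ~AtomicFileWriter() {
         if (file_) std::fclose(file_);                 // error path: best effort
         if (!committed_) std::remove(temp_.c_str());   // and drop the partial file
     }

     void write(const char* data, std::size_t n) {
         if (std::fwrite(data, 1, n, file_) != n)
             throw std::runtime_error("write failed");
     }

     // Happy path: close and rename explicitly so errors can be reported.
     void commit() {
         if (std::fclose(file_) != 0) { file_ = nullptr; throw std::runtime_error("close failed"); }
         file_ = nullptr;
         if (std::rename(temp_.c_str(), target_.c_str()) != 0)
             throw std::runtime_error("rename failed");
         committed_ = true;
     }

 private:
     std::string target_, temp_;
     std::FILE* file_ = nullptr;
     bool committed_ = false;
 };
</code></pre>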
Any fallible cleanup function is awkward, regardless of error handling mechanism.
The entire point of the article is that you cannot throw from a destructor. Now how do you signal that closing/writing the file in the destructor failed?
You are allowed to throw from a destructor as long as there's not already an active exception unwinding the stack. In my experience this is a total non-issue for any real-world scenario. Propagating errors from the happy path matters more than situations where you're already dealing with a live exception.<p>For example: you can't write to a file because of an I/O error, and when throwing that exception you find that you can't close the file either. What are you going to do about that other than possibly log the issue in the destructor? Wait and try again until it can be closed?<p>If you really must force Java semantics into it with chains of exception causes (as if anybody handled those gracefully, ever) then you can. Get the current exception and store a reference to the new one inside the first one. But I would much rather use exceptions as little as possible.
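For the record, the usual pattern looks roughly like this (a sketch assuming C++17 for std::uncaught_exceptions()): the destructor is declared noexcept(false) and only throws when it isn't already running as part of stack unwinding.<p><pre><code> #include &lt;exception&gt;
 #include &lt;stdexcept&gt;

 class Committer {
 public:
     ~Committer() noexcept(false) {
         // Only propagate an error if we are not already unwinding from another exception.
         if (std::uncaught_exceptions() == uncaught_on_entry_) {
             throw std::runtime_error("commit failed"); // pretend the cleanup failed
         }
         // Otherwise we are already unwinding: swallow (or just log) instead of terminating.
     }
 private:
     int uncaught_on_entry_ = std::uncaught_exceptions();
 };
</code></pre>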
Just panic. What's the caller realistically going to do with that information?
> The entire point of the article is that you cannot throw from a destructor.<p>You need to read the article again, because your assertion is patently false. You can throw and handle exceptions in destructors. What you cannot do is let those exceptions escape the destructor, because per the standard an exception that does so terminates the application immediately.
You can throw <i>in</i> a destructor but not <i>from</i> one, as the quoted text rightly notes.
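A minimal illustration of the distinction: throwing and catching inside a destructor is fine, while letting an exception escape a destructor (implicitly noexcept since C++11) calls std::terminate at runtime.<p><pre><code> #include &lt;cstdio&gt;
 #include &lt;stdexcept&gt;

 struct CatchesInside {
     ~CatchesInside() {
         try {
             throw std::runtime_error("cleanup hiccup");          // thrown *in* the destructor
         } catch (const std::exception&amp; e) {
             std::fprintf(stderr, "swallowed: %s\n", e.what());   // handled before it escapes
         }
     }
 };

 struct LetsItEscape {
     ~LetsItEscape() {
         // Compiles (compilers typically warn), but escaping a noexcept destructor
         // calls std::terminate at runtime.
         throw std::runtime_error("boom");
     }
 };
</code></pre>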
So inside a destructor throw has a radically different behaviour that makes it useless for communicating non-fatal errors
> So inside a destructor throw has a radically different behaviour that makes it useless for communicating non-fatal errors<p>It's weird how you tried to frame a core design feature of the most successful programming language in the history of mankind as "useless".<p>Perhaps the explanation lies in how you tried to claim that exceptions had any place in "communicating non-fatal errors", not to mention that your scenario, handling non-fatal errors when destroying a resource, is fundamentally meaningless.<p>Perhaps you should take a step back and think whether it makes sense to extrapolate your mental models to languages you're not familiar with.
That tastes like leftover casserole instead of pizza.
But they're addressing different problems.<p>Sure, destructors are great, but you still want a "finally" for stuff you can't do in a destructor.
Python has that too, it's called a context manager, basically the same thing as C++ RAII.<p>You can argue that RAII is more elegant, because it doesn't add one mandatory indentation level.
It's not the same thing at all because you have to remember to use the context manager, while in C++ the user doesn't need to write any extra code to use the destructor, it just happens automatically.
To be fair, that's just an artifact of Python's chosen design. A different language could make it so that acquiring the object whose context is being managed could require one to use the context manager. For example, in Python terms, imagine if `with open("foo") as f:` was the <i>only</i> way to call `open`, and gave an error if you just called it on its own.
How do you return a file in the happy path when using a context manager?<p>If you can't, it's not remotely "basically the same as C++ RAII".
Destructors and finally clauses serve different purposes IMO. Most of the languages that have finally clauses also have destructors.<p>> Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks.<p>I think that's more of a point against try...catch/maybe exceptions as a whole, rather than the finally block. (Though I do agree with that. I dislike that aspect of exceptions, and generally prefer something closer to std::expected or Rust Result.)
> Most of the languages that have finally clauses also have destructors.<p>Hm, is that true? I know of finally from Java, JavaScript, C# and Python, and none of them have proper destructors. I mean some of them have object finalizers which can be used to clean up resources whenever the garbage collector comes around to collect the object, but those are not remotely similar to destructors which typically run deterministically at the end of a scope. Python's 'with' syntax comes to mind, but that's very different from C++ and Rust style destructors since you have to explicitly ask the language to clean up resources with special syntax.<p>Which languages am I missing which have both try..finally and destructors?
In C# the closest analogue to a C++ destructor would probably be a `using` block. You’d have to remember to write `using` in front of it, but there are static analysers for this. It gets translated to a `try`–`finally` block under the hood, which calls `Dispose` in `finally`.<p><pre><code> using (var foo = new Foo())
{
}
// foo.Dispose() gets called here, even if there is an exception
</code></pre>
Or, to avoid nesting:<p><pre><code> using var foo = new Foo(); // same but scoped to closest current scope
</code></pre>
There is also `await using` in case the cleanup is async (`await foo.DisposeAsync()`).<p>I think Java has something similar called try-with-resources.
Java's is<p><pre><code> try (var foo = new Foo()) {
}
// foo.close() is called here.
</code></pre>
I like the Java method for things like files because if there's an exception during the close of a file, the regular `IOException` block handles that error the same way it handles a read or write error.
That approach doesn't allow you to move the file into some long lived object or return it in the happy path though, does it?
As someone coming from RAII to C#, you get used to it, I'd say. You "just" have to think differently. Lean into records and immutable objects whenever you can, and the IDisposable interface ("using") when you can't. It's not perfect, but neither is RAII. I'm on a learning path, but I'd say I'm more productive in C# than I ever was in C++.
I agree with this. I don't dislike non-RAII languages (even though I do prefer RAII). I was mostly asking a rhetorical question to point out that it really isn't the same at all. As you say, it's not a RAII language, and you have to think differently than when using a RAII language with proper destructors.
Pondering: is there a language similar to C++ (whatever that means, given how huge it is, but I guess a sprinkle of "don't pay for what you don't use" plus being compiled) which has no raw pointers and the like (sacrificing C compatibility) but is otherwise pretty similar to C++?
Rust is the only one I really know of. It's many things to many people, but to me as a C++ developer, it's a C++ with a better template model, better object lifetime semantics (destructive moves <3) and without all the cruft stemming from C compat and from the 40 years of evolution.<p>The biggest essential differences between Rust and C++ are probably the borrow checker (sometimes nice, sometimes just annoying, IMO) and the lack of class inheritance hierarchies. But both are RAII languages which compile to native code with a minimal runtime, both have a heavy emphasis on generic programming through templates, both have a "C-style syntax" with braces which makes Rust feel relatively familiar despite its ML influence.
You can move the burden of disposing to the caller (return the disposable object and let the caller put it in a using statement).<p>In addition, if the caller itself is a long-lived object it can remember the object and implement dispose itself by delegating. Then the user of the long-lived object can manage it.
Technically CPython has deterministic destructors: __del__ gets called immediately when the ref count drops to zero. But that's just an implementation detail, not something the language spec guarantees.
I don't view finalizers and destructors as different concepts. The notion only matters if you actually need cleanup behavior to be deterministic rather than just eventual, or you are dealing with something like thread locals. (Historically, C# even simply called them destructors.)
There's a huge difference in programming model. You can rely on C++ or Rust destructors to free GPU memory, close sockets, free memory owned through an opaque pointer obtained through FFI, implement reference counting, etc.<p>I've had the displeasure of fixing a Go code base where finalizers were actively used to free opaque C memory and GPU memory. The Go garbage collector obviously didn't consider it high priority to free these 8-byte objects which just wrap a pointer, because it didn't know that the objects were keeping tens of megabytes of C or GPU memory alive. I had to touch so much code to explicitly call Destroy methods in defer blocks to avoid running out of memory.
> I don't view finalizers and destructors as different concepts.<p>They are fundamentally different concepts.<p>See <i>Destructors, Finalizers, and Synchronization</i> by Hans Boehm - <a href="https://dl.acm.org/doi/10.1145/604131.604153" rel="nofollow">https://dl.acm.org/doi/10.1145/604131.604153</a>
It would suffice to say I don't always agree with even some of the best in the field, and they don't always agree with each other, either. Anders Hejlsberg isn't exactly a random n00b when it comes to programming language design and still called the C# equivalent a "destructor", though it is now known as a finalizer in line with other programming languages. They are things that clean up resources at the end of the life of an object; the difference between GC'd languages and RAII languages is that in a GC'd runtime the lifespan of an object is non-deterministic. That may very well change the programming model, as it does in many other ways, but it doesn't make the two concepts "fundamentally different" by any means. They're certainly related concepts...
Sometimes "eventually" means "at the end of the process". For many resources this is not acceptable.
I always wonder whether C++ syntax ever becomes readable when you sink more time into it, and if so - how much brain rewiring we would observe on a functional MRI.
It does... until you switch employers. Or sometimes even just read a coworker's code. Or even your own older code. Actually no, I don't think anyone achieved full readability enlightenment. People like me just hallucinated it after doing the same things for too long.
Sadly, that is exactly my experience.
And yet, somehow Lisp continues to be everyone's sweetheart, even though creating literal new DSLs for every project is one of the features of the language.
Lisp doesn't have much syntax to speak of. All of the DSLs use the same basic structure and are easy to read.<p>Cpp has A LOT of syntax: init rules, consts, references, move, copy, templates, special cases, etc. It also includes most of C, which is small but has so many basic language design mistakes that "C puzzles" is a book.
The syntax and the concepts (const, move, copy, etc.) are orthogonal. You could possibly write a lisp / s-exp syntax for C++ and all it would improve would be the macros in the preprocessor. And a DSL doesn't need hard syntax to be hard to read; it's enough that it relies on unfamiliar or uncommon project-specific concepts.
Yes, sure.<p>What I mean is that in C++ all the numerous language features are exposed through little syntax/grammar details, whereas in Lisps the syntax and grammar are primitive, which is why macros work so well.
It's because DSLs there reduce cognitive load for the reader rather than add to it.
I continue to believe Lisp is perfect, despite only using it in a CS class a decade ago. Come to think of it, it might just be that Lisp is a perfect DSL for (among other things) CS classes…
In my opinion, C++ syntax is pretty readable. Of course there are codebases that are difficult to read (heavily abstracted, templated codebases especially), but it's not really that different from most other languages. The same problem exists elsewhere; even C can be as bad once macros are used heavily.<p>By far the worst in this respect has been Scala, where every codebase seems to use a completely different dialect of the language, completely different constructs, etc. There seems to be very little agreement on how the language should be used. Much, much less than with C++.
It does get easy to read, but then you unlock a deeper level of misery which is trying to work out the semantics. Stuff like implicit type conversions, remembering the rule of 3 or 5 to avoid your std::moves secretly becoming copies, unwittingly breaking code because you added a template specialization that matches more than you realized, and a million others.
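A small sketch of that rule-of-3/5 trap (Logged is a made-up type): a user-declared destructor suppresses the implicit move operations, so std::move quietly falls back to the copy constructor.<p><pre><code> #include &lt;string&gt;
 #include &lt;utility&gt;
 #include &lt;vector&gt;

 struct Logged {
     ~Logged() {}                 // user-declared destructor: implicit moves are no longer generated
     std::vector&lt;std::string&gt; data;
 };

 Logged make() {
     Logged a;
     a.data.assign(1000, "payload");
     return a;
 }

 int main() {
     Logged a = make();
     Logged b = std::move(a);     // overload resolution falls back to the copy constructor
     (void)b;                     // 'a' is copied, not moved, because moves were suppressed
 }
</code></pre>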
This is correct - it does get easy to read but you are constantly considering the above semantics, often needing to check reference or compiler explorer to confirm.<p>Unless you are many of my coworkers, then you blissfully never think about those things, and have Cursor reply for you when asked about them (-:
"using namespace std;" goes a long way to make C++ more readable and I don't really care about the potential issues. But yeah, due to a lack of a nice module system, this will quickly cause problems with headers that unload everything into the global namespace, like the windows API.<p>I wish we had something like Javascript's "import {vector, string, unordered_map} from std;". One separate using statement per item is a bit cumbersome.
This is just a low-effort comment.<p>> whether C++ syntax ever becomes readable when you sink more time into it,<p>Yes, and the easy approach is to learn as you need/go.
It's very readable, especially compared to Rust.
I like how Swift solved this: there's a more universal `defer { ... }` block that's executed at the end of a given scope no matter what, and after the `return` statement is evaluated if it's a function scope. As such it has multiple uses, not just for `try ... finally`.
I think Swift’s <i>defer</i> (<a href="https://docs.swift.org/swift-book/documentation/the-swift-programming-language/statements/#Defer-Statement" rel="nofollow">https://docs.swift.org/swift-book/documentation/the-swift-pr...</a>) was inspired by/copied from go (<a href="https://go.dev/tour/flowcontrol/12" rel="nofollow">https://go.dev/tour/flowcontrol/12</a>), but they may have taken it from an even earlier language that I’m not aware of.<p>Defer has two advantages over try…finally: firstly, it doesn’t introduce a nesting level.<p>Secondly, if you write<p><pre><code> foo
defer revert_foo
</code></pre>
, when scanning the code, it’s easier to verify that you didn’t forget the <i>revert_foo</i> part than when there are many lines between <i>foo</i> and the <i>finally</i> block that calls <i>revert_foo</i>.<p>A disadvantage is that <i>defer</i> breaks the “statements are logically executed in source code order” convention. I think that’s more than worth it, though.
The oldest defer-like feature I can find reference to is the ON_BLOCK_EXIT macro from this article in the December 2000 issue of the <i>C/C++ Users Journal:</i><p><a href="https://jacobfilipp.com/DrDobbs/articles/CUJ/2000/cexp1812/alexandr/alexandr.htm" rel="nofollow">https://jacobfilipp.com/DrDobbs/articles/CUJ/2000/cexp1812/a...</a><p>A similar macro later (2006) made its way into Boost as BOOST_SCOPE_EXIT:<p><a href="https://www.boost.org/doc/libs/latest/libs/scope_exit/doc/html/index.html" rel="nofollow">https://www.boost.org/doc/libs/latest/libs/scope_exit/doc/ht...</a><p>I can't say for sure whether Go's creators took inspiration from these, but it wouldn't be surprising if they did.
Yeah, it's especially handy in UI code where you can have asynchronous operations but want to have a clear start/end indication in the UI:<p><pre><code> busy = true
Task {
defer { busy = false }
// do async stuff, possibly throwing exceptions and whatnot
}</code></pre>
I'll disagree here. I'd much rather have a Python-style context manager, even if it introduces a level of indentation, rather than have the sort of munged-up control flow that `defer` introduces.
I can see your point, but that (<a href="https://book.pythontips.com/en/latest/context_managers.html" rel="nofollow">https://book.pythontips.com/en/latest/context_managers.html</a>) requires the object you’re using to implement <i>__enter__</i> and <i>__exit__</i> (or, in C#, implement <i>IDisposable</i> (<a href="https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/statements/using" rel="nofollow">https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...</a>), in Java, implement <i>AutoCloseable</i> (<a href="https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html" rel="nofollow">https://docs.oracle.com/javase/tutorial/essential/exceptions...</a>); there likely are other languages providing something similar).<p>Defer is more flexible/requires less boilerplate to add callsite specific handling. For an example, see <a href="https://news.ycombinator.com/item?id=46410610">https://news.ycombinator.com/item?id=46410610</a>
I was contemplating what it would look like to provide this with a macro in Rust, and of course someone has already done it. It's syntactic sugar for the destructor/RAII approach.<p><a href="https://docs.rs/defer-rs/latest/defer_rs/" rel="nofollow">https://docs.rs/defer-rs/latest/defer_rs/</a>
<p><pre><code> #include <iostream>
#define RemParens_(VA) RemParens__(VA)
#define RemParens__(VA) RemParens___ VA
#define RemParens___(...) __VA_ARGS__
#define DoConcat_(A,B) DoConcat__(A,B)
#define DoConcat__(A,B) A##B
#define defer(BODY) struct DoConcat_(Defer,__LINE__) { ~DoConcat_(Defer,__LINE__)() { RemParens_(BODY) } } DoConcat_(_deferrer,__LINE__)
int main() {
{
defer(( std::cout << "Hello World" << std::endl; ));
std::cout << "This goes first" << std::endl;
}
}</code></pre>
Why would that be preferable to just using an RAII-style scope_exit with a lambda?
Meh, I was going to use the preprocessor for __LINE__ anyways (to avoid requiring a variable name) so I just made it an "old school lambda." Besides, scope_exit still lives in the Library Fundamentals TS (std::experimental::scope_exit) rather than the standard proper, so it's opt-in in most cases.
And here I thought we were trying to finally kill off pre-processor macros.
"We have syntax macros at home"
Calling arbitrary callbacks from a destructor is a bad idea. Sooner or later someone will violate the requirement about exceptions, and your program will be terminated immediately. So I'd only use this pattern in -fno-exceptions projects.<p>In a similar vein, care must be taken when calling arbitrary callbacks while iterating a data structure - because the callback may well change the data structure being iterated (classic example is a one-shot event handler that unsubscribes when called), which will break naïvely written code.
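For the iteration pitfall, a sketch with a made-up Event type (assumes C++20 for std::erase_if): dispatching from a snapshot is one common way to survive a handler that unsubscribes (or subscribes) while being invoked.<p><pre><code> #include &lt;functional&gt;
 #include &lt;utility&gt;
 #include &lt;vector&gt;

 // Made-up event type for illustration.
 class Event {
 public:
     using Handler = std::function&lt;void()&gt;;

     int subscribe(Handler h) {
         handlers_.emplace_back(next_id_, std::move(h));
         return next_id_++;
     }

     void unsubscribe(int id) {
         std::erase_if(handlers_, [id](const auto&amp; e) { return e.first == id; });
     }

     void fire() {
         // Dispatch from a snapshot: handlers may subscribe or unsubscribe (even
         // themselves) during the call without invalidating what we iterate.
         const auto snapshot = handlers_;
         for (const auto&amp; entry : snapshot) entry.second();
     }

 private:
     std::vector&lt;std::pair&lt;int, Handler&gt;&gt; handlers_;
     int next_id_ = 0;
 };
</code></pre>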
This is a good “how C++ does it” explanation, but I think it’s more accurate to say destructors implement finally-style cleanup in C++, not that they are finally. finally is about operation-scoped cleanup; destructors are about ownership. C++ just happens to use the same tool for both.
> In Java, Python, JavaScript, and C# an exception thrown from a finally block overwrites the original exception, and the original exception is lost.<p>Pet peeve of mine: all these languages got it wrong. (And C++ got it extra-wrong.)<p>The error you want to log or report to the user is almost certainly the <i>original</i> exception, not the one from the finally block. The error from the finally block is probably a side effect of the original exception. Reporting the finally exception obscures information about the root cause, making it harder to debug the problem.<p>Many of these languages do attach the original exception to the new exception in some way, so you can get at it if you need to, but whatever actually catches and logs the exception later has to go out of its way to make sure to log the root cause rather than some stupid side effect. The hierarchy should be reversed: the exception thrown by `finally` should be added as an attachment to the original exception, perhaps placed in a list of "secondary" errors. Or you could even just throw it away, honestly the original exception is almost always all you care about anyway.<p>(C++ of course did much worse by just crashing in this scenario. I imagine this to be the outcome of some debate in the committee where they couldn't decide which exception should take priority. And now everyone has internalized this terrible decision by saying "well, destructors shouldn't throw" without seeming to understand that this is equivalent to saying "destructors shouldn't have bugs". WELL OF COURSE THEY SHOULDN'T BUT GOOD LUCK WITH THAT.)
This part is not correct. I can't speak for the other languages, but in Python the exception that is originally thrown is the one that creates the traceback. If the finally block also throws an exception, then the traceback includes that as additional information. The author includes an addendum, yet he is still wrong about which exception is first raised.
I believe this might be slightly imprecise also.<p>The traceback is actually shown based on the last-thrown exception (that thrown from the finally in this example), but includes the previous "chained exceptions" and prints them first. From CPython docs [1]:<p>> When raising a new exception while another exception is already being handled, the new exception’s __context__ attribute is automatically set to the handled exception. An exception may be handled when an except or finally clause, or a with statement, is used. [...] The default traceback display code shows these chained exceptions in addition to the traceback for the exception itself. [...] In either case, the exception itself is always shown after any chained exceptions so that the final line of the traceback always shows the last exception that was raised.<p>So, in practice, you will see both tracebacks. However, if you, say, just catch the exception with a generic "except Exception" or whatever and log it without "__context__", you will miss the firstly thrown exception.<p>[1]: <a href="https://docs.python.org/3.14/library/exceptions.html#exception-context" rel="nofollow">https://docs.python.org/3.14/library/exceptions.html#excepti...</a>
In other words: Footgun #17421 Exhibit A.
I was hoping absl::Cleanup would get a shoutout. I worked hard to make it ergonomic and performant. For those looking for something (imo) better than the standard types, check it out!
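For anyone who hasn't seen it, roughly how it reads in use (from memory, so treat the details as approximate rather than canonical): the callback runs when the absl::Cleanup object goes out of scope, and std::move(cleanup).Cancel() dismisses it.<p><pre><code> #include &lt;cstdio&gt;

 #include "absl/cleanup/cleanup.h"

 void append_record(const char* path, const char* line) {
     std::FILE* f = std::fopen(path, "a");
     if (!f) return;
     absl::Cleanup closer = [f] { std::fclose(f); };  // runs on every exit path below

     if (std::fputs(line, f) == EOF) return;          // early return: closer still closes f
     std::fputs("\n", f);
     // If we wanted to hand 'f' off to someone else instead, we could dismiss the
     // cleanup with std::move(closer).Cancel().
 }                                                    // normal exit: closer closes f here
</code></pre>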
> Update: Adam Rosenfield points out that Python 3.2 now saves...<p>how old is this post that 3.2 is "now"?
I'm quite sure that an exception thrown in finally block in java will have the original as suppressed, not discarded
Who needs finally when we have goto?
The submitted title is missing the salient keyword <i>"finally"</i> that motivates the blog post. The actual subtitle Raymond Chen wrote is: <i>"C++ says “We have try…finally at home.”"</i><p>It's a snowclone based on the meme, <i>"Mom, can we get <X>? No, we have <X> at home."</i> : <a href="https://www.google.com/search?q=%22we+have+x+at+home%22+meme" rel="nofollow">https://www.google.com/search?q=%22we+have+x+at+home%22+meme</a><p>In other words, Raymond is saying... "We already have Java feature of 'finally' at home in the C++ refrigerator and it's called 'destructor'"<p>To continue the meme analogy, the kid's idea of <X> doesn't match mom's idea of <X> and disagrees that they're equivalent. E.g. "Mom, can we order pizza? No, we have leftover casserole in the fridge."<p>So some kids would complain that C++ destructors RAII philosophy require creating a whole "class X{public:~X()}" which is sometimes inconvenient so it doesn't exactly equal "finally".
> So some kids would complain that C++ destructors RAII philosophy require creating a whole "class X{public:~X()}" which is sometimes inconvenient so it doesn't exactly equal "finally".<p>Those figurative kids would be stuck in a mental model where they try to shoehorn their ${LanguageA} idioms onto applications written in ${LanguageB}. As the article says, C++ has destructors since the "C with Classes" days. Complaining that you might need to write a class is specious reasoning because if you have a resource worth managing, you already use RAII to manage it. And RAII is one of the most fundamental and defining features of C++.<p>It all boils down to whether one knows what they are doing, or even bothers to know what they are doing.
HN has some heuristics to reduce hyperbole in submissions which occasionally backfire amusingly.
Yeah it's a huge mistake IMO. I see it fucking up titles so frequently, and it flies in the face of the "do not editorialise titles" rule:<p><pre><code> [...] please use the original title, unless it is misleading or linkbait; don't editorialize.
</code></pre>
It is <i>much</i> worse, I think, to regularly drastically change the meaning of a title automatically until a moderator happens to notice to change it back, than to allow the occasional somewhat exaggerated original post title.<p>As it stands, the HN title suggests that Raymond thinks the C++ 'try' keyword is a poor imitation of some other language's 'try'. In reality, the post is about a way to mimic Java's 'finally' in C++, which the original title clearly (if humorously) encapsulates. Raymond's words have been misrepresented here for over 4 hours at this point. I do not understand how this is an acceptable trade-off.
Submissions with titles that undergo this treatment should get a separate screen where both titles are proposed, and the ultimate choice belongs to the submitter.
Personally, I would rather we have a lower bar for killing submissions quickly with maybe five or ten flags and less automated editorializing of titles.
While I disagree with you that it's "a huge mistake" (I think it works fine in 95% of cases), it strikes me that this sort of semantic textual substitution is a perfect task for an LLM. Why not just ask a cheap LLM to de-sensationalize any post which hits more than 50 points or so?
You can always contact hn@ycombinator.com to point out errors of this nature and have it corrected by one of the mods.
A better approach would be to not so aggressively modify headlines.<p>Relying on somebody to detect the error, email the mods (significant friction), and then hope the mods act (after discussion has already been skewed) is not really a great solution.
It has been up with the incorrect title for over 7 hours now. That's most of the Hacker News front-page lifecycle. The system for correcting bad automatic editorialisation clearly isn't working well enough.
It's rare to see the mangling heuristics improve a title these days. There was a specific type of clickbait title that was overused at the time, so a rule was created. And now that the original problem has passed, we're stuck with it.
You have a few minutes to change the title after the submission, I do it all the time.
I intentionally shortened the title because there is a length limit. Perhaps I didn't do it the right way because I was unfamiliar with the mentioned meme. Sorry about that.
I'm curious about the actual origin now, given that a quick search shows only vague references or claims it is recent, but this meme is present in Eddie Murphy's "Raw" from 1987, so it is at least that old.
Sounds like a perfect fit for some Deep Research.<p>Edit: A deep research run by Gemini 3.0 Pro says the origin is likely to be stand-up comedy routines between 1983–1987 and particularly mentions Eddie Murphy: the socioeconomic precursor "You ain't got no McDonald's money" in Delirious (1983), culminating in the meme in Raw (1987). So Eddie might very well be the original source.
That's why you shouldn't use memes in the titles of technical articles. The intelligibility of your intent is vastly reduced.