I have what I thought was a broad knowledge base of Rust and experience in it across many domains, but I haven't heard of most of those. I've been getting by with `&` and `&mut` only from those tables!<p>Incidentally, I think this is one of Rust's best features, and I sorely miss it in Python, JS and other languages. They keep me guessing whether a function will mutate the parent structure, or a local copy!<p>Incidentally, I recently posted in another thread here how I just discovered the 'named loop/scope' feature, and how I thought it was great, but took a while to discover. A reply was to the effect of "That's not new; it's a common feature". Maybe I don't really know Rust, but a dialect of it...
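For anyone outside Rust, what I mean is that mutation is visible both in the signature and at the call site (a minimal sketch, function names made up):

```rust
// Whether a function can mutate its argument is spelled out in the
// signature, and `&mut` must be repeated at the call site.
fn peek(v: &[i32]) -> usize {
    v.len() // shared borrow: read-only access
}

fn grow(v: &mut Vec<i32>) {
    v.push(42); // exclusive borrow: mutation is allowed
}

fn main() {
    let mut v = vec![1, 2, 3];
    assert_eq!(peek(&v), 3); // caller can see this cannot mutate v
    grow(&mut v);            // caller can see this may mutate v
    assert_eq!(v, vec![1, 2, 3, 42]);
}
```

In Python or JS the call sites would look identical either way, which is exactly the guessing game I mean.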
<i>> Incidentally, I recently posted in another thread here how I just discovered the 'named loop/scope' feature, and how I thought it was great, but took a while to discover. A reply was to the effect of "That's not new; it's a common feature". Maybe I don't really know Rust, but a dialect of it...</i><p>I assume I'm the one who taught you this, and for the edification of others, you can do labeled break not only in Rust, but also in C#, Java, and JavaScript. An even more powerful version of function-local labels and break/continue/goto is available in Go (yes, in Go!), and a yet more powerful version is in C and C++.<p>The point being, the existence of obscure features does not a large or complex language make, unless you're willing to call Go a large and complex language. By this metric, anyone who's never used a goto in Go is using a dialect of Go, which would be silly; just because you've never had cause to use a feature of a language does not a dialect make.
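For anyone who hasn't run into it, in Rust the feature looks like this (a small sketch; the search function is made up):

```rust
// Find the first pair (i, j) whose product exceeds a limit, exiting
// both loops at once with a labeled break.
fn first_pair_over(limit: i32) -> Option<(i32, i32)> {
    let mut found = None;
    'outer: for i in 1..10 {
        for j in 1..10 {
            if i * j > limit {
                found = Some((i, j));
                break 'outer; // exits the outer loop, not just the inner one
            }
        }
    }
    found
}

fn main() {
    assert_eq!(first_pair_over(20), Some((3, 7)));
    assert_eq!(first_pair_over(100), None); // 9 * 9 = 81 never exceeds 100
}
```

A plain `break` there would only leave the inner loop, forcing an extra flag to propagate the exit.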
Coming from 14 years of Perl, and dabbling in Perl 6, I don’t consider Rust a “large language”… but like Perl (and to an extent C++) I do find people craft their own dialects over time via osmosis.<p>And I don’t see anything bad about this!<p>After 11 years of full-time Rust, I have never needed to use Pin once, and it’s only when doing FFI that I’ve even had to reach for unsafe.<p>Unless you memorise the Rust Reference Manual and constantly level up with each release, you’ll never “know” the whole language… but IMHO this shouldn’t stop you from enjoying your small self-dialect - TMTOWTDI!
Wow, I had no idea JavaScript has labeled break! Thanks for the comment.
It's a terrible feature, really. If you need a labeled break what you really need is more decomposition. I'm pretty sure that Dijkstra would have written a nice article about it, alas, he is no longer with us.
I doubt he would; goto-heavy code really jumped all over the place in incomprehensible ways. Labelled break/goto is used in perhaps 1% of loops, when actually implementing some algorithm that would otherwise have needed extra flags, etc., and EVEN THEN it doesn't break the scoped readability.<p>There's a huge difference between reining in real-world chaos vs theoretical inelegancies (ESPECIALLY if fixing the latter would introduce other complexity to work around the lack of it).
It depends on how big or trivial the loops you need are. If they're only 3 or 4 lines, extracting their body into a function doesn't improve anything.
Dijkstra was wrong on this one. Breaks and continues help to keep the code more readable by reducing the amount of state that needs to be tracked.<p>Yes, from the purely theoretical standpoint, you can always rewrite the code to use flags inside the loop conditions. And it even allows formal analysis by treating this condition as a loop invariant.<p>But that's in theory. And we all know the difference between the theory and practice.
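A small Rust sketch of the difference (both functions are made up for illustration):

```rust
// Flag version: the loop condition carries extra state that the
// reader has to track alongside the index.
fn contains_flag(xs: &[i32], target: i32) -> bool {
    let mut found = false;
    let mut i = 0;
    while !found && i < xs.len() {
        if xs[i] == target {
            found = true;
        }
        i += 1;
    }
    found
}

// Break version: the early exit is written exactly where it happens.
fn contains_break(xs: &[i32], target: i32) -> bool {
    let mut found = false;
    for &x in xs {
        if x == target {
            found = true;
            break;
        }
    }
    found
}

fn main() {
    assert!(contains_flag(&[1, 2, 3], 2));
    assert!(contains_break(&[1, 2, 3], 2));
    assert!(!contains_break(&[1, 2, 3], 9));
}
```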
Dijkstra did not say anything about "break" and "continue".<p>Moreover, "continue" was invented only in 1974, as one of the few novel features of the C programming language, some years after the Dijkstra discussion.<p>Both simple "break" and "continue" are useful because, unlike "goto", they do not need labels, yet the flow of control caused by them is obvious.<p>Some languages have versions of "break" and "continue" that can break or continue multiple nested loops. In my opinion, unlike simple "break" and "continue", the multi-loop "break" and "continue" are worse than a restricted "goto" instruction. The reason is that they must use labels, exactly like "goto", but the labels are put at the beginning of a loop, far away from its end, which makes it difficult to follow the flow of control caused by them, as the programmer must first find where the loop begins, then search for its end. Therefore they are definitely worse than a "goto".<p>Instead of having multi-loop break and continue, it is better to have a restricted "goto", which is also useful for handling errors. Restricted "goto" means that it can jump only forwards, not backwards, and that it can only exit from blocks, not enter inside blocks.<p>These restrictions eliminate the problems caused by "goto" in the ancient non-structured programming languages, which were rightly criticized by Dijkstra.
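For what it's worth, Rust's labeled blocks (stable since 1.65) behave much like this restricted "goto": `break 'label` can only jump forwards, out of the enclosing block, never backwards or into a block (a sketch, with a made-up function):

```rust
// `break 'label value` exits a labeled block early, landing just past
// its closing brace: a forward-only, exit-only jump.
fn checked_div(a: i32, b: i32) -> Option<i32> {
    'calc: {
        if b == 0 {
            break 'calc None; // bail forward out of the block
        }
        Some(a / b)
    }
}

fn main() {
    assert_eq!(checked_div(10, 2), Some(5));
    assert_eq!(checked_div(1, 0), None);
}
```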
Many of the things like "&own" are ideas being discussed, they don't exist in the language yet. As far as I know only &, &mut and raw pointers (mut and const) exist in stable rust at this point. The standard library has some additional things like NonNull, Rc, etc.
I doubt that anybody truly knows Rust. And this is aggravated by the fact that features keep getting added. But here are two simple strategies that I've found very effective in keeping ahead of the curve.<p>1. Always keep the language reference with you. It's absolutely not a replacement for a good introductory textbook, but it's an unusually effective resource for anybody who has crossed that milestone. It's very effective at spontaneously uncovering new language features and at refining your understanding of the language semantics.<p>What you need to do is refer to it occasionally, even for constructs that you're familiar with - for loops, for example. I wish it was available as auto popups in code editors.<p>2. Use clippy, the linter. I don't have much to add here. Your code will work without it, but for some reason clippy is an impeccable tutor in idiomatic Rust coding. And you get the advantage of the fact that it stays in sync with the latest language features, so it's yet another way to keep yourself automatically updated.
I feel like other languages also have the issue of complexity and changing over time. I doubt I know all of C++ post-C++14, for example (even though that is my day job). Keeping up with all the things they throw into the standard library of Python is also near impossible unless you write Python every day.<p>Rust has an unusually short release cycle, but each release tends to have fewer things in it, so it probably comes out about the same as Python or C++ in new features per year.<p>But sure, C moves slower (and is smaller to begin with), if that is what you want to compare against. But all the languages I work with on a daily basis (C++, Python and Rust) are sprawling.<p>I don't have enough experience to speak about other languages in depth, but as I understand it Haskell, for example, has a lot of extensions. And the TypeScript/Node ecosystem seems to move crazy fast and require a ton of different moving pieces to get anything done (especially when it comes to the build system, with bundlers, minifiers and what not).
Languages should be small, not large. I find that every language I've ever used that tries to throw everything and the kitchen sink at you eventually deteriorates into a mess that spills over into the projects based on that language, in the form of long-term instability. You should be able to take a 10 year old codebase, compile it and run it. Backwards compatibility is an absolute non-negotiable for programming languages, and if you disagree with that you are building toys, not production grade systems.
I'm not sure what this is arguing against here. Anyone who follows Rust knows that it's relatively modest when it comes to adding new features; most of the "features" that get added to Rust are either new stdlib APIs or just streamlining existing features so that they're less restrictive/easier to use. And Rust has a fantastic backwards compatibility story.
I had C++, python and ruby in mind, but yes, GP also mentioned Rust in the list of 'sprawling' languages, and they are probably right about that: Rust started as a 'better C replacement' but now it is trying to dominate every space for every programming language (and - in my opinion - not being very successful because niche languages exist for a reason, it is much easier to specialize than to generalize).<p>I wasn't particularly commenting on Rust's backward compatibility story so if you're not sure what I was arguing about then why did you feel the need to defend Rust from accusations that weren't made in the first place?
Tbh I think `rust-toolchain` solves most of these issues
Egad, no. This is how you get C++, whose core tenet seems to be “someone used this once in 1994 so we can never change it”.<p>Even adding a new keyword will break some code out there that used it as a variable name or something. Perfect backward compatibility means you can never improve anything, ever, lest it cause someone a nonzero amount of porting effort.
No, you get C++ because you're Bjarne Stroustrup trying to get people to sign on to the C++ bandwagon (A better C! Where have I heard that before?), and so you add every feature they ask for in the hope that that will drive adoption. And you call it object oriented (even if it really isn't) because that's the buzzword du jour. Just like Async today.
I’ll accept that, too. But from the outside it seems like they do that by finding bizarre, previously invalid syntax and making that the way to spell the new feature. `foo[]#>££{23}` to say “use implicit parallelism on big endian machines with non-power-of-2 word sizes”? Let’s do it!
> Languages should be small, not large.<p>Yes. At the very least, features should carry a lot of weight and be orthogonal to other features. When I was young I used to pride myself on knowing all the ins and outs of modern C++, but over time I realized that needing to be a “language lawyer” was a design shortcoming.<p>All that being said I’ve never seen the functionality of Rust’s borrow checker reduced to a simpler set of orthogonal features and it’s not clear that’s even possible.
I suspect the problem is that every feature makes it possible for an entire class of algorithms to be implemented much more efficiently and/or clearly with a small extension to the language.<p>Many people encounter these algorithms after many other people have written large libraries and codebases. It’s much easier to slightly extend the language than start over or (if possible) implement the algorithm in an ugly way that uses existing features. But enough extensions (and glue to handle when they overlap) and even a language which was initially designed to be simple, is no longer.<p>e.g., Go used to be much simpler. But in particular, lack of generics kept coming up as a pain point in many projects. Now Go has generics, but arguably isn’t simple anymore.
If you want or have to build a large program, something must be large, be it the language, its standard library, third party code, or code you write.<p>I think it’s best if it is one of the first two, as that makes it easier to add third party code to your code, and will require less effort to bring newcomers up to speed w.r.t. the code. As an example, take strings. C doesn’t really have them as a basic type, so third party libraries all invent their own, requiring those using them to add glue code.<p>That’s why standard libraries and, to a lesser extent, languages, tend to grow.<p>Ideally that’s with backwards compatibility, but there’s a tension between moving fast and not making mistakes, so sometimes, errors are made, and APIs ‘have’ to be deprecated or removed.
It's a balance thing. You can't make a language without any features, but you can be too small ('Brainfuck') and you can definitely be too large ('C++'). There is a happy medium in there somewhere, and the lack of a string type was perceived as a major shortcoming of C, but then again, if you realize that they didn't even have structs in the predecessor to C (even though plenty of languages at the time did have similar constructs), they got enough of it right that it ended up taking off.<p>C and personal computing hit their stride at roughly the same time; your choices were (if you didn't feel like spending a fortune) Assembly, C, Pascal and BASIC for most systems that mere mortals could afford. BASIC was terribly slow, Pascal and C a good match, and assembler only for those with absolutely iron discipline. Which of the two won out (C or Pascal) was a toss-up; Pascal had its own quirks and it was mostly a matter of which of the two achieved critical mass first. Some people still swear by Pascal (and usually that makes them Delphi programmers, which will be around until the end, because the code for the heat-death of the universe was written in it).<p>For me it was Mark Williams C that clinched it: excellent documentation, good UNIX (and later POSIX) compatibility, and whatever I wrote on the ST could usually be easily ported to the PC. And once that critical mass took over there was really no looking back, it was C or bust. But mistakes were made, and we're paying the price for that in many ways. Ironically, C enabled the internet to come into existence and the internet then mercilessly exposed all of the inherent flaws in C.
Haskell's user-facing language gets compiled down to Haskell "core" which is what the language actually <i>can do</i>. So any new language feature has a check in with sanity when that first transformation gets written.
George Orwell showed us that small languages constrain our thinking.<p>A small language but with the ability to extend it (like Lisp) is probably the sweet spot, but lol look at what you have actually achieved - your own dialect that you have to reinvent for each project - one which other people have had to reinvent time after time.<p>Let languages and thought be large, but only use what is needed.
I can take anything I wrote in C since ~1982 or so and throw it at a modern C compiler and it will probably work; I may have to set some flags but that's about it. I won't have to hunt up a compiler from that era, so the codebase remains unchanged, which increases the chances that I'm not going to introduce new bugs (though the old ones will likely remain).<p>If I try the same with a Python project that I wrote less than five years ago I'm very, very lucky if I don't end up with a broken system by the time all of the conflicts are resolved. For a while we had Anaconda, which solved all of the pain points, but it too seems to suffer from dependency hell now.<p>George Orwell was a writer of English books, not a programmer, and whatever he showed us he definitely did not show us that small <i>programming</i> languages constrain our thinking. That's just a very strange link to make; programming languages are not easily compared with the languages that humans use.<p>What you could say is that a programming language's 'expressivity' is a major factor in how efficient it is at taking ideas and having them expressed in that language. If you take that to an extreme (APL) you end up with executable line noise. If you take it to the other extreme you end up with some of the worst of Java (widget factory factories). There are a lot of good choices to be found in the middle.
> Backwards compatibility is an absolute non-negotiable for programming languages<p>What programming language(s) satisfy this criterion, if any?
Rust does. You have editions to make breaking changes at the surface level, but that is per crate (library), and you can mix and match crates with different editions freely.<p>They do reserve the right to make breaking changes for security fixes, soundness fixes and inference changes (i.e. you may need to add an explicit type that was previously inferred but is now ambiguous). These are quite rare and usually quite small.
I'd normally agree that what you say is good enough in practice, but I question whether it meets GP's "absolute non-negotiable" standards. That specific wording is the reason I asked the question in the first place; it seemed to me that there was some standard that apparently wasn't being met and I was wondering where exactly the bar was.
Ada does. It has been through 5 editions so far and backwards compatibility is always maintained except for some small things that are documented and usually easy to update.
I'd normally be inclined to agree that minor things are probably good enough, but "absolute non-negotiable" is rather strong wording and I think small things technically violate a facial reading, at least.<p>On the other hand, I did find what I think are the relevant docs [0] while looking more into things, so I got to learn something!<p>[0]: <a href="https://docs.adacore.com/gnat_rm-docs/html/gnat_rm/gnat_rm/compatibility_and_porting_guide.html" rel="nofollow">https://docs.adacore.com/gnat_rm-docs/html/gnat_rm/gnat_rm/c...</a>
> except for some small things that are documented<p>I can't think of any established language that doesn't fit that exact criteria.<p>The last major language breakage I'm aware of was either the .Net 2 to 3 or Python 2 to 3 changes (not sure which came first). Otherwise, pretty much every language that makes a break will make it in a small fashion that's well documented.
Java rules here. You can take any Java 1.0 (1995) codebase and compile it as-is on a recent JDK. Moreover, you can also use any ancient compiled Java library and link it to a modern Java app. Java source and bytecode backward compatibility is fantastic.
* Terms and conditions apply<p>Java is very good here, but (and it's not totally its fault) it did expose internal APIs to the userbase, which have caused a decent amount of heartburn. If your old codebase has a route to `sun.misc.Unsafe` then you'll have more of a headache making an upgrade.<p>Anyone that's been around for a while and dealt with the 8->9 transition has been bit here. 11->17 wasn't without a few hiccups. 17->21 and 21->25 have been uneventful.
Java has had some breaking changes (e.g., [0, 1]), though in practice I have to say my experience tends to agree and I've been fortunate enough to never run into issues.<p>[0]: <a href="https://stackoverflow.com/q/1654923" rel="nofollow">https://stackoverflow.com/q/1654923</a><p>[1]: <a href="https://news.ycombinator.com/item?id=28542853">https://news.ycombinator.com/item?id=28542853</a>
Go, PHP, Ruby, JavaScript ... I'd say majority, actually.
It's probably borderline due to the opt-in mechanism, but Go did make a technically backwards-incompatible change to how its for loops work in 1.22 [0].<p>PHP has had breaking changes [1].<p>Ruby has had breaking changes [2] (at the very least under "Compatibility issues")<p>Not entirely sure whether this counts, but ECMAScript has had breaking changes [3].<p>[0]: <a href="https://go.dev/blog/loopvar-preview" rel="nofollow">https://go.dev/blog/loopvar-preview</a><p>[1]: <a href="https://www.php.net/manual/en/migration80.incompatible.php" rel="nofollow">https://www.php.net/manual/en/migration80.incompatible.php</a><p>[2]: <a href="https://www.ruby-lang.org/en/news/2025/12/25/ruby-4-0-0-released/" rel="nofollow">https://www.ruby-lang.org/en/news/2025/12/25/ruby-4-0-0-rele...</a><p>[3]: <a href="https://tc39.es/ecma262/2025/#sec-additions-and-changes-that-introduce-incompatibilities-with-prior-editions" rel="nofollow">https://tc39.es/ecma262/2025/#sec-additions-and-changes-that...</a>
The interesting thing about Go's loopvar change is that nobody was able to demonstrate any real-world code that it broke (*1), while several examples were found of real-world code (often tests) that it <i>fixed</i> (*2). Nevertheless, they gated it behind go.mod specifying a go version >= 1.22, which I personally think is overly conservative.<p>*1: A great many examples of synthetic code were contrived to argue against the change, but none of them ever corresponded to Go code anyone would actually write organically, and an extensive period of investigation turned up nothing<p>*2: As in, the original behavior of the code was actually incorrect, but this wasn't discovered until after the loopvar change caused e.g. some tests to fail, prompting manual review of the relevant code; as a tangent, this raises the question of how often tests just conform to the code rather than the other way around
You certainly won't find me arguing against that change, and the conservatism is why I called it borderline. The only reason I bring it up is because of the "absolute non-negotiable" bit, which I took to probably indicate a very exacting standard lest it include most widespread languages anyways.
Yes, I think it's also a good example of how "absolute" backwards compatibility is not necessarily a good thing. Not only was the old loopvar behavior probably the biggest noob trap in Go (*), it turned out not to be what <i>anyone</i> writing Go code in the wild actually wanted, even people experienced with the language. Everyone seems to have: a) assumed it always worked the way it does now, b) wrote code that wasn't sensitive to it in the first place, or c) worked around it but never benefitted from it.<p>*: strongest competitor for "biggest noob trap" IMO is using defer in a loop/thinking defer is block scoped
There is no such thing as perfection in the real world. Close enough is good enough.
Yes, most of them.<p>C# for instance isn't such a "small language" - it has grown - but code from older versions that does not use the newer features will almost always compile and work as before.<p>Breaking changes are for corner cases, e.g. <a href="https://github.com/dotnet/roslyn/blob/main/docs/compilers/CSharp/Compiler%20Breaking%20Changes%20-%20post%20DotNet%205.md" rel="nofollow">https://github.com/dotnet/roslyn/blob/main/docs/compilers/CS...</a>
Even C, we are now at C23, and I bet most folks only know "my compiler C", and not even all the extensions it offers.
I don't know Rust at all, but all your comments<p>> I doubt that anybody truly knows <language>.<p>> Always keep the language reference with you.<p>> Use <tool>, the linter.<p>seem like they apply to <i>all</i> languages (and I agree that they're great advice!).
Of that table, only & and &mut actually exist, the rest are hypothetical syntax
I'm just learning Rust but so far, it looks like the author is proposing some of these ref types, like &own and &uninit.<p>I don't know 100% for sure. It's a bit confusing...
The part of the blog post where it says<p>> What’s with all these new reference types?
> All of these are speculative ideas<p>makes it pretty clear to me that they are indeed not yet part of Rust but instead something people have been thinking about adding. The rest of the post discusses how these would work if they were implemented.
Right. The &pin, &own, and &uninit in the article (or rather everything except & and &mut in that table) do not exist in Rust.<p>I have seen &pin being proposed recently [1], first time I'm seeing the others.<p>[1] <a href="https://blog.rust-lang.org/2025/11/19/project-goals-update-october-2025/" rel="nofollow">https://blog.rust-lang.org/2025/11/19/project-goals-update-o...</a>
> All of these are speculative ideas, but at this point they’ve been circulating a bunch so should be pretty robust.
For your problem, i.e. "guessing whether a function will mutate the parent structure", the solution used by Rust is far from optimal and I actually consider it one of the ugliest parts of Rust.<p>The correct solution for the semantics of function parameters is the one described in the "DoD requirements for high order computer programming languages: IRONMAN" (January 1977, revised in July 1977), which was implemented in the language Ada and in a few other languages inspired by Ada.<p>According to "IRONMAN", the formal parameters of a function must be designated as belonging to one of 3 classes: input parameters, output parameters and input-output parameters.<p>This completely defines the behavior of the parameters, without constraining the implementation in any way, i.e. any kind of parameter may be passed by value or by reference, whichever the compiler chooses for each individual case. (An input-output parameter that the compiler chooses to pass by value will be copied twice, which can still be better than passing it by reference, e.g. when the parameter is passed in a register.)<p>When a programming language of the 21st century still requires the programmer to specify whether a parameter is passed by value or by reference, that is a serious defect of the language, because in general the programmer does not have the information needed to make a correct choice, and this is an implementation detail with which the programmer should not be burdened.<p>The fact that C++ lacks this tripartite classification of the formal parameters of a function has led to the ugliest complications of C++, which were invented as bad workarounds for this defect, i.e. the fact that constructors are not normal functions, the fact that there exist several kinds of superfluous constructors which would not have been needed otherwise (e.g. the copy constructor), and the fact that C++11 had to add features like "move" semantics to fix performance problems of the original C++. (The problems of C++ are avoided when you are able to differentiate "out" parameters from "inout" parameters, because in the former case the function parameter uses a "raw" area of memory with initially invalid content, where an object will be "constructed" inside the function, while in the latter case the function receives an area of memory that has already been "constructed", i.e. which holds a valid object. In C++ only "constructors" can have a raw memory area as parameter, not normal functions.)
I suspect this is fairly common. The reality is most developers don't need a lot of the features a language provides.<p>Python for example has a ton of stuff that can be done with classes using dunder methods and other magic. I'm aware of it, but in all the years I've been writing Python I've never actually needed it. The only time I've had to directly interact with it was when trying to figure out how the fuck OpenAPI generates FastAPI server code. Which is fairly deep into a framework and code generation tool.
Rust gives you no guarantees that a function won't allocate or panic though.
Yes, that is annoying, but I don't know of any mainstream systems language that does. C and C++ can also have allocations anywhere, and C++ has exceptions. And those are really the only competitors to Rust for what I do (hard realtime embedded).<p>Zig might be an option in the future, and it does give more control over allocations. I don't know what the exception story is there, and it isn't memory safe and doesn't have RAII, so I'm not that interested myself at this point.<p>I guess Ada could be an option too, but I don't know nearly enough about it to say much.
Zig doesn't have exceptions, it has error unions, so basically functions return either a value or an error code and the caller is forced by the language to note which was returned. And instead of RAII it has defer ... which of course can easily be forgotten or mis-scoped, so it's not safe.
>"Yes that is annoying, but I don't know of any mainstream systems language that does. C and C++ can also have allocations anywhere, and C++ have exceptions."<p>C++ has a way to tell to compiler that the function would raise no exceptions. Obviously it is not a guarantee that at runtime exception will not happen. In that case the program would just terminate. So it is up to a programmer to turn on some brain activity to decide should they mark function as one or not.
Tbh I’ve found that whenever I’ve hit MAX RAM, failed allocations are not the biggest problem you should be focusing on at that time.<p>Sure, it would be nice to get an error, but usually the biggest threat to your system as a whole is the unapologetic OOM killer.
While this is 100% true for the system allocator - hitting OOM there, you're likely hosed - it isn't true if you're using arenas. I work on games, and being able to respond to OOM is important, as in many places I'm allocating from arenas that it is very possible to exhaust under normal conditions.
For allocation, Zig and Odin. Zig is explicit and Odin is implicit.
> Zig is explicit<p>I never got this point. What's stopping me from writing a function like this in Zig?<p><pre><code> fn very_bad_func() !i32 {
var gpa_state = std.heap.GeneralPurposeAllocator(.{}){};
const gpa = gpa_state.allocator();
const s = try gpa.alloc(i32, 1000); // never freed
s[0] = 7;
return s[0];
}
</code></pre>
The only thing explicit about the Zig approach is having ready-to-use allocator definitions in the std library. If you excluded the std library and wrote your own allocators, you could have an even better API in Rust compared to Zig, thanks to actual shared-behaviour features (traits).
explicit allocation is a library feature, not a language feature.
The explicit part is that Zig forces you to import an allocator of your choosing, whereas Odin has the allocator passed as part of a hidden context, and you can change/access it only if you want to. Hence explicit behavior vs implicit behavior.<p>I use neither of those languages, so don't ask me for technical details :D
You can require allocations in Odin to be explicit using `#+vet explicit-allocators`
This is something I do wish Rust could better support. A `#![no_std]` library crate can at least discourage allocation (although it can always `extern crate alloc;` in lib.rs or invoke malloc via FFI...)
Is the juice worth the squeeze to introduce two new function colors? What would you do if you needed to call `unreachable!()`?<p>It's a shame that you can't quite do this with a lint, because they can't recurse to check the definitions of functions you call. That would seem to me to be ideal, maintain it as an application-level discipline so as not to complicate the base language, but automate it.
> Is the juice worth the squeeze to introduce two new function colors?<p>Typically no... which is another way of saying occasionally yes.<p>> What would you do if you needed to call `unreachable!()`?<p>Probably one of e.g.:<p><pre><code> unsafe { core::hint::unreachable_unchecked() }
loop {}
</code></pre>
Which are of course the wrong habits to form! (More seriously: in the contexts where such no-panic colors become useful, it's because you need to <i>not</i> call `unreachable!()`.)<p>> It's a shame that you can't quite do this with a lint, because they can't recurse to check the definitions of functions you call. That would seem to me to be ideal, maintain it as an application-level discipline so as not to complicate the base language, but automate it.<p>Indeed. You can mark a crate e.g. #![deny(clippy::panic)] and isolate that way, but it's not quite the rock solid guarantees Rust typically spoils us with.
> Typically no... which is another way of saying occasionally yes.<p>You might be able to avoid generating panic handling landing pads if you know that a function does not call panic (transitively). Inlining and LTO often help, but there is no guarantee that it will be possible to elide, it depends on the whims of the optimiser.<p>Knowing that panicking doesn't happen can also enable other optimisations that wouldn't have been correct if a panic were to happen.<p>All of that is <i>usually</i> very minor, but in a hot loop it could matter, and it will help with code size and density.<p>(Note that this is assuming SysV ABI as used by everyone except Windows, I have no clue how SEH exceptions on Windows work.)<p>> Indeed. You can mark a crate e.g. #![deny(clippy::panic)] and isolate that way, but it's not quite the rock solid guarantees Rust typically spoils us with.<p>Also, there are many things in Rust which can panic apart from actual calls to panic or unwrap: indexing out of bounds, integer overflow (in debug), various std functions if misused, ...
Another one that is missing from the article is &raw mut/const, but it is purely for unsafe usage, when you need a pointer to an unaligned field of a struct.
&raw T/&raw mut T aren't pointer types, they're syntax for creating *const T/*mut T.<p>These aren't included in the article because they are not borrow checked, but you're right that if someone was trying to cover 100% of pointer types in Rust, raw pointers would be missing.
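To illustrate the unaligned-field case (a sketch; `&raw` has been stable syntax since Rust 1.82, and the struct here is made up):

```rust
// In a packed struct, fields may be unaligned, so taking a normal
// reference `&p.b` is rejected by the compiler. `&raw const` creates
// a raw pointer directly, with no alignment obligation.
#[repr(packed)]
struct Packed {
    a: u8,
    b: u32, // sits at offset 1: unaligned for a u32
}

fn main() {
    let p = Packed { a: 1, b: 42 };
    let ptr: *const u32 = &raw const p.b; // safe to create
    // Reading through it must account for the misalignment.
    let value = unsafe { ptr.read_unaligned() };
    assert_eq!(value, 42);
}
```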
>"...and other languages"<p>Many "other languages", particularly ones that compile to native code in the traditional way, have fairly explicit ways of specifying how said parameters are to be treated
TypeScript has `Readonly<T>` for this purpose.
`Readonly<T>` in TypeScript is almost useless, unsound and completely unsafe (as are most other things in TypeScript), and in no way equivalent to the Rust affine type system.<p>In particular, Readonly only prevents writing to the <i>immediate</i> fields of the object, but doing eg `const x: Readonly<X>; x.a.b = ...` is completely fine (ie. nested mutability is allowed).
If you want transitive immutability, you need a type-level function (such as `ReadonlyDeep` from `type-fest`), but then that gives terrible error messages.<p>Also due to the bivariance of the TypeScript type system, using Readonly in combination with generics can silently and automatically cast it away, making it largely pointless for actual safety...
> I sorely miss it in Python, JS and other languages. They keep me guessing whether a function will mutate the parent structure, or a local copy in those languages!<p>Python at least is very clear about this ... everything, lists, class instances, dicts, tuples, strings, ints, floats ... are all passed by object reference. (Of course it's not relevant for tuples and scalars, which are immutable.)