Because the most important parts of expertise come from their internal "world model" and are inseparable from it.<p>An average unaware person believes that anything can be put into words, and that once the words are said, they mean to the reader what the speaker meant; the only difficulty could come from not knowing the words or misreading ambiguities. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.<p>Factual knowledge can be transferred via words well; that's why there is always at least partial success at communicating expertise. But the solidified, interconnected world model of what all your knowledge adds up to cannot be. AI can blow you out of the water at knowing more facts, but it doesn't yet use them in a way that, surprisingly often, yields surprisingly correct insights into what the missing knowledge probably is. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise.<p>Communicating expertise is a hint about where to go and what to learn. The reader still needs to put in the effort to internalize it, and they need to have the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.
A non-trivial part of the big difference between the juniors that seem talented and "get it", and those that don't, is precisely their ability to form accurate enough world models quickly. You can tell who is getting at the "physics" of software and applying it, and who is just writing down recipes without trying to understand the nature of any of the steps.<p>It's especially noticeable when teaching functional programming to people trained in OO: some people's model just breaks, while others quickly see the similarities, and how one can translate from a world of vars to a world of monads with relative ease. The bones of how computation works aren't changing, just how one puts together the pieces.
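A rough TypeScript sketch of the translation I mean (names are made up for illustration): the same computation, first in the world of vars, then as a fold with no mutable state.<p><pre><code>  // Imperative, "world of vars": accumulate by mutation
  function totalImperative(prices: number[]): number {
    let total = 0;
    for (const p of prices) {
      total += p;
    }
    return total;
  }

  // Functional: the same computation as a fold; the accumulator
  // is threaded through explicitly instead of mutated in place
  const totalFunctional = (prices: number[]): number =>
    prices.reduce((acc, p) => acc + p, 0);
</code></pre>The pieces (a starting value, a step, a sequence) are identical; only how they are put together changes.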
Even as a junior I was the kind who tried to understand the nature of the steps. I failed many times, but I learned from every failure. I remember my mutable public static variables and my terrible small JavaScript apps. But every time I did something like that, I tried to understand it. I knew that I had failed. Sometimes it took me a year or more (like when I first encountered React about a decade ago and immediately understood why the architecture of some of my earlier apps had failed).<p>However, I've seen developers who have been in this field for decades and still just follow recipes without understanding them.<p>So I'm not entirely sure that the distinction is this clear. But of course, it depends on how we define "senior". "Senior" could mean developers who try to understand the underlying reasons and have been coding for a while. But companies seem to disagree.<p>Btw, regarding functional programming: when I first coded in Haskell, I remember that I coded in it like in a standard imperative language. Funnily, nowadays it's the opposite: when I code in imperative languages, it looks like functional programming. I don't know when my mental model switched. But one thing is for sure: when I refactor something, my first todo is to make the data flow as "functional" as possible, then do the real refactoring. It helps a lot to prevent bugs.<p>What really broke my mind was Prolog. It took me a long time to be able to do anything beyond simple Hello World level things, at least compared to Haskell, for example.
I had to learn Prolog for a university paper and I have to agree; out of the dozen-ish languages I've had to learn, something just didn't "click" with Prolog.<p>No real value in this comment, I'm just happy to share a moment over the brain-fuck that is Prolog (ironically, Brainfuck made a whole lot more sense).
I wouldn't really try to equate arbitrary job titles awarded based on tenure with actual expertise; titles aren't consistently applied across the industry, and are often awarded on conditions other than actual merit.
There are a lot of very young developers with fewer years of experience than me who have tons more expertise than me.<p>The problem is, as is evident from this article and thread, that it's difficult to measure (and thus communicate) expertise, but it's really easy to measure years of experience.
This happened at an old employer of mine. We started to go down the FP road, veering off the standard OOP of the day. About 25% of the people picked it up immediately. About 50% got it well enough. And 25% just thought it was arcane wizardry.<p>Between that latter group and the bottom portion of the middle, it sparked a big culture war, eventually leading to leadership declaring that FP was arcane wizardry and should be eradicated.
I've always had excellent model building functionality for abstractions and got the "physics" of a subject rather quickly, be it economics, biology, certain mathematical subjects and more.<p>Then, I met software and computer science abstractions, they all seemed so arbitrary to me, I often didn't even understand what the recipe was supposed to cook. And though I have gotten better over time (and can now write good solutions in certain domains), to this day I did not develop a "physics" level understanding of software or computer science.<p>It feels really strange and messes with your sense of intelligence. Wondering if anyone here has a similar experience and was able to resolve it.
your "physics" grounding is exactly why it feels so odd - software is by its nature anti-physicalist<p>math and logic are closer to a basis for software abstraction - but they were scary to business people so a "fake language" was invented atop them - you have "objects" that don't actually exist as objects, they are just "type based dispatch/selection mechanism for functions", "classes" that are firstly "producers of things and holders of common implementation" and only secondarily also work to "group together classes of objects"
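As a hedged TypeScript sketch of that "type based dispatch" view (the Shape type is invented for illustration), a method call is really just a function selected by the value's type tag:<p><pre><code>  type Shape =
    | { kind: "circle"; radius: number }
    | { kind: "square"; side: number };

  // What "shape.area()" is underneath: a function picked by the
  // type tag, not an intrinsic property of an "object"
  function area(s: Shape): number {
    switch (s.kind) {
      case "circle": return Math.PI * s.radius ** 2;
      case "square": return s.side ** 2;
    }
  }
</code></pre>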
I feel that is a bit of a false history. OOP was invented by people trying to simulate physical systems, e.g. Stroustrup, the Simula people, and their contemporaries, not business people. Arguably it was popularized later by business people and enterprise Java developers, but that happened way later.<p>I do not think OOP ever really worked out well, as evidenced by it no longer being as popular and by people having almost entirely abandoned "Cat > Animal > Object" inheritance hierarchies.
I have the opposite experience. Goes to show the difference between people.<p>I've always had trouble internalizing the "physics" of physics or chemistry, as if it were all super arbitrary and there was no order to it.<p>Computation and maths on the other hand just click with me. Philosophy as well btw.<p>I guess I deal better with handling completely abstract information and processes and when they clash with the real world I have a harder time reconciling.
>teaching functional programming to people trained in OO: Some people's model just breaks, while others quickly see the similarities, and how one can translate from a world of vars to a world of monads with relative ease.<p>Besides OO -> Functional, this applies everywhere else in computer science. If you understand the fundamentals, no new framework, language or paradigm can shock you. The similarities are clear once you have a fitting world model.
Indeed. Understand the principles and you can work with just about any tool.
This resonates. Tips on how to build this skill?
Put yourself in a position where it is your problem/responsibility, where you cannot depend on another to do it for you. You'll be learning every day.
Fail, and try to understand why. Don't be quick with the answer. Sometimes it takes years. But it's crucial to want to improve, and to recognize when the answer is in front of you.<p>Read why programming languages have the structures they have. Challenge them. They are full of mistakes. One infamous example is the "final" keyword in Java. Or, for example, Python's list comprehension. There are better solutions to these. Be annoyed by them, and search for solutions. Read also about why these mistakes were made. Figure out your own version which doesn't have any of the known mistakes and problems.<p>The same goes for "principles" or rules of thumb. Read about the reasons, and break them when the reasons cannot be applied.<p>And use a ton of programming languages and frameworks. Not just at Hello World level; really dig deep into them for months. Reach their limits, and ask why those limits are there. As you encounter more and more, you will be able to reach those limits quicker and quicker.<p>One very good language for this, I think, is TypeScript. Compared to most other languages, its type inference is magic. Ask why. The good thing is that its documentation explains why other languages cannot do the same. Its inference routinely breaks with edge cases, and they are well documented.<p>Also, Effective C++ and Effective Modern C++ were eye openers for me more than a decade ago. I can recommend them for these purposes. They definitely helped me to lose my "junior" flavor. They explain the reasons quite well, as far as I remember.
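To make the inference point concrete, a small sketch (where exactly inference gives up is spelled out in the TypeScript handbook):<p><pre><code>  // Inference that feels like magic: element and return types
  // flow through with no annotations at all
  const doubled = [1, 2, 3].map(n => n * 2);  // inferred: number[]

  // An edge where it breaks: with no surrounding context, a lone
  // parameter is not inferred from its usage (TypeScript doesn't
  // do full Hindley-Milner unification the way Haskell does)
  const double = x => x * 2;  // error under noImplicitAny
</code></pre>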
Not who you replied to, but: practice. Deliberate practice; not just writing the same apps over and over, but challenging yourself with new projects. Build things from scratch, from documentation or standards alone. Force yourself to understand all the little details of one specific problem.
By complete coincidence, yesterday I came across this link to an article Peter Naur wrote in 1985 (<a href="https://pages.cs.wisc.edu/~remzi/Naur.pdf" rel="nofollow">https://pages.cs.wisc.edu/~remzi/Naur.pdf</a>) which I haven't been able to stop thinking about.<p>I've been doing this for coming up on thirty years now, mostly at one large company, and I spend a significant number of hours every week fielding questions from people who are newer at it and having trouble with one thing or another. Often I can tell immediately from the question that the root of the problem is that their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem. Often they will complain that documentation is inadequate or missing, or that we don't do it the way everyone else does, or whatever, and there's almost always some truth to that.<p>The challenge then is to find a way to translate your own theory of whatever the thing is into some kind of symbolic representation, usually some combination of text and diagrams which, shown to a person of reasonable experience and intelligence, would conjure up a mental model in the reader similar to your own. In other words, you want to install your theory into the mind of another person.<p>A theory of the type Naur describes can't be transplanted directly, but I think my job as a senior developer is to draw upon my experience, whether it was in the lecture hall or on the job, to figure out a way of reproducing those theories. That's one of the reasons why communication skills are so critical, but it's not just that; a person also needs to experience this process of receiving a theory of operation from another person many times over to develop instincts about how to do it effectively. Then we have to refine those intuitions into repeatable processes, whether it's writing documents, holding classes, etc.<p>This has become the most rewarding part of my work, and a large part of why I'm not eager to retire yet, as long as I feel I'm performing this function in a meaningful way. I still have a great deal to learn about it, but I think Naur's conception of what is actually going on here makes a lot clearer the role that senior engineers can play in the long-term function of software companies, if it's something they enjoy doing.
Isn't that interesting? The job of exploring a theory or model to such an extent that it can be expressed in computer code always seems to fall on the shoulders of a software developer. Other people can write specifications and requirements all day long, but until a software developer has tackled the problem, the theory probably hasn't been explored well enough yet to express clearly in computer code. It feels like software developers are scientists who study their customers' knowledge domains.
> It feels like software developers are scientists who study their customers' knowledge domains.<p>I agree so much with this. It's why I feel so stifled when an e.g. product manager tries to insulate and isolate me from the people who I'm trying to serve -- you (or a collective of yous) need to have access to both expertise in the domain you're serving, and expertise in the method of service, in order to develop an appropriate and satisfactory solution. Unnecessary games of telephone make it much harder for anyone to build an internal theory of the domain, which is absolutely essential for applying your engineering skills appropriately.
> so stifled when an e.g. product manager<p>Another facet of this is my annoyance at <i>other developers</i> when they are persistently incurious about the domain. (Thankfully, this has not been too common.)<p>I don't just mean when there are tight deadlines, or when there's a customer-from-heck who insists they always know best, but as their default mode of operation. I imagine it's like a gardener who cares only about the catalogue of tools, and just wants the bare-minimum knowledge to deal with any particular set of green thingies in the dirt.
This might be an indicator that the PM isn't doing their job; the PM should be able to answer your questions regarding what the business wants (= the people who you're trying to serve). Developers, by the nature of interacting with the domain, do become experts in it, but really it should be up to the PM what the domain should be doing business-wise.
If that is what a PM needs to be, then there aren't enough good PMs to warrant a PM role for most products, so just have software engineers do that in most cases.<p>Edit: The main role of a PM is to decide which features to build, not how those features should be built or how they should work. Someone has to decide what to build, and that is the PM, but most PMs are not very good at figuring out the best way for those features to work, so it's better if the programmers can talk to users directly there. Of course a PM could do that work if they are skilled at it, but most PMs won't be.
> not [...] how they should work<p>So that we're on the same page, here is what I think should be PM responsibilities:<p>If I have a user story, "As a customer I want to purchase a product so that I can receive it at my address", the PM defines this user story, as they have the insight to decide whether such a feature is needed.<p>The PM should then define acceptance criteria: "Given customer is logged in When they view Product page Then 'Add product to basket' button should appear", "Given 'Add product to basket' button When customers click on it Then Product information modal should appear", etc. The PM should know what users actually want, i.e. whether modals should appear or not, whether this feature should be available for logged-in users only, or not.<p>How this will work shouldn't matter to the PM; these are the AC they've defined.<p>Of course, the process of defining AC should involve developers (and QA), because the AC should be exhaustive enough to deliver the given feature.
This is why at my current place we are not supposed to do any dev without an SME on the call. We do the development and share the screen and get immediate feedback as we are working in real time! It's great.
Agree 100%.<p>Even the most verbose specifications too often have glaring ambiguities that are only found during implementation (or worse, interoperability testing!)
In theory, it's the same as in practice.<p>In practice, it isn't.
Sorry this is just the interior trapped nonsense that engineers find themselves in. Please touch grass<p>Product designers have to intuit the entire world model of the customer. Product managers have to intuit the business model that bridges both. And on and on.<p>Why do engineers constantly have these laughably mind blowing moments where they think they are the center of the universe.
I agree so much with both of you, to the point that it's difficult to avoid cognitive dissonance one way or the other.<p>Software people do what they do better than anyone else. I mean, obviously! Just listening to a non-software person discuss software is embarrassing. As it should be.<p>There's something close to mathematics that SWEs do, and yet it's so much more useful and economically relevant than mathematics, and I believe that's the bulk of how the "center of the universe" mindset develops. But they don't care that they're outclassed by mathematicians in matters of abstract reasoning, because they're <i>doers and builders</i>, and they don't care that they're outclassed by people in effective but less intellectual careers, because they're <i>decoding the fundamental invariants of the universe</i>.<p>I don't know. I guess I care so much because I can feel myself infected by the same arrogance when I finally succeed in getting my silicon golems to carry out my whims. It's exhilarating.
We keep seeing things like cryptic error messages shown to end users simply because of the disconnect between the programmer and the end user.<p>If the programmer got to intimately understand the user's experience, software would be easier to use. That's why I support the idea of engineers taking support calls on rotation to understand the user.<p>Both can be true at the same time: a product manager who retains the big picture of the business and product, and engineers who understand the tiny but important details of how the product is being used.<p>If there were indeed perfect product managers, there would be no need for product support.
You seem to be assuming a certain org structure with very clear, specialized roles. Many teams do not have this, and engineers are already Product Engineers. It sometimes even makes sense (whenever engineers dogfood their product, in startups, or if it is a product targeting other engineers) and is not just a budget/capacity issue.<p>Similarly, by siloing the world model in one or two heads, you prevent the team dynamics from contributing to a better solution: e.g. a product manager/designer might think the right solution is an "offline mode" for a privacy need without communicating the need, the engineering side might decide to build it with an eventual consistency model — sync-when-reconnected — as that might be easier in the incumbent architecture, and the whole privacy angle goes out the window. As with everything, assuming non-perfection from anyone leads to better outcomes.<p>Finally, many software engineers are the creative type who like solving customer problems in innovative ways, and taking that away in a very specialized org actually demotivates them. Many have worked in environments where this was not just accepted but appreciated, and I've seen it lead to better products built _faster_.
Regarding the tension between symbolic representation and Naur "theory", I'd actually say they come from two different traditions, each providing two different theses. When writing them out I think it becomes a bit clearer how they interact and that they're not actually contradictory.<p>Thesis A is something like: the value of the programmer comes from their practical ability to keep developing the codebase. This ability is specific to the codebase. It can only be obtained through practice with that codebase, and can't be transferred through artefacts, for the same reason you can't learn to play tennis by reading about it (a "Mary's Room" argument).<p>This ability is what Naur calls "theory". I think the term is a bit confusing (to me, the word is associated with "theoretical" and therefore to things that can be written down). I feel like in modern discourse we would usually refer to this as a "mental model", a "capability", or "tacit knowledge".<p>Then there's Thesis B, which comes more from a DDD lineage, and which is something like: the development of a codebase requires accumulation of specific insights, specific clarifying perspectives about problem-domain knowledge. The ability for programmers to build understanding is tied to how well these insights are expressed as artefacts (codebase structure, documentation, communication documents).<p>I feel like some disagreements in SWE discourse come from not balancing these two perspectives. They're actually not contradictory at all and the result of them is pretty common-sensical. Thesis A explains the actual mechanism for Thesis B, which is that providing scaffolding for someone learning the codebase obviously helps, and vice-versa, because the learned mental model is an internally structured representation that can, with work, be externalised (this work is what "communication skills" are).
It's interesting that the way you describe it, the world model itself is _not_ just a collection of words in our minds, and I have a small theory of my own that "thoughts" in our brains aren't actually words at all (otherwise animals which don't talk wouldn't be able to make complex decisions?), and the words that we "hear" in our heads and which we perceive as our thoughts are just a rough translation of these thoughts into words, they aren't thoughts themselves. It is also why it's sometimes really hard to put complex (but correct) thoughts into words, and especially hard to adequately compare complex ideas during a regular conversation: on the surface a lot of ideas (especially in software engineering) "sound" good, but they're actually terrible. And there's no better way to communicate ideas than to put them into words, which is probably what makes good software engineering extremely difficult.<p>Or maybe I'm just a little bit insane. Or both.
Obligatory link to a great podcast that has a great episode covering this paper: <a href="https://pca.st/episode/dfc024c8-31f8-4387-b301-7a4f77132b74" rel="nofollow">https://pca.st/episode/dfc024c8-31f8-4387-b301-7a4f77132b74</a><p>Everyone should subscribe to the Future of Coding (recently renamed to the Feeling of Computing) podcast if you haven't already: <a href="https://feelingof.com/" rel="nofollow">https://feelingof.com/</a>
I keep saying this is the single most important article to consider when talking about AI-assisted software building. Everyone should read it. The question should always be: is a human building a theory of the software, or does only the AI understand it? If it's the latter, it is certainly slop.<p>(Second, albeit more theoretical, would be A Critique of Cybernetics by Jonas.)
>their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem<p>Of course the model is incomplete compared to reality. That's in the definition of a model, isn't it? And what is deemed a problem from one perspective might be conceived as a non-problem in another, and be unrepresentable in yet another.
I think that this is actually a good thing. If everyone had the same internal world model, we would have very little innovation.<p>I try to train and mentor those that are junior to me. I try to show them what is possible, and patterns that result in failure. This training is often piecemeal and incomplete. As much as I can, I communicate why I do the things I do, but there are very few things I tell them not to do.<p>I am often surprised at the way people I have trained solve problems, and frequently I learn things myself.<p>Training is less successful for those who aren’t interested in their own contributions, and who view the job only as a means to get paid. I am not saying those people are wrong to think that way, but building a world view of work based on disinterest isn’t going to let people internalize training.
I agree. It's pretty easy to train based on facts, and even experiences. And learners can often take things in unexpected directions.<p>I think it becomes difficult to train the next layer up though, which is a sum total of life experience. And I think this is what the parent poster was referring to.<p>For example, I read a lot of Agatha Christie growing up. At school I participated in problem-solving groups, focusing on ways to "think" about problems. And I read Mark Clifton's "Eight Keys to Eden".<p>All of that means I approach bug-fixing in a specific mental way. I approach it less as "where is the bug" and more like "how would I get this effect if I were trying to do it". It's part detective novel, part change in perspective, part logical progression.<p>So yes, training is good, and I agree that it needs to be done. But I can not really teach "the way I think". That's the product of a misspent youth, life experience, and ingrained mental patterns.
Yeah, you can't get it out in "one session of conversation", but you definitely can under a different... context.<p>"Seeing the work reveals what matters. Even if the master were a good teacher, apprenticeship in the context of on-going work is the most effective way to learn. People are not aware of everything they do. Each step of doing a task reminds them of the next step; each action taken reminds them of the last time they had to take such an action and what happened then. Some actions are the result of years of experience and have subtle reasons; other actions are habit and no longer have a good justification. Nobody can talk better about what they do and why they do it than they can while in the middle of doing it."
> An average unaware person believes that anything can be put in words and once the words are said, they mean to reader what the sayer meant, and the only difficulty could come from not knowing the words or mistaking ambiguities.<p>"Transmissionism" is a term I've seen to describe this<p><a href="https://andymatuschak.org/books/" rel="nofollow">https://andymatuschak.org/books/</a>
<a href="https://en.wikipedia.org/wiki/Tacit_knowledge" rel="nofollow">https://en.wikipedia.org/wiki/Tacit_knowledge</a>
this is why I only communicate in poetry<p><i>complexity is</i><p><i>not what you believe it is</i><p><i>please try listening</i>
So cool. One reading is “complexity is not what you believe it is”. Another is “complexity is”… “not what you believe it is”. Seems similar but the difference is subtle. Even the “please try listening” line changes in both versions. One is confrontational, the other is empathetic.
There is complexity<p>that can only be moved around,<p>not eliminated.
Reminded me of a colleague who wrote his email replies as haiku. It got old pretty quickly.
I'd say, on average, it's 50% what you say and 50% communication issues.<p>Most smart juniors have no problem with learning. Perceptual exposure and deliberate practice work almost mechanically. However, if someone can't tell you which examples you should be exposed to, you'll learn crap.
good thing LLMs solve this problem by assuming everything can be put into words and then convincing the world this is true.
This is surprisingly close to a personal theory I've been working on. I've been describing how to use AI to people as engaging the world model in their head, organization, or software.<p>I'd love to talk more live. I think I have some ideas you'd be interested in. Find me in my profile.
Correct. One just has to realize that the cost of communication (and the context/memory lost along the way to train that understanding) is often just far higher than anyone has patience for. To fully understand the expert, they must become the expert. (or at least a hell of a lot closer than they were)<p>This is also why average people with little time to commit find it hard to realize the importance and depth of AI. It's a full on university education exploring those.
Another part of the equation is practice.<p>Long before the discussion of the morality of AI went mainstream, I ran into a problem with making what appeared to be ethical choices in automation, and then went on a journey of trying to figure this whole ethics thing out (took courses in university, read some books...)<p>I made an unexpected discovery reading Jonathan Haidt's... either The Righteous Mind or The Happiness Hypothesis. He claimed that practicing ethics, as is common in religious societies, is an integral and important part of being a good person. Secular societies often disregard this aspect and imagine ethics to be something you learn exclusively by reading books or engaging in similar activity that has only the descriptive side, but no practice whatsoever.<p>I believe it is the same with expertise. Part of it is gained through practice, and that is an unskippable part. Practice will also usually require more time than the meta-discussion of the subject.<p>To oversimplify: a novice programmer who has listened to every story told by a senior, memorized and internalized them, but still can't touch-type will be worse at the everyday tasks of their occupation. It's not enough to know touch-typing exists; one must practice it and become good at it in order to benefit from it. There are, of course, more, but less obvious, skills that need practice, where meta-knowledge simply can't be used as a substitute. There are cues we learn to pick up by reading product documentation which tell us whether the product will work as advertised, whether the manufacturer will be honest or fair with us, whether the company making the product will go out of business soon, or whether they will try to bait-and-switch, etc.<p>When children learn to do addition, it's not enough to describe the method to them (start counting from the first summand, count up the number of times given by the second summand; the last count is the result); they actually must go through dozens of examples before they can reliably put the method to use. And this same property carries over to a lot of other activities, even though we like to think of ourselves as being able to perform a task as soon as we understand the mechanism.
Just wanted to say thanks for this.<p>Great thread.
“Cursive knowledge”, as an old boss told me. Was incredibly ironic when he leaned into my misunderstanding.
> AI can blow you out of the water at knowing more facts<p>Yea, but, I have a search engine that contains all the original uncompressed training data, so I'm back on top. How we collectively forgot this is amazing to me.<p>> and they need to have the right project that provides the opportunity to learn what needs to be learnt.<p>It takes _time_. I solve problems the way I do because I've had my fair share of 2am emergency calls, unexpected cost blowups, and rewrite failures in my career. The weariness is in my bones at this point.
That’s very well put.
I'm going to get downvoted to hell for this, but you described the exact reason why education is a waste of time.
I'll bite: is education not about starting with theoretical summary of the knowledge in the domain, and then applying it in practice and really <i>feeling</i> it work, be challenging, or not work?<p>The best educators I had had exactly that approach: you sometimes start with theory, but other times with challenges which make you feel the difficulty, and understand the value of the theory you are co-developing with the educator (they just have the benefit of knowing exactly where we'll end up, but when time allows, they do let you take a wrong turn too). Even if you start with theory, diving into a challenge where you are allowed not to apply the learnings should quickly tell you why the theoretical side makes sense.<p>As with everything in life, great educators are few but once you have them, you can apply the same approach yourself even if the educator is unable to steer you the right way.<p>If you never received this type of education, then what you received could arguably be called a waste of time.
yep, as I was exploring in <a href="https://danieltan.weblog.lol/2026/05/dunning-kruger-and-the-communication-tax" rel="nofollow">https://danieltan.weblog.lol/2026/05/dunning-kruger-and-the-...</a>, the expert pays a "communication tax" to dumb down concepts so that the listener can understand them. There is a gap between domain understanding and what is conveyed, and a similar gap exists in human-LLM interactions as well.
Great points. Words allow one to communicate an approximation of part of what one knows.<p>Agree about expertise being inseparable from the 'world model'. When someone tells us something, they're assuming that we know a certain amount of background knowledge but, in reality, we never have exactly the missing pieces that the speaker is assuming we have because our world model is different. It can lead to distortions and misunderstandings.<p>Even if someone repeats back to us variants of what we've told them at a later time, it doesn't mean that they've internalized the exact same knowledge. The interpretation can be different in subtle and surprising ways. You only figure out discrepancies once you have a thorough debate. But unfortunately, a lot of our society is built around avoiding confrontation, there is a lot of self-censorship, so actually people tend to maintain very different world models even though the surface-level ideas which they communicate appear to be similar.<p>Individuals in modern society have almost complete consensus over certain ideas which we communicate and highly divergent views concerning just about everything else which we don't talk about... And as our views diverge more, it narrows down the set of topics which can be discussed openly.
Well, here's an engineering problem: figure out how to mentor 10x the number of juniors.
This sounds like a whole lot of copium from devs who don't want to bother with the effort of just writing stuff down, ie good documentation practices...<p>Actually, maybe even worse (not directed at parent) - I think some "seniors" have a stick so far up their err keyboard, and think they are so wise beyond words that they refuse to share their "all knowing expertise" with anyone else as a form of gatekeeping or perhaps fear of being "found out" (that they are not actually keyboard "Gods").<p>Really though, just wright shit down even if the first draft isn't great. Write it down, check it into the codebase.
I believe you are responding to a concern you are facing in your career with bad documentation (I would guess bad code too), but projecting that onto an unrelated topic: I believe both could be independently true or not.
*write stuff. Siri dictation can’t be overhauled soon enough.
As a /senior/ developer I really dislike blanket statements. I've seen just as many failures caused by<p>> “Do we really need that?”
> “What happens if we don’t do this?”
> “Can we make do for now? Maybe come back to this later when it becomes more important?”<p>as by experimenters. Every system is different, every product is different. If I were building firmware for a CT scanner, my approach towards trying out new things would be different than for a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.<p>There are definitely ways for eager/very open seniors to drive systems into hard-to-get-out-of corners. But then there are people who claim PHP5 is all you need.
I came to say something similar, actually.<p>> Ah, baby, this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can.<p>There are times when this is good, and there are times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise when those times are.
I think this is more a matter of perspective, rather than original meaning.<p>I read the above as "avoid development that increases complexity <i>needlessly</i>" — and often, there is a desire to overcomplicate something that can be much simpler because the understanding is lacking.<p>"As much as they can" does not mean trying not to do any work, but trying to simplify the work where it achieves desired outcomes, and just about! This frequently means doing the improvement today.
> There are times when this is good, and there are times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise when those times are.<p>This is what I was thinking - I'd say the biggest step up a developer can make is to recognize that sometimes you need a bit of one approach, sometimes a bit of another.<p>Sometimes minimalism is the way, and you need to wonder whether the pain, workload, or lacking capabilities and features are problematic. Or sometimes adding the smallest possible thing is a good way, as long as we don't paint ourselves into a corner and we enable learning and accumulating information about what we actually need.<p>Sometimes buying a thing is a good way, if you can find a good vendor and a tool fitting your use case, and especially if the effort of doing it on your own is high. This commonly occurs in security, because keeping up to date with the ongoing vulnerability and threat landscape can be a full-time job on its own.<p>And sometimes adding something bigger is the way, if the effort of maintaining it is less than the effort and pain incurred by not having it. Or if we can ramp up the effort incrementally, while reaping benefits along the way. This can often be validated by doing a small thing first.<p>What AI will do, in my opinion, is push the bar further in this direction. Cozily hacking CRUD code together in a web server most likely won't be enough in a year or two for the average development job.
That doesn't sound as good in meetings. The person who can cut scope and get everyone to the "we did it" back patting phase makes everyone feel warm and cozy.<p>Now combing through analytics to determine whether or not what we did was actually good? Less warm and cozy.
This is where good leadership in the dev team is needed.<p>Is the improvement likely to reduce maintenance overhead (and thus cost)? Or improve performance, allowing for fewer services running (and thus reducing cost)? Or reduce bugs that force people out of a workflow (e.g. in an online shop, where fixing it increases sales)?<p>Or if it's just tech debt, then use Jira (etc) to your advantage and talk about the number of tickets you can close out this sprint due to this engineering initiative.<p>If the development team's and product team's goals are largely aligned, then the problem with engineering initiatives is just how you explain them to the product team.
For a large enough problem you need a combination of enough skill (to do the job), enough foresight (to know what will likely go wrong and how much error budget you need), and skin in the game (so you don't just cut to what sounds good, but to what is truly needed) - if you don't have all three of these, you usually are just talking out of your ass.
both of these things are equally important. every change will annoy somebody. every change breaks somebody's workflow.<p>preventing the unnecessary changes can help you get the political capital in your org to push through the changes that really need to happen.
I am an avoider and also a serial trend-hopper. You can do both!
Exactly.
A sort of survivorship bias. A VP ordered us to use Elasticsearch because it had worked well at his previous company. It turned out it worked well for us too. Listen to the VP to make technical decisions. And use Elasticsearch.
Reminds me of when the ELK stack was called just ELK (idek what it is now). We had a server we put it on, and after making the additional dashboards my manager wanted, we learned the limits of ES / ELK. It needs a ridiculous amount of memory, because it will shove everything in memory. Same thing when I learned that MongoDB indexing puts every item in memory as well, which is a yikes; why would you not want to index?<p>I bet there's money to be made building a drop-in replacement for either of those two that requires less memory; it would save companies a bundle, and make other companies a bundle as well.
There's no high-performance database that won't take all of your memory (at least up to the size of the data) if you let it.<p>That's because it's much, MUCH faster to do it that way, though if you can accept certain latency trade-offs for throughput, something like turbopuffer can do wonders for your costs.
Production grade multi tenant databases want to *solely* run on RAM.<p>> why would you not want to index?<p>Because if you don't need an index it wastes RAM, as you've learned. Maintaining indices also has a cost. Index only what you need.<p>In the sense of the blog post: A senior with decent DB experience would have told you. ;)
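For the MongoDB case, a minimal sketch with the Node.js driver (connection string, database, and field names are made up): create indexes only for the queries you actually run.<p><pre><code>  import { MongoClient } from "mongodb";

  async function setupIndexes(): Promise<void> {
    const client = new MongoClient("mongodb://localhost:27017");
    const users = client.db("shop").collection("users");

    // supports lookups by email; one B-tree the server keeps hot in RAM
    await users.createIndex({ email: 1 }, { unique: true });

    // one compound index for a known query pattern, instead of
    // separate indexes on every field you might someday filter by
    await users.createIndex({ status: 1, createdAt: -1 });

    await client.close();
  }
</code></pre>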
You mean NoSQL, which is slightly different and nuanced. This was in a shop that was mostly SQL, with the exception of me, the one junior developer using MongoDB and Elastic. Mind you, we got a lot of things done, and I learned a lot more about Mongo than I would like.<p>In all fairness, this was my first job as a developer, a few years ago. I deep-dived into MongoDB, but I was also one of the only devs using it at this place.<p>My previous experience with MongoDB had been in college and more limited.
Everything "wants to" run solely in RAM, but we don't have infinite RAM, so a "production grade" database should also be able to fetch data from disk unless this is an explicit tradeoff. MariaDB and PostgreSQL do <i>not</i> require all indices to be stored in RAM. Obviously they can be accessed more quickly if they are in RAM but they are designed under the assumption they will often be stored on disk. It sounds like MongoDB is not, and given the reputation of MongoDB, this is as likely to be incompetence as it is to be a willing tradeoff.
Every serious database that is designed to handle moderate to high traffic will expect you to have RAM to fit all data and indices. Relational DBs do a solid job if that's not the case, but that also sabotages the efficiency you could get from them. It will work for some time, and if it's enough for you, that's fine.<p>I am not experienced with MongoDB, so I don't know whether the previously reported problems were the users' fault or MongoDB's. But one thing is clear to me: complaining it uses too much RAM without knowing the reasons for it is a user problem. A common mistake is to set up a DB and expect it to just magically work. DBs are complicated beasts; you have to know how to deal with them.
Potentially a mix of both, though MongoDB was still very young when we were using it. Places like Google were championing it, or rather places that can afford to burn a ton of RAM.
You certainly don't need to hold all data in RAM to serve "moderate" traffic. A modern hard drive can seek about 80 times per second, an optimized RAID array even more, and an SSD tens of thousands of times, and if we're pessimistic, it takes 10 seeks to service a request. To me a light load means up to about a request every second, a moderate load means maybe 20 requests per second, and a heavy load means hundreds or thousands of requests per second. Pessimistically, each (read) request takes 5-10 random reads to service, and almost every system is read-mostly.<p>I think these are realistic expectations for most apps. Obviously the likes of Netflix and Uber get orders of magnitude more, but 99.9% of apps aren't a Netflix or an Uber, and you don't have to optimize for scaling until your app is on a trajectory to become one; putting your database on an SSD already lets you handle several thousand concurrent users with ease.
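Spelling out that back-of-envelope arithmetic (the exact SSD figure is a conservative assumption of mine):<p><pre><code>  const seeksPerRequest = 10;      // pessimistic, per the above

  const hddSeeksPerSec = 80;       // single spinning disk
  const ssdSeeksPerSec = 50_000;   // assumed; "tens of thousands"

  console.log(hddSeeksPerSec / seeksPerRequest);  // 8 req/s: moderate load
  console.log(ssdSeeksPerSec / seeksPerRequest);  // 5000 req/s: heavy load
</code></pre>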
RDBMSs are typically pretty good at keeping frequently requested data in RAM. This disguises the latency of disk access, and performance will heavily depend on access patterns. If you serve 1TB of data from a DB with 8GB of RAM and that is sufficient for your use cases, I won't stop you. If you expect low, predictable latency (<1ms) even on a 98/2 r/w system, then it's not worth the headache.<p>Of course everything depends on use case and constraints; I highlight the extremes here. The initial confusion was why DBs require so much RAM. Traditional DBs are optimized around RAM; that's where they perform best. You can abuse that, but it's not the best they can be in terms of latency, predictability and stability.
For anything Lucene-based (Elasticsearch, Solr) this was a problem where some of the indexed data had to be transformed for another type of query to be efficient, and the transformed data was put into the Java heap and never released. I think the indexed data for searching was read straight from disk and was fine, but analysis queries needed the transformed version?<p>At some point they added the docValues configuration option per field to do the transformation during indexing and store it on disk instead, so none of it had to be kept in the heap. What you're supposed to do instead is rely on the OS disk cache, which handles eviction automatically, so you can run with significantly less memory and get performance improvements by adding memory, without having to change any configuration further.
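What setting that per field looks like, as a hedged sketch with the official Elasticsearch JS client (index and field names are made up):<p><pre><code>  import { Client } from "@elastic/elasticsearch";

  const client = new Client({ node: "http://localhost:9200" });

  // doc_values stores the column-oriented form on disk at index
  // time, so sorting/aggregations lean on the OS page cache
  // instead of the Java heap
  await client.indices.create({
    index: "logs",
    mappings: {
      properties: {
        status: { type: "keyword", doc_values: true },
      },
    },
  });
</code></pre>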
Pick the right use case. It has a super awkward, horrible UI for things like log analysis. Use Scalyr instead.
Congrats on being the third top-level comment at this hour, and the first one who seems to have read more than just the headline.
Agreed, context matters. As a senior developer you need to understand complexity, risk, upsides and downsides. Understand the business side.
Whether you are a startup or a big company that is already a cash cow makes a difference when changing a core feature of the product, etc... Context, context, context.
I think this is contrarian; I found the author's point clear in context. Obviously sometimes actions are warranted, but the balance today is skewed toward making everything more complex than it needs to be.<p>This does not mean we don't develop new products and services; it just means that when we do, we find the path of least overall entropy. It also applies to operations and tech debt reduction.<p><i>Premature optimization is the root of all evil.</i>
I think you may be missing the message the OP is trying to communicate.<p>The qualities were highlighted because they can all lead to better stability.
I mean blanket statements are bad and you don't want to be the last company running on Java 6, but all the same, it's equally bad to be the guys using the latest javascript build pipeline that came out three months ago and is undocumented.
Was thinking the same thing, but then i re-read the section and noticed this:<p>> Yes, yes, of course this is simplistic.<p>It's an example, put to the extreme, to clearly communicate the ideas. As all things, the golden mean applies, as I understand the article argues for:<p>> the design of the 'Scale' version is influenced by what worked and what doesn’t work in the 'Speed' version of the system.
It's a tricky balance, and there's a nonlinearity in that it really depends on what technical risk you've already taken on. Like.. clever ideas are like children. A handful are fine, lovely even! But if you have more than you can adequately keep track of or properly nurture that's no good. So best to focus attention on the small number of clever ideas that actually matter for your business--the ones that differentiate you from all the other companies doing broadly the same thing as you.
I mean, sure, reduced complexity is great, but... what about performance?
<i>> They want to avoid development as much as they can.</i><p>One of my favorite .sigs was:<p><pre><code> I hate code, and want as little of it as possible in my software.
</code></pre>
I don't remember where I saw it, but it was a while ago. It's possible the author has an HN account.<p>One of the things that happens to "avoiders," is that they get attacked for being "negative." It can get career-ending, when the management chain is the "Move fast and break things" type.<p>I just stopped offering suggestions, after encountering that crap a few times, and learned to just quietly make preparations for when the wheels fall off.<p>I have spent my entire adult life, <i>shipping</i>, and shipping means <i>lots</i> of "not-shiny," boring stuff. But it gets onto shelves, and into end-users' hands. I was originally trained in hardware development, where mistakes can't be fixed with an OTA update. It taught me to "play the tape through," and make sure that I do a good job on <i>every</i> part of the project; which includes a lot of anticipating problems, and designing mitigations and prevention.
Most proofs of concept I've seen get traction end up in production.<p>A rewrite?<p>I recall a few times everyone promised: if this gets promoted, then we will rewrite it from zero. It never happened.<p>The article touches on responsibility and accountability. There is none for the risk taker. By definition. You have a crazy idea, you rush it out, you hope clients bite. You profit. It's not even your problem how to make it work, scale, or not cost more to run than you sell it for.<p>The loop on the right: there are companies, two of them very popular these days, that took it to an extreme. You ship something fast, and since it only scales linearly, you go raise money. Successful companies, countless users, some of whom even pay. Who's to blame? The senior developer, or simply someone reasonable who asks: how is that sustainable, what's the way out of this? Those people are fired, so whoever's left is a believer.
> recall a few times everyone promised, if this gets promoted then we will rewrite it from zero. Never happened.<p>Old quote: "There is nothing so permanent as a temporary hack."
This is why you need sufficiently senior engineering leadership (both IC leadership and management). If you have engineers who meekly do whatever a non-technical stakeholder asks then you have a vacuum of responsibility, and sooner or later things will blow up catastrophically and whoever was least adept at CYA will get blamed.<p>On the other hand, almost any business problem can be solved in a reasonable way that doesn't send your system through any terrible one-way doors if you zoom out enough and ask enough whys. Of course not every place allows engineering to do that, but the ones that don't aren't able to retain senior folks because they will just go somewhere where their judgment is valued. Sometimes technical debt is the right thing for the business, but sufficiently senior engineers can set things up so there is always a way out. But what you can't do is uphold the purity of the system above the business problem. The systems are paid for by the business, so if you lose sight of that then you've lost the plot and the basis for your influence.
This problem definitely predates AI coding agents, though it may be exacerbated by them. The article essentially concludes with the ancient advice of "plan to throw one away". Well sure, I also read Mythical Man Month, but how do I convince the decision-makers?
I guess it's company culture? I had a job where we initially had quick solutions that went messy. We set a hard policy that every "quick and dirty" feature would get a follow-up story pulled into the following 1-2 sprints. Often it turned out that the feature didn't live up to expectations and we just disabled or deleted it; the other times we reviewed it and refactored it properly.<p>We were a highly autonomous team though, and hardly had cadence complaints. But mostly because all the other departments were lagging. Except marketing; marketing always has "ideas".
<i>I recall a few times everyone promised, if this gets promoted then we will rewrite it from zero. Never happened.</i><p>Why would you do that though? If you have a working 'prototype' that's handling the demand, has the required features, and doesn't really need to be rebuilt (except to appease the sensibilities of the developers), why would you spend time and effort on that? That makes no sense. The fact it's a prototype or a 'proof of concept' is essentially irrelevant if you can't enumerate what the actual problem with it is.<p>I work with a bunch of teams that complain that they're mired in tech debt <i>all the time</i>, and complain that it's a huge risk and it slows them down. Except I can see our incidents log and there aren't many incidents and <i>none</i> that can be attributed to running risky code in prod, I have our risk register that has no 'this code is old and rubbish and has past-EOL dependencies on it', and no team has ever managed to articulate <i>how</i> or even <i>how much</i> the tech debt slows them down. They shouldn't really claim to be surprised that no one wants them to spend time 'fixing' a problem that apparently has no impact.<p>I've also seen the opposite case where a team spent months refactoring an app that they wrote before it launches. They wrote it, then decided they could make it 'better', and spent loads of time reworking most of it before it launched. All the value was delayed because they decided they didn't like <i>their own work.</i> And obviously the leadership team were pissed off about that, and now there's very little trust left.<p>There should be a good conversation about delivery of work between teams and stakeholders or no one will be happy, but if that isn't happening the stakeholders will always win.
Because the goal isn't "keep this exact version of the app alive and running". The prototype is never the whole application. If your only metric is incidents, then yeah, don't ever touch the code again.<p>You can get a few feet closer to the moon by building a treehouse, but you still can't turn it into a spaceship.
<i>The prototype is never the whole application.</i><p>In a world where people (stakeholders, Product, and dev teams alike) want the prototype to be the full set of MVP features, this is not true.
Regarding the viability of rewrites of successful PoCs: Does the current environment change the math? How difficult would it be to overcome the inertia/hesitation/perception of slow, painful projects that may no longer be so?
That's why you gotta write them in a language nobody else on the team has heard of.
A mention of a "rewrite" triggered me. Whoever does rewrites is effectively out of ideas on what to do next. There is an opportunity cost, the team/company chooses what is more important, and the rewrite is never at the top. So even promising or expecting such a thing is silly.<p>IMO it is a bit arrogant to assume it is more important to engineer a better version of a thing than to make money quicker and cut corners. In essence, it is better to have the problem of how to scale a new product because it got traction than the problem of how to sell more copies of an already scalable thing.
I do "rewrites" for my day job all day every day; with as of late the goal being rewriting critical services to get past scaling plateaus.<p>Rewrites require an existential-level threat to pursue and should never be taken lightly. They must solve a real verifiable need, backed by real world data. Rewrites for rewrites sake or some lofty or nebulous goal of "better" or "more maintainable" code are doomed to fail and a waste resources.<p>I've seen the worst of it, from your average monoliths with no separation of concerns to 1000s of lines of self-modifying assembly in dead architectures with no code comments containing critical business logic, etc.<p>The main rule is to not to bite off more than you can chew, which if I'm being honest you really only learn from fucking up or watching others fuck it up.
They said a proof of concept goes to prod. That's not "rewrite the whole service that's been built for months". That's "I vomited a neat thing over the weekend" -> now it's in prod.<p>Hackathon and overnight on-call fixes ABSOLUTELY should be rewritten or production-hardened, but they very often are not.
After my first proof of concept went into production by surprise, I stopped building proof of concepts and started building MVPs.<p>That's not to say that my first pass that I show people is ready to go into production, but I build the PoC from the beginning with the idea that it _is_ going into production and make sure I have a plan to get to production with it while I am working on it.
What I found is that my willingness to communicate and share my expertise is usually not in demand with more junior developers. In general, I find developers uninterested in finding a mentor. They don't look at your linked in profile, they don't look at you as a possible source of knowledge and expertise.<p>So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
This is my frustration at my current job. There's so much silliness and no one cares about avoiding it.<p>A less experienced dev suggested using "AI magic" to replace a URL validator. I protested, suggesting a cached fuzzy match solution (prepopulated by AI)... and no one cared. Now the AI model has been suddenly turned down, and our system is broken. We're going to have to re-validate the whole system.<p>A younger developer who got promoted over me tried to write a doc on possible ways to fix it. He said "hey Dan, can you help me with this?" He got promoted over me because the way to get ahead is to write docs and have meetings, not do things sensibly. Now he's trying to use my work to demonstrate his leadership.<p>No one cares. The more I offer better solutions, the more it's a threat to less experienced developers. Things mostly work so my manager doesn't care. There's probably better ways for me to have handled things, but it's so exhausting fighting the nonsense and I just want to write good code.
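To make the suggestion concrete, the cached fuzzy-match idea looks roughly like this - a minimal TypeScript sketch; the URLs, tolerance, and names are all made up, and the known-good cache is whatever gets prepopulated offline:<p><pre><code> // Known-good URLs, prepopulated offline (e.g. by a one-time AI pass).
const knownGoodUrls = new Set<string>([
  "https://example.com/docs",
  "https://example.com/pricing",
]);

// Plain dynamic-programming Levenshtein distance.
function editDistance(a: string, b: string): number {
  const dp: number[][] = [];
  for (let i = 0; i <= a.length; i++) {
    dp.push([]);
    for (let j = 0; j <= b.length; j++) {
      if (i === 0) dp[i][j] = j;
      else if (j === 0) dp[i][j] = i;
      else dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Accept a URL if it matches a cached entry exactly or within a small edit distance.
function isValidUrl(url: string, tolerance = 2): boolean {
  if (knownGoodUrls.has(url)) return true;
  for (const known of knownGoodUrls) {
    if (editDistance(url, known) <= tolerance) return true;
  }
  return false;
}
</code></pre>
The point being: the model only runs offline to build the cache, so the runtime path keeps working the day the model gets turned down.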
I feel you. Similar experience on my side. I think it might've been like this before, but AI coding tools made it worse. Everybody thinks they can do it better - when there is a problem, the coding agent can just fix it. Why bother building relationships with senior devs or with anybody?<p>Looking deeper into it: these people don't understand the underlying foundations anymore. Just keep building fast, without building proper mental models (that would take time).
You need to advocate for yourself, because nobody else will, unless your manager is really good at his job.<p>Our work is largely very difficult for outsiders to understand; we need to write docs and have meetings to show what we have done. It's part of the job, and yes, if you don't do that, it doesn't matter how fantastic the software is that you wrote (sadly).
you've healed me - resonates
As a junior I will share my perspective from the other side.<p>Companies have outlandish hiring practices. They want juniors who already know everything. That's why admitting that you don't know something is seen as showing weakness to the company in the eyes of a junior. Also, not knowing things will actively keep you from getting promoted.<p>I'm sure it's not like that everywhere but it's juniors playing the corpo game.
Wish I had you at my first engineering job at IBM. A couple senior devs there (not all) would get pissed when juniors tried asking them questions. Not only did it take a bit of courage to ask someone who had been there 20 years about something, but it was a 50/50 chance they were going to be an asshole to ya lol. Was a good learning experience for me - I go out of my way to mentor now.
> So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.<p>seriously. it kills me to have so much knowledge and expertise that few people appear to care about (some downright hate me for wanting to pass it on to others), as it appears institutional knowledge does not have any value these days
I took a job in another state in large part because one of the interviewers was a highly skilled sysadmin that I wanted to learn from (I had basically backed myself into system administration as a career at my first job, a startup, so I didn't have a lot of people to lean on to learn my trade).<p>Of course, he turned in his notice shortly after I arrived, because he had found his successor. So, that didn't work out so well for me.
All the senior developers I have worked with are absolutely allergic to coming into the office, working closely with junior developers, and in general talking to people.<p>Whereas juniors are eager to chat, have lunch with you, and share what they’re working on, the seniors are guarded and solitary.<p>Maybe that’s just my workplace though!<p>And yes, the office is important.
In the senior realm here - would love to chat with folks over lunch, brainstorm, assist, mentor, guide, etc. Can't do that <i>AND</i> be expected to deliver code at a 'full time' expected pace. What I would be delivering is... some code, some guidance, some assistance, etc. I've seen inside enough places to know that many senior folks end up being guarded and solitary because the deadlines aren't ever set to accommodate that sort of work. You're a 'Senior Developer(tm)' and the measuring stick is... lines of code.<p>Orgs get what they measure for. If your team values that sort of interactivity and support, it will ... observe it, measure it, and hire for that sort of person. I've seen groups evolve towards that, and they've been great, but it doesn't seem to be a default - most groups/orgs have to work towards it and keep working at it.
The last two jobs I've had ended up with teams spread across multiple offices and time zones. I don't hate the idea of coming in to the office, but every time I do I end up only talking with people from other cities on calls anyway.<p>That said, I completely agree. I learned most of what I know from being in the same room with senior developers and asking questions. Something that just isn't happening these days.
I'm not even confident I can mentor a junior well. Part of that is probably that mentoring is a separate skill (like management is), and so you need to get good at that, plus research the "many worlds" of their future paths rather than share your war stories. If that makes sense.
Are the juniors you ran into psychologically obsessed with being self-reliant? Or too proud of their own ideas?<p>I also believe that some of seniors' experience is flesh-level resilience. I'm no smarter than when I joined the industry; I just got used to being in the trenches, to handling my own psychology, to how all the easy-looking things are not easy and how the horrible-looking ones aren't horrible either. I could explain this in detail to any junior, but until they're on the minefield it won't mean much.
I'm sorry this has been your experience. There are folks out there open to learning from us seniors.<p>I've been a mentor off and on for the last few decades, and I've been really lucky to have some strong mentees. Some I've followed for the better part of a decade, and they are crushing it out there. All I can really say is that they're out there; sorry I don't have anything more helpful to say about how to find them, etc. I'll mull on that for a bit.
Exactly my experience. You describe it more diplomatically than I do hah.<p>To me, young people just don't seem to know, or want to know, that information and knowledge can be gained from a person. It's the arrogance of youth x100<p>They have a supercomputer in their pocket/on their desk, and an AI that knows 'everything'. I can't imagine what it's like being a teacher right now.<p>How's your AI going to explain the office politics? The CTO's opinion on things? Talk about recent outages and learnings (details of which are not often on blogs)?<p>They think all they need is knowledge and facts and none of history, politics, communication etc<p>I think a lot of it is that an AI or Google search won't challenge them, push them, disagree with them - and that's comforting to them, and more desirable than the learning that could happen
I like to play an online strategy game, openfront.io. The way to win is to take out someone who is gaining power before they get too powerful.<p>It's just basic game theory, and you see it everywhere. However, it's so annoying in the workplace when your two options seem to come down to try to dominate or be dominated. Especially if you care about quality code and don't care for meetings.<p>As far as I'm concerned, I think I have to make peace with the fact that if I don't play the game, I am going to be managed by people who don't know what they're doing. But neither option seems particularly good. Should I try to bury my ego and influence from below? Should I work harder and try to climb the corporate ladder? I'm still not sure.
I don’t think it’s the arrogance of youth. It’s just that this generation and honestly a big cohort of millennials are not used to gleaning information from people. A stunning number of people have been raised/educated solely by the internet. That’s the source for knowledge, not other people.
> A stunning number of people have been raised/educated solely by the internet. That’s the source for knowledge, not other people.<p>On the internet you can learn from and sometimes interact with the best of the best, so the barrier to entry for what constitutes an "expert" is raised much higher.
To be quite honest I learned exactly this way myself, however nowhere near recently by any stretch of imagination; I learned through Usenet, bulletin board systems, IRC, and a heavy dose of (bordering on obsession) reading any and all technical manuals I could get my hands on from the local used book store.<p>I still vividly remember reading a z80 instruction set manual on a rainy day during summer vacation by a lake as a kid (maybe 14?)--writing my own assembly by hand in the margins for fun. TBH I probably still have that exact manual in storage somewhere. Had a green stripe down the front edge/binding iirc.<p>Back then I easily met folks like myself out there on the net, including many kids younger and smarter than me. It was awesome.<p>I do hope that some form of that 'net lives on in spirit somehow, given that the Internet I knew has largely fallen to corporate interests.<p>Now that I have my own kids, it's been painful to watch them have such an utterly different experience than I did.<p>Their Internet is based entirely on consumption and dark patterns designed to capture their attention, while providing nothing (to them) in return besides a dopamine addiction and body dysmorphia.
For all I know maybe you are an expert, but as a general rule of thumb - people are sick of "experts" eager to share their "expertise".<p>It's simply the case that the supply of "experts" wanting to share "expertise" vastly eclipses the demand by several orders of magnitude.<p>I think there's a business somewhere, where you get paid to listen to "experts" and they get to feel better about themselves. It's a win-win.<p>So if people don't perceive you as an "expert" and don't go to you for answers, you simply do not register as one, or they have a rather high bar which requires observable undeniable artifacts (and I don't mean credentials, I mean software) and competition is rather fierce - there's simply overproduction of people who think they are "experts" and thus you have to give unmistakable symptoms of being one to register.
This is the key sticking point.<p>"It takes two to tango", i.e. junior developers must first put in some effort and then proactively seek out seniors with expertise.<p>It may be a cliche, but a truism nevertheless; viz. the juniors are simply not interested in putting in the necessary time/effort to gain knowledge systematically. They want everything to be quick, easy and handed to them on a platter.<p>I think the main reason for this is: there is just too much out there to learn, and everything is being propagandized as the most important and most indispensable. This swamps the juniors, and hence they feel lost and try to keep up with everything, which is a fool's errand.<p>Juniors need to keep the following in mind:<p>1) Change their learning mindset as follows: browse a lot, read a subset, and study an even smaller subset.<p>2) Always focus on the essentials and not on the frills. This is determined by your specific goals/needs.<p>3) Be okay with not knowing everything. Do not base your self-worth on others' evaluation of you.<p>4) Do not compete with others. Do the best you can and always improve on your yesterday's self. As the adage goes, "drops of water falling, if they fall continuously, can bore through iron and stone".<p>5) Be confident in your own intelligence. As Sherlock Holmes said, "what one man can invent another can discover". What might seem impenetrable in the beginning will over time become clearer and easier when studied regularly.<p>6) Everything is dependent on Self-Effort modulated by Timing, Context, Means Employed and finally Random Chance (i.e. lady luck). Manage the last by factoring its payoffs into your self-effort itself (i.e. hedging). Focusing on the above five parameters before starting on anything will guarantee success.<p>7) You can always short-circuit your studies and gain knowledge quickly by asking seniors with expertise to teach you. Your attitude and way of approach are very important here, i.e. you must be sincere and committed.
you have HN, there is always someone here, my friend :).
A really competent senior figures out what the prevailing culture of the company is now, and what it will need to be in 5 years, and adapts as they go. Startups with 5 people maybe don't need extra complexity costing runway. A 500 person business may need that complexity because now there are second-order effects that need to be mitigated for every business decision. It's not a black-and-white "always avoid complexity" it's "add complexity when it makes sense" and even that question has a lot of nuance because sometimes the business just needs to survive for another couple of months.
Right, prioritization and transparency allow you to change the variables that people should be using to solve a problem (and if they don't, they are not good at the job) - if you have two hours before a storm comes, you will be asking "will it take on enough water that I can't bail it out?" instead of thinking about your architecture.<p>The problem I see is management playing games by not talking about how much money is available, what the real timelines are, etc. - because they fear the people contributing will leave before the critical moment - and so people keep making stupid decisions in that context, and then you all get to find a new job.
This misses the basic problem of incentives. What "the company" wants doesn't matter, it's what the people making particular decisions want.<p>There exist people whose jobs depend entirely on rolling out new features, or apps of some sort, and having them show up in some form of company metric. If the senior developer says it's a bad idea, those people won't listen, or won't care. Their job is on the line.
It's funny, I've been literally trying to convey these exact sentiments to my team over the last few days down to the:<p><pre><code> > Need to build a whole new feature to test it? Have you tried putting a button in the existing UI and seeing if people click it?
</code></pre>
Pretty much word for word.<p>It feels like engineers are collectively feeling the pain now that product has decided that engagement of mental faculties is no longer necessary on their behalf; just build it and figure out the user persona and utility later...if ever. What used to be a process of taking the time to understand the domain, the user, and how the product fits into some process has been tossed out the window; just ship whatever we think some imaginary user wants and experiment until we succeed.<p>It creates the exact problem that OP talks about: every random feature that gets vibe-coded becomes a source of instability and risk; something that can then only be maintained via more vibe coding because no one has a working mental model of the thing.
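To be concrete about the "button" experiment: it can literally be this small. A hypothetical fake-door sketch in TypeScript - the feature name, endpoint, and copy are all placeholders:<p><pre><code> // The button exists, the feature doesn't. Clicks get counted;
// the feature only gets built if the count justifies it.
const button = document.createElement("button");
button.textContent = "Export to PDF"; // made-up candidate feature
button.addEventListener("click", () => {
  // Reuse whatever analytics endpoint already exists ("/metrics" is a placeholder).
  navigator.sendBeacon("/metrics", JSON.stringify({ event: "export_pdf_clicked" }));
  button.textContent = "Coming soon!";
  button.disabled = true;
});
document.querySelector("#toolbar")?.appendChild(button);
</code></pre>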
Why junior bloggers fail to make me read their articles
Complexity, if it can be reduced to a single measurable dimension, is only one of several factors in a solution space.<p>There are other properties, such as maintainability, scalability, reliability, resilience, anti-fragility, extensibility, versatility, durability, composability. Not all apply.<p>Being able to talk about tradeoffs in terms of solution spaces, not just along a single dimension, is one of what I consider the differentiators between a senior and a staff+ developer.
“Complexity” understood as the immediate first impression a junior gets looking at some arbitrary facet is always too much and always bad.<p>“Complexity” understood as what’s gonna make development on this system fly easy and fast for the next 10 man-years de facto means side steps where naive approaches would charge straight ahead.<p>Tortoise and the Hare… the urge to hurry up and burn hard the first two weeks (low hanging fruit, visible wins, MVP!), resulting in ever decreasing momentum due to immature design and in-dev maintenance needs is befuddling to me. So much “faster” for weeks, and it just meant the schedule slipped 6 months.
Quality and speed are not diametrically opposed. A great engineer does well on both axes by building the minimal thing needed now in a way that is easy to extend in the future.<p>I have also seen projects go badly because the eng was trying to be perfect upfront. Whereas quickly getting to an MVP and then iterating tends to go better.
> Tortoise and the Hare… the urge to hurry up and burn hard the first two weeks (low hanging fruit, visible wins, MVP!), resulting in ever decreasing momentum due to immature design and in-dev maintenance needs is befuddling to me.<p>Well said. Kent Beck’s Tidy First explores the slow process that can be summarized by this excerpt from his substack [0]:<p>“Valuable” lives on 2 axes:<p><pre><code> Features—what the code does now.
Futures—what we can get the code to do once we learn the lessons of this set of features.
</code></pre>
While there might be a component of time to get features out, it’s rarely urgent enough to forget about being flexible and having a somewhat constant velocity.<p>[0]: <a href="https://tidyfirst.substack.com/p/genie-tarpit" rel="nofollow">https://tidyfirst.substack.com/p/genie-tarpit</a>
TRADEOFFS! I think this is IT. Non-programmers imagine there aren't tradeoffs. As a programmer, one should eventually realise that every possible aspect of design is a tradeoff.
Many of these factors are directly influenced by complexity.
They all influence each other to one extent or another.<p>And, the Cynefine Framework defines “complexity” a bit differently than the intuitive way it’s often used.<p>The simple domain is a single dimension. The complicated domain is a system of factors. I think when most people say “complex”, they are really talking about what Cynefine labels as “complicated”.<p>The Cynefine complex domain is not so easily solved or reduced. It has emergent behaviors. The act of measuring tends to perturb the system. No single solution will ever solve something in the Cynefine complex domain, because the complex system will shift behavior, making solutions that worked before start working against it.<p>Examples are ecosystems and economies. Software systems tend to be complicated, not complex, until you start getting into distributed systems.<p>One of the key insights of Cynefine is understanding that each of the domains has its own way of solving things and that oftentimes, people use solutions and methods from one domain to solve problems characterized by a different domain.<p>You don’t solve problems in the complicated domain with methods from the simple domain. And you don’t solve problems in the complex domain with methods that work for complicated domains.
Totally agree on this.<p>The use of “complexity” in systems theory, in comparison to “complicated”, is often misunderstood.<p>I also agree that it’s a really good framework for evaluating problems and then making decisions on potential solutions, because each domain has its own set of approaches.<p>Small nitpick: it’s “Cynefin”, not “Cynefine”. The word is Welsh (Cymraeg), roughly pronounced ke-ne-fin.<p><a href="https://en.wikipedia.org/wiki/Cynefin_framework" rel="nofollow">https://en.wikipedia.org/wiki/Cynefin_framework</a>
Interesting and salient comment. But<p>> <i>"Software systems tend to be complicated, not complex, until you start getting into distributed systems."</i><p>these days so much software <i>is</i> "distributed systems".
I don’t know at what threshold a complicated system becomes complex.<p>For example, at a certain level of scale, Kubernetes starts having emergent behavior.<p>On the other hand, it doesn’t take much to produce a complex system. The Boids simulation is a complex adaptive system in the form of a flock, yet each member of the flock concurrently follows only three basic rules.
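For anyone who hasn't seen it, the three rules fit in a few lines. A rough TypeScript sketch - the weights and the neighbor selection are illustrative, not canonical:<p><pre><code> type Vec = { x: number; y: number };
type Boid = { pos: Vec; vel: Vec };

const add = (a: Vec, b: Vec): Vec => ({ x: a.x + b.x, y: a.y + b.y });
const sub = (a: Vec, b: Vec): Vec => ({ x: a.x - b.x, y: a.y - b.y });
const scale = (v: Vec, k: number): Vec => ({ x: v.x * k, y: v.y * k });

// New velocity for one boid, given its visible neighbors.
function steer(boid: Boid, neighbors: Boid[]): Vec {
  if (neighbors.length === 0) return boid.vel;
  const n = neighbors.length;
  let separation: Vec = { x: 0, y: 0 };
  let avgVel: Vec = { x: 0, y: 0 };
  let center: Vec = { x: 0, y: 0 };
  for (const other of neighbors) {
    separation = add(separation, sub(boid.pos, other.pos)); // rule 1: don't crowd
    avgVel = add(avgVel, other.vel);                        // rule 2: match heading
    center = add(center, other.pos);                        // rule 3: stay with group
  }
  const alignment = sub(scale(avgVel, 1 / n), boid.vel);
  const cohesion = sub(scale(center, 1 / n), boid.pos);
  return add(boid.vel,
    add(scale(separation, 0.05), add(scale(alignment, 0.05), scale(cohesion, 0.01))));
}
</code></pre>
Nothing in there says "form a flock"; the flock is the emergent part.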
Isn't Cynefine a framework designed to sell consultancy services?<p>I think complexity is a byword for 'unintentionally complicated' here.
You missed one of the most important ones: usability
> The avoider, the reducer, the recycler.<p>As this kind of person, it can be alienating in some teams / companies.<p>What I've found works best is to convey how the added complexity will affect non-engineers. You have to understand the incentives and trade-offs though, and sometimes it's better to take the loss.<p>If you have the fortune of sticking around with the same leaders for a while, a few rounds of being vocal, but compromising, will work in your favor. When that complexity comes back around to bite them in the way you described, you will earn some trust.<p>In my experience, the proposed solution will rarely be the less complex one. Quick MVPs have the tendency to stick around. As soon as a customer starts using some product or feature, the cost of pivoting goes up. If you wish to experiment, do it on a segment.
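Concretely, "do it on a segment" can be as boring as a deterministic bucket. A sketch (Node + TypeScript; the IDs and percentage are hypothetical):<p><pre><code> import { createHash } from "crypto";

// The same customer always lands in the same bucket, so the experiment
// population stays stable across sessions and deploys.
function inExperiment(customerId: string, rolloutPercent: number): boolean {
  const digest = createHash("sha256").update(customerId).digest();
  const bucket = digest.readUInt32BE(0) % 100; // stable value in [0, 100)
  return bucket < rolloutPercent;
}

// Example: which of these (made-up) customers see the experimental flow at 5%?
for (const id of ["cust-001", "cust-002", "cust-003"]) {
  console.log(id, inExperiment(id, 5) ? "experimental" : "stable");
}
</code></pre>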
The best strategy is to frame your argument from the perspective of the customer:<p>> This will allow for us to deploy the feature in only X days supporting Y use case with Client W who has been complaining about this shit for Q months now.<p>Arguments like:<p>> We should do Z because it would provide future extensibility.<p>> Z could eventually enable some novel platform capabilities.<p>> Z is easier to unit test.<p>Are much less likely to succeed in the business contexts that I have experienced so far.
We may be looking at this differently based on our own experience fwiw. I also should have said added complexity or lack of (from poor planning).<p>That can work too, e.g. when demonstrating the pain a customer will experience when something complex is poorly designed (like some b2b workflow), but it's less visceral than telling your internal stakeholders all the extra work they'll have to do if it's rushed. Even the best of your peers are a bit selfish. The business side has a lot of incentives around quick turnarounds so it's easy to overlook the downside.<p>Imagine such a scenario. You're in healthcare and working on a feature that will add a new data model for some kind of clinical information.<p>You could say:<p>> This will allow for us to deploy the feature in only X days supporting Y use case with Client W who has been complaining about this shit for Q months now.<p>Yeah that very well may prevent W from churning, though hopefully you think about how it will affect other clients too.<p>Or, you could say:<p>"If we get this data model wrong, and the value set is ambiguous, you (product/sales/cs) will have to reach out to every single customer and clarify what they meant by x/y/z if we wish to migrate it with any degree of accuracy in the future."<p>That's drawn from experience but I'm sure there are a lot of parallels to that in other industries for any kind of data. Migrating data is a pain in the ass for everyone, but often it can be the people pushing for a quick solution that suffer the most when that goes wrong.<p>This kind of stuff is why commission structures should consider churn / residuals. Bad incentives make for hastily made decisions.
> you will earn some trust<p>Building trust is yet another quality of a good senior. By that I don't mean being buddies with the CEO, but earning trust from everyone by making good decisions and arguments and delivering as promised. Even giving a jr a warning and letting him fall flat is a good trust-building exercise.
> You want to bake me a whole birthday cake? Just put a candle on my sandwich.<p>I think these people also need to learn that, in the eyes of the customer, a sandwich with a candle is in no way comparable to a birthday cake.
My experience with avoiders and reducers and recyclers is that they want to avoid _my_ idea and do _their_ idea instead.
I'm trying to avoid a snarky comment like "oh of course it's a senior dev's fault again", so I'll tell a story.<p>When I started around 20 years ago, my junior dev experience was pretty harsh - I was taught, not always in a correct or respectful manner, to do this and not to do that. Overall though, it was absolutely useful and formative. Senior engineers are rarely abusers; they communicate real issues, better or worse, and it was on me to figure out why and how to work the right way. Also, we were raised with a pretty receptive attitude to the "old" technology - from Tcl and Smalltalk to Ada, Perl, etc. It was admired classics rather than just old shit.<p>Surprisingly, this didn't translate too well to my experience when I found myself in a mentoring position. Starting from maybe 2015, the situation changed. The newer generation of devs felt much more entitled to social games, higher salaries and opinions, rather than authentic engineering interest - hence my experience.<p>No amount of structured communication would change that; even the cold pressure of production failures and very specific poor-management feedback normally doesn't work. They're also more lenient about prod screw-ups, and often use the "everyone can make a mistake" excuse to excuse even more mistakes.
The thing is, most of them don't want to hear it, for any reason.<p>Like many of my peers, I learned humility and accepted that as is, only using my advantage in expertise when it comes directly to my territory of responsibility, and to avoid the hassle imposed by my eager younger teammates - e.g. I usually parse prod logs and settings with the command line while the younger guys try to push through loki/grafana query limitations.<p>I'm fine and safe, and my job is no less secure, I guess, because someone has to fix bugs etc. The companies, less so, but as long as they don't care, why would I.<p>It will be interesting to see this generation wiped out by the next one. I guess they won't be in very good shape, because the foundation they built upon (namely quickly changing libraries and language supersets like React/TypeScript/some JVM flavour/and I hope Kafka) will be replaced by the next tech fashion.
Even with AI, there is a clear difference between juniors and seniors.<p>None of the things I can think of have anything to do with avoiding problems.<p>To some degree, having 5+ agents working on different projects is similar to leading a team of 5+ people. The skills translate well.<p>The senior is also able to understand what the agents do, review and challenge it. Juniors often can't.<p>And finally, the senior has a deeper understanding of what the business and problem domain are, and can therefore guide the AI more effectively towards building the right thing.
I really dislike the "ah this is my favorite senior" language. The author would have done well to simply leave this "rating" of different kinds of people out, and it would not harm the article. In fact, it would improve it.<p>People don't want to be judged in the introduction of an article, based on how they like to approach their literal dayjob. It's a weird jab.
> Forget maintaining stability, AI is a downright destabilizer. It worsens understandability, fixability, debuggability, teachability, guaranteability, all the bloody bilities.<p>This is just an assumption and the whole article falls flat if this turns out to be wrong.
In my limited (like everyone else's) experience, working with agentic AI needs good documentation and good specification (spec-driven, you know, it's all the hype nowadays). Those alone lead to much improvement. Now take into account that your senior dev probably also has more time to think about the big picture, to improve all those little things that were a nuisance in the past but are now a mere "Claude, fix that" in a worktree away. I would not bet on the assumption of this article.
This is an excellent article. Thought-provoking, and I'll remember the 2 loops from here on.<p>> What if we had one system just for speed?<p>Like a beta? It would take incredible discipline from the business and customers not to consider that production software and demand 99.99% uptime and bug-free operation.
The article is all about technical communication — diagrams, architectural discussions, code snippets. The more difficult piece to communicate is product sense: which user feedback indicates a genuine trend, when a feature request is a workaround for an underlying issue vs. the issue itself.<p>It’s not difficult for seasoned engineers with deep technical backgrounds to whiteboard a distributed system in twenty minutes. It takes hundreds of customer discussions, invalid hypotheses, and years of experience building judgment about whether this is the right solution at the right time.<p>The engineers who compound quickly have usually built their skills in both areas concurrently. Communication of the latter is more challenging due to the judgment-based foundation beneath it.
<i>>We could call this the ‘Speed’ version of the system. It’s not meant to be understandable, the goal is getting things good enough to take it to the market for feedback.</i><p>AI is actually quite awful for prototyping, because it makes it far too easy to add random crap to your "prototype" without any specific intention. This quickly transforms the prototyping process from something that's high-level and geared towards building the mental model of the real system into something akin to copy-editing a random piece of software without any coherent mental model involved. Moreover, prompting allows you to gloss over some essential complexity of the task without getting any notion of the scope of the effort of actually doing it. In other words, people end up failing to make necessary decisions and simultaneously get bogged down with unnecessary ones.<p>In short, fast feedback loops are only useful if there is actual feedback involved.
I found that the proposers of features "want everything" because they don't know what is critical - they're therefore totally unwilling to accept anything other than "the full monty". So as a senior developer you cannot propose any faster route.<p>As you might imagine, a lot of these ideas fell by the wayside but we had to develop them in full.
The article covers that under the imperative of discovery. Learn what works quickly because you may not know what the core part is otherwise.<p>There's ways to navigate it.
It's the XY problem. The customers tell sales they want Y, rather than stating their problem X which they think Y will solve. Sales runs breathlessly to the dev team and demands we implement Y. Now scale this up to 10 customers or 100 customers. They all have the same X but come up with independent Ys.<p>You see the problem immediately. Sales/marketing didn't do their job sussing out what X is and wastes dev time with Ys. And worst of all, write once, support forever. Each one off Y has to be maintained for the special snowflake customer that uses it. None of the Ys actually work well for all the customers with problem X so you end up drowning in "technical debt" spent to create them all.<p>If your marketing department leads the company, I've discovered the best option is to just quit. Go find a job at an engineering company.
This is why the first thing you should do as a dev when somebody tells you that they want Y feature is to ask why.<p>Non-developers have no clue WHAT they want, but they know WHY they want it. The why is much more important to know, because the requestor has no clue how software works and imagines bad solutions.
I'll never forget: I was fired from an aerospace company for designing a new system that was basically a linear diagram, compared to the highly complex nightmare web of mystery my boss had designed for our current system, which simply didn't work.<p>I was given a chance to redesign it, and when I failed to add the expected complexity I was let go.<p>To this day I reckon the higher-ups are still having the same age-old problems and excuses from their underlings regarding a system with an utterly useless design. The guy in charge, rarely in the office, calmly explaining it's a fantastic implementation; the new coders we are getting just can't work with it / operate it well because they suck.<p>I am not bitter; if anything it just made me terrified of being C-suite at any large company, knowing it would be almost impossible to understand why your company is failing.
The safest answer a sales person can give is "yes".<p>The safest answer an engineer can give is "no".
I’m curious about this scale vs speed distinction.<p>Every codebase includes parts that are more experimental, and parts that are more core. My sense is that AI can help on both of these fronts (i.e. building rapid prototypes on the fringes and hardening the core with better test coverage).
> I don’t like senior developers who like trying new technology. I like ones that avoid more complexity.<p>I guess the author has never worked on a dog shit system with no tests at all and constant downtime.<p>I have worked with “complexity averse” engineers who would rather fix the edges over and over again, than roll up their sleeves and just get the job done.<p>I just don’t believe that using new tools is at odds with avoiding complexity.<p>Sometimes you have to take it to the chin, and get to use the new shiny thing along the way to move much faster.
<p><pre><code> “I found this new tool and it’s pretty cool ...”
</code></pre>
yup<p><pre><code> “This company <company totally unlike the one we’re in> does things this way, so …”
</code></pre>
agreed<p><pre><code> “Here, look at this HackerNews post that says this is best practice, we should probably …”
</code></pre>
sir/m'lady, we're at war from now on. This is the only reason I come here. Of course I don't take everything uncritically, but the amount of experts on this forum is damn high, and this is the only forum in the last 10 years that has helped me grow so much
I may be missing something, but the "left" and "right" loops strike me as slightly different words for the same exact thing.<p>The company provides (offer | service) to the (market | user) and receives (feedback | payment).<p>The service IS the offer, the userbase IS the market, and payment IS the feedback signal.<p>Right?<p>EDIT - expanded on original comment to add:<p>The author's point might be lost on me but seems to be that framing things with one of those sets of labels vs the other may correspond to use of "complexity" vs "uncertainty" as the element targeted for reduction, and choosing those labels carefully in turn correlates to "senior" devs' persuasiveness in prioritization battles with product owners. To which my response would be, "maybe?". (shrug)<p>I'm not a copywriter by trade but I care about words and may have just been nerd-sniped.
Hits home for me; although a lot of times adding complexity is not about your opinion as a senior developer but rather what the business wants. I've definitely worked jobs where I helped create microservice kubernetes nightmares, and while this was partially my fault for wanting to play with shiny things, a lot of this was just "this is what the business wants and you have the expertise to do it", and I'd kinda shrug and go OK. I worked one job (small business) where an executive once leveled with me that the reason <i>they</i> wanted the complexity is because it looked good to investors, not because it was an actual need.<p>FWIW though the idea about a "speed" product and a "stability" product isn't new. We used to call it "prototyping". I don't know when/how that disappeared from the collective consciousness. "have a space where we can build things fast with horrible practices" isn't some AI era innovation, it's what smart companies have done for decades.
While I agree that adding code contributes to complexity and is problematic, there is lots of code in existing code bases which is overly complex due to outdated past requirements or less-than-perfect human coders. The current flood of AI-driven security fixes demonstrates that AI can be pretty good at detecting security edge cases. It is not inconceivable to use it to also reduce code complexity.
One could say that in order to be a senior developer in any area, more-than-good communication skills are required.
> Senior developers care a lot about stability<p>I saw this yesterday<p><a href="https://trinkle23897.github.io/learning-beyond-gradients/" rel="nofollow">https://trinkle23897.github.io/learning-beyond-gradients/</a><p>They are very remotely related yet somehow very close.
> I don't like the kind of senior developer that says "I found this new tool and it’s pretty cool ..."<p>Remember that the first half of this statement, the part listed here, is great. I love playing with new tools.<p>The only bad part is the implicit bit after the dots: "we should use this in our product." You don't want cool things anywhere near your product, unless the cool thing is that they remove complexity.
Good read. The big elephant in the room though: you likely won't purely hand-code the Stable version for much longer. So where's that split? Prototype vs. Prod? Feature Flags? Canary? 2 codebase nightmare? All of this already exists.<p>The message that hits for me is that of AI being a <i>destabilizer</i> while simultaneously being an <i>accelerator</i>. The Speed/Scale suggestion won't address this. A codebase no one understands, growing at machine speed won't go away just because you drew a box around it. The fix is likely more mundane stuff like process and role shifts, smaller PRs, tests, tooling, ownership principles.
I feel like I was totally on board until the conclusion about one fast system and one stable one. It's not really possible in practice, once a customer starts paying for something, even a vibe coded app by a sales person, it's now a stable system.<p>The thing breaks, the salesperson says "can you check this out?" then disappears and we're back to where we started.<p>I don't even find this very new: many companies I've been at have tried to spin-off a "fast" team to sell stuff.
Survivor bias. The ones who did were fired.
It sounds like a perfect idea on paper until you notice that junior devs will not be able to learn about stable code. Unless AI gets good enough to write stable code, or good enough that no human has to look at the code, the next generation will face a bigger problem than now. Well, it's AI that started it, so let's make AI take responsibility... Oh, they can't. Now what?
I partly agree. Agents are not going to replace senior devs, precisely because of the internal context and the decision making that comes with it.<p>But senior devs were also expected to have a compounding effect even pre-AI: writing a single doc, refactoring legacy code to make it extensible, building security frameworks specific to the project, and many more. All of these compound the dev team's output.<p>I think the same will happen with agents working on an org-specific paved path set by senior devs.
They will replace (and already have replaced) low-performing senior developers, because a single high-performance senior developer can do a lot more than they used to.<p>I have personally noticed how multiple people can work on the same problem, but the more senior developers get way more mileage out of AI compared to those that are early in their careers.<p>Another difference I've noticed is how many agents one can keep running without losing awareness.<p>It generally just raised the bar on what management will expect from developers, which will result in a shrinking workforce. The only ones that will benefit are AI companies and upper management, since fewer employees means less management, so lower management will get screwed too.
> will result in a shrinking workforce<p>Jevons paradox is already rearing its head, I've seen data suggesting open roles in tech are at their highest since the post-pandemic slump [1]. If you're a senior leader at a company and your engineers are now capable of multiple-times more productivity, is the logical choice to fire half, or set way more ambitious goals? One assumes engineers are hired because their outputs are worth more than their cost. If outputs, at least for those capable of wielding new tools, are higher, so is the value of that employee to you.<p>The universal thing I'm hearing from friends at small-mid-size tech companies, and experiencing myself, is that there is <i>way</i> more work and demand for it from senior leaders than they're capable of with their current teams.<p>1: <a href="https://www.ciodive.com/news/tech-job-postings-hit-3-year-high-april/819778/" rel="nofollow">https://www.ciodive.com/news/tech-job-postings-hit-3-year-hi...</a>
I think it's possible that this idea would work as a communication/branding strategy for senior developers, though I don't think it's strictly true.<p>I am really skeptical of arguments based around "I can do things the model can't" because that space of things is not very large and is getting smaller every day.<p>The opportunity to not merely cling on to what we have another year but to grow is to say "together, the model can manage so much more complexity than before that we can do things that were not previously possible."<p>We haven't identified too many of those things yet, but I am certain they are coming.
I tripped over the double-entendre of the teaser quote and then found it ironic that the author is a copy writer.<p>>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”<p>> And so, to me, a copywriter, what’s happening here is that the same message is meaning two different things to two different audiences.<p>I couldn't tell whether to parse this as "We will be faster without those slow developers", or more cynically as "We don't need developers to slow us down; We can now be slow with ai agents". I suspect that with creeping complexity the latter reading will hold up better for large projects.
The polarization of speed vs scale concern on team is interesting.<p>Maps to what we believe on our team - functional vs non-functional. AI ships functional features fast but developers are more important than ever in making sure the non-functional aspects are taken care of
Interesting article. I appreciate the range of perspectives here, and the overall pitch to keep the most experienced in frame along side new-fangled advancements (AI).<p>The "speed" loop reminds me a lot of RAD. In fact, AI might be _the_ thing that helps us deliver on RAD's promises from decades ago.<p><a href="https://www.geeksforgeeks.org/software-engineering/software-engineering-rapid-application-development-model-rad/" rel="nofollow">https://www.geeksforgeeks.org/software-engineering/software-...</a>
Speed… speed… velocity… speed. All I hear about these days. Every meeting.<p>Honest question does high velocity / first mover <i>ever</i> really pay off these days?<p>I don't feel like having the first AI slop to the market has actually paid off for anyone? Am I wrong? Am I missing something? Am I out of touch?<p>The way I see it, first movers do a lot of work proving the idea works, and everyone else swoops in with better product or at least at a cheaper rate.<p>Beyond that, let's take the company I work for, for example. We have an ingrained and actually relatively happy customer base on a <i>subscription model</i>. I feel like the only thing increased velocity can do is rapidly ruin their experience.
I stopped communicating my experience-derived lessons when I discovered that 1. it cheapened the perception of "my genius", and 2. nobody wants to hear it anyway. From non-tech workers for whom I'd write a bat or bash script for, to engineers for whom I'd debug a complex race condition - they all just want the answer and care nothing about how I got it.<p>Fine, then, I'll keep the experience to myself.
They fail to communicate in the same way we fail to download a copy of "the truths of the world as we know it" into every child's brain. It's easy to say "look both ways when you cross the road", but speech is so one-dimensional. It's a slow tape reel, and that's just the encoding.
I enjoyed reading this, and I agree with the underlying message: communicating better with our audience.<p>I think the framing started in the right path and then took a slightly wrong turn.<p>Both loops presented benefit from being tighter, faster. One to take a system to a “stable” (maintainable) setpoint quickly. The other to handle uncertainty.<p>And the additional insight about splitting the systems to better adapt to AI… we’ve described spikes for years, well before AI went mainstream.
It seems to me that the author fails to extrapolate on the effects of recursive self improvement. The only things preventing 95% engineer obsolescence will be compute/energy constraints and the speed of adoption, which can take years for large infrastructure companies. But it's coming.
Cuts both ways. If supply chain attacks keep pace with recursive self-improvement, everyone’s going to be working in air-gapped facilities. Departments also need to be air-gapped from one another. And each team air-gapped. And so on.<p>There’s a speed limit, because the faster you go the less room for error you have. It’s the same as being heavily leveraged with debt. If you have a cash investment and it drops by 50% you can just wait. If you’re leveraged 100-to-1, a 1% drop forces liquidation and wipes you out.
I agree with the author's premise - that one feedback loop optimizes for speed, and the other for scale - but I don't think the market is bearing out the conclusion - that AI should be utilized to enable more rapid experimentation, where we better scale what works.<p>Many vendors seem to be learning (or not learning, but just throwing their weight against it anyway) that adding hastily-generated AI features causes customer dissatisfaction, as more people brand the features "slop".<p>In the best case, the users give the company more chances. Infinitely more chances.<p>In a worse case, the users assume the new feature will always be bad, given their first impression. It's hard for a vendor to make people reconsider a first impression.<p>The absolute worst case is that AI enables a new market, but the first attempts are so poor that the first movers make people write that market off as a dead end, leading to a lost opportunity.
I actually think the article makes some pretty interesting points. It's not about the name of it though.
> this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can<p>And push an insurmountable pile of technical debt onto the successor.<p>Well, yeah, I understand the idea and I'm all for it: the less code the better, the less changes the better.<p>However in certain industries it is no longer a right approach for the job. In modern frontend development if you did not update your codebase for like a couple of months, this codebase falls so much behind that it becomes way more expensive to push an upgrade as compared to daily minor updates of packages. Yeah, I hate this as much as you do, but this is the pace frontend is moving at, and if you don't follow, you will mount technical debt.
I think that if this becomes an actual problem, there will be such a massive incentive to add AI to the scale/compression/risk avoidance side that there will be automated tools specialized in that kind of work.<p>I feel like this is shooting from the hip from a single point of view from some semi-large corpo.
I do a bit of both... I pay attention to new tools, libraries and languages. But will rarely recommend them initially. That said, I also tend to fight complexity to an extreme degree KISS/YAGNI are my top enterprise development keystones.
Depends on the product, but in many cases you can't actually decouple the complexity, because the complexity is the product. There are times when the archaic flow needs to work for some stupid compliance reason.
I feel this is about as accurate and relevant as if I were to write an article on senior copywriters.
There's a lot of opportunity in being the manager who can still see it
Shouldn't a senior developer strive to eliminate complexity while increasing velocity? The two do not contradict. Reducing complexity can increase velocity.
This is well-put, but the problem comes when you’ve got leadership looking at what appears to be a fully-functioning version of the product that the market is clearly indicating to them is sufficient to drive revenue. Budgeting the 6 weeks or whatever to translate from “the working version” to “the trustworthy version” is a hard pitch.<p>This is why part of a senior developer’s job is designing and developing the fast version in a way that, if it goes into production, won’t burn the building down. This is the subtle art of development: recognizing where the line is for “good enough” to ship fast without jeopardizing the long-term health of the company. This is also the part that AI is absolutely atrocious at - vibe code is fast, that’s the pitch, but it’s also basically disposable (or it’s not fast - I see all you “exhaustive spec/comprehensive tests/continuous iteration” types, and I see your timelines, too). If you can convince the org that’s the tradeoff, great, but I had a hell of a time doing it back when code was moving at human speed, and now you just strapped rockets onto the shitty part of the system and are trying to convince leadership that rocket-speed is too fast.
this almost reads as a rephrased version of: <a href="https://grugbrain.dev/" rel="nofollow">https://grugbrain.dev/</a>
>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”<p>No-one says this.
This is engagement bait. I almost fell for it.
Just wanted to say this writeup made tangible a real thing - a truly clarifying way to think about it.
> Your thoughts, senior software developer?<p>The senior should also start using AI to increase the amount of work done to stabilise the system, in a careful manner. More benchmarks, better testing, better safety net when delivering software, automated security reviews, better instrumentation, and so on.<p>> And this is how AI affects the two loops<p>There should be another image illustrating the amount of mitigation done on the senior side, red-/blue-team style.
The loop on getting slop out to market quick in order to get feedback is already flawed. If you don't understand the problems of your customers well enough to come up with a coherent vision for how to solve them you shouldn't be the one doing the product design or making high level business decisions in the first place.<p>There's a place for prototyping and experimental features but now agile has cultivated extreme learned helplessness and everything is an A/B test because there's no longer any ability to judge whether something is good or bad based on a holistic vision.
I can/have done this without AI and it tends to be disastrous. Management declares we need X fast. Okay, we can build that really fast, but it won't scale. Management says fine, just build it. We do. Management now wants to build Y fast. But wait, what about X? Nevermind, just build Y now. Okay, we're building Y, and X collapses... because it wasn't built to scale. Now we're being called in at 2 am to fix X while also expected to ship Y tomorrow. Sure, they'll glow you up and tell everyone what a hero you were for coming to the rescue at 2 am, but on that six-month performance review, the blowup is used as a reason to withhold raises and promotions. They don't lose any sleep of course, just you, the developer.
Irrespective of the linked post, let me say why I (being sort-of-a senior developer) fail to communicate my expertise. In no particular order:<p>1. I am discouraged or forbidden from devoting time to communicating my expertise; they would rather use it. Well, often, they'd rather I did the grunt work to facilitate the use of my expertise.<p>2. Same, but devoting time to preparing materials which communicate my expertise.<p>3. A lot of my expertise is a bunch of hunches and intuitions, a "sense of smell" for things. And that's difficult to communicate.<p>4. My junior colleagues don't get time off their other duties to listen to "expertise sharing", when it does not immediately promote the project they're working on.<p>5. Many of my junior colleagues lack enough fundamentals (IMNSHO) for me to share all sorts of expertise with them. That is, to share B with them I would need to first teach them A, and knowing A is not much of an expertise; but they're inexperienced, maybe fresh out of university.<p>6. My expertise may only be partially or very-partially relevant to many of my colleagues; but I can't just divide the expertise up.<p>7. For good reasons or bad, I have trouble separating my expertise from various ethical/world-view principles, which fundamentally disagree with the way things are done where I'm at. So, such sharing is to some extent a subversive diatribe against the status quo.<p>8. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I am apprehensive to talk about what I feel I actually don't know enough about - which may just result in my appearing presumptuous and not knowledgeable enough.<p>9. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I try to polish and complete my expertise before sharing it - and that's a path you can walk endlessly, never reaching a point where you feel ready to share.<p>10. Tried sharing some expertise in the past, few people attended the session, I got demotivated.<p>11. Tried sharing some expertise in the past, few people were engaged enough to follow what I was saying, I got demotivated.<p>12. Shared some expertise in the past, got a positive feedback, but then those people who seemed to appreciate what I said did not implement/apply any of it, even though they could have and really should have.
> I don’t like this kind of senior developer [...] not my wavelength.<p>Bro & I would not get along well =)))) But the article IS <i>good</i> stuff.
If you're a senior Go player and you think robots can't play, I'm suspicious of your expertise.<p>Literally what people thought after Fan Hui (2-dan) was beaten. For humans, software requires ingenuity and creativity. Computers can cheat that; in fact computers ALWAYS cheat that to beat humans. NTP (next-token prediction) as a method of cheating is slightly more general than, say, board evaluation, so it's less efficient for the same problem, but scaling laws show that with enough compute NTP can beat humans at chess (or most other arbitrary games, in real time).
The second part of the system that the author proposes will not work for most medium and small companies. From what I saw, the people that ran those companies (the owners, for example) looked at those devs like hacks trying to extort them for money. They were angrily grinding their teeth but put up with it, because they needed them to build the software that actually makes money.<p>Now, with so-called AI, they will mostly slap something kinda-working together in a few days and then maybe get hacked, or double-invoice some customer from time to time... They will learn of those problems the hard way. Or maybe they will not, because it will be a mostly-working emailing system and nobody will care if it loses 2% of the emails because of some bug.<p>Nevertheless, the Stable/Scale version of the software will either never happen, be seen as unnecessary, or only become a thing after a catastrophic failure.<p>Anyway, I do not think it will be like the author proposes; everybody cares about speed and money, and making money quickly without effort is the ultimate unicorn the entire world is after.<p>Those complaining developers just stand in the way.
Here's a mental exercise - do you immediately think you know what this command does?<p><pre><code> PING
</code></pre>
Junior developer: PING is used to check if a host is reachable over a network<p>Middle developer: PING constructs and sends ICMP packets to an address<p>Senior developer: what machine, what OS?<p>Junior manager: Don't care, ask a techie if you need to do something technical<p>Middle manager: Ask <techies name> about it, I know he has great experience with it<p>Senior manager: PING is used to check if a host is reachable over a network<p>Senior developers fail to communicate their expertise, because that expertise is developed and formed by <i>asking more questions than giving answers</i>, and managers fail to understand the capabilities of "their techies", because managers see question-asking techies as counter-productive, and attempt to route around them. Managers only want answers; developers know the value of asking deep questions.<p>Thus, AI.<p>(BTW, PING is a command that produces a distinct sound on the Oric-1/Atmos computers, and it is thus an onomatopoeia. I know this, because I am a Senior Oric-1/Atmos Developer who knows what lies at #FA9F, how it works, what the 14 bytes are for, and so on.. because I once asked the question, "how does PING go 'poooinnng' but ZAP go 'zap'?")<p>AI: <asks billions of questions in a second>PING is ..
Wow, I'm only done with part one and the author has pegged me to a T.
Interestingly, the article pits complexity management against uncertainty reduction.<p>But reduction is narrower than management, which is narrower than organization.<p>Also, uncertainty is part of complexity. Being able to isolate what is deemed predictable under clearly identified premises is the best that can be hoped for on that matter. It means that one strategy can then be applied to protect the stable core, and another strategy can be tried on what is unknown (known and unknown unknowns).
Probably because, unlike the master in an apprenticeship, a senior developer isn't an owner. This creates a situation where imparting knowledge means you have less time for your own packed stack of work.
I'm a "senior" developer.<p>Want me to communicate my expertise? Give me some time to actually do it.
As a senior developer, I achieved last night what I thought was impossible with all the anti-bot (including bot detection) tech that gatekeeps much of the internet.<p>An AI agent using a web browser like a human. I used various stealth technologies to achieve this. I set it off on a research task, and it saved me $30 on a purchase by finding the best price. It's Jeff Bezos's worst nightmare: visiting amazon.com and ignoring all the product placement ads.<p>It had multiple tabs open, did searches in multiple places, opened products and checked sites... it looked just like a human doing the same task.<p>This, I can assure you, would not have been possible without my expertise. I had to be very careful to remove all bot signals from the browser, including going to browserscan.net to check. Once that was done, most captchas were never shown to the agent. There is a NodeJS codebase involved that I wrote by hand.<p>I searched through the code of the browser automation framework I was using, looking for ways to make it look more human. I had AI help with this part, but had to confirm everything and pull the agent up when it suggested bad ideas.<p>Most of the work was architectural, including making sure my browser was easy for the agent to use.<p>I'm going to add 2captcha as a next step, to solve the few captchas it still encounters (as I still do sometimes as a human).<p>I'm thinking of open-sourcing it, but I'm not sure it's a good idea: if it became widespread, it might encourage the adoption of even more invasive anti-bot measures.
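The commenter's code is NodeJS and unpublished, so what follows is only a hedged sketch of the kind of "bot signal" scrubbing being described, written with Playwright for Python. The launch flag and the init script are well-known public tricks, not the commenter's method, and they are nowhere near a complete bypass: modern detectors also fingerprint canvas, fonts, TLS handshakes, and input timing, which is presumably why most of the real work was architectural.<p><pre><code> # Sketch: scrub the most obvious automation markers from a Playwright browser.
 from playwright.sync_api import sync_playwright

 with sync_playwright() as p:
     browser = p.chromium.launch(
         headless=False,  # headless mode itself is a strong bot signal
         args=["--disable-blink-features=AutomationControlled"],
     )
     context = browser.new_context(
         viewport={"width": 1366, "height": 768},  # plausible desktop size
         locale="en-US",
     )
     # Patch the loudest marker before any page script can read it.
     context.add_init_script(
         "Object.defineProperty(navigator, 'webdriver', {get: () => undefined})"
     )
     page = context.new_page()
     page.goto("https://www.browserscan.net/")  # the checker mentioned above
     page.screenshot(path="fingerprint-check.png")
     browser.close()
</code></pre>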
apex predator of grug is complexity<p>complexity bad<p>say again:<p>complexity very bad<p>you say now:<p>complexity very, very bad
This is copy. I'm only interested in content.
You can't force people to feel what you feel. One can (pr|t)each without experiencing; the other can only mimic or rebel. That's how a cult is formed.
In 2026 the answer is "job security"
The unspoken observation on why this happens: it is almost always political, a way for people to make themselves more valuable to the organization and harder to fire or lay off.<p>That includes gate-keeping behaviour such as not handing off knowledge, sham performance reviews to prevent ambitious juniors from overtaking them (even with AI), and being over-critical of others but absent and contrarian when the same is done to them.<p>That leverage <i>does not work</i> anymore in the age of AI, as having "expensive" seniors begging for a pay rise can cost the company extra $$$. So it is tempting to lay them off in favour of a yes-person who will accept less.<p>In the age of AI, I would now expect such experience to include both building and working at a startup, rather than being difficult to work with for the sake of a performance review.
FTA: “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”<p>Almost all business presidents, CEOs, and owners are thinking this. I guarantee you they are sick and tired of developers taking forever on every project. Now they can create the apps themselves.<p>My comment isn't meant to debate every nitty-gritty detail about code quality, security, stability, thinking of every aspect of how the code works, whether it scales, etc. All of those things are extremely important. However, most leadership never cared about any of that anyway. They only heard those as excuses for why developers took so long. Over the last decade they put up with it begrudgingly.<p>You know all the developers who complained about IT, cybersecurity, DevOps, and cloud architects getting in their way, saying that if they only had administrator access they could get everything done themselves because they're experts in networking and everything else? Well, those developers are about to have the worst day ever when every single person on the planet can generate code and will be "experts" in everything as well.
Now they *think* they can create the apps themselves. I say let every CEO and business administrator try; business will fail, everything will get shitty, and eventually somebody somewhere might learn something. Let 'em cook.
> Well, those developers are about to have the worst day ever when every single person on the planet can generate code and will be "experts" in everything as well.<p>And society is beginning to suffer from it. AWS alone managed to slop itself into outages twice within a year [1] (and I bet that's just the stuff that escalates into mass-visible outages, not the "oh, can't start a new EC2 instance of a specific type for a few hours" kind), and <i>a lot</i> of companies were affected.<p>It's always the same game: by the time the consequences of the beancounters' actions come home to roost, they have long since departed with nice bonus packages, leaving the rest to dig out of the mess.<p>[1] <a href="https://www.theguardian.com/technology/2026/feb/20/amazon-cloud-outages-ai-tools-amazon-web-services-aws" rel="nofollow">https://www.theguardian.com/technology/2026/feb/20/amazon-cl...</a>
> Ah, well, it can’t yet do the one thing senior developers still do. Take responsibility.<p>If only higher-ups would recognize that. Instead we see mass layoffs and restructurings left and right, and clueless higher-ups who clearly drank not just a bottle of koolaid but a barrel.<p>> The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.<p>Yeah... that doesn't fly. The beancounters don't care. The "speed" version works, so why invest even a single cent into the "scale" version? That's all potential profit that can be distributed to shareholders. And when it (inevitably) all crashes down, the higher-ups have all long since cashed out, leaving the remaining shareholders as bagholders, the employees without employment, and society to pick up the tab. Yet again.
It's all relative. There is no baseline for expertise in software. So instead, it's whatever self-serving quality some sociopath on the other end favors.
I don't necessarily disagree with this conclusion, but the way it is written has a lot of AI prose smell that was extremely distracting for me.
I’m inclined to take the author at their word that they’re a copywriter by trade.<p>I agree that the punchy staccato and the rhetorical questions smell AI-ish, but the way this person uses them, there’s, like, a payload each time. Versus LLM-speak, where the assertions are at best banal and more frequently just confusing.
I didn't get the AI vibe from it. At some point we are just going to have to get used to most stuff being written to some degree by AI.<p>There will be different shades of usage, and maybe we draw a line somewhere in there.
Also, the consumption of AI-generated text could be influencing the tone of how people write.<p>So even if AI was not used to write an article, it could "smell" like AI to someone who consumes less of it.
The written word is how people interact with LLMs. Clarity and precision in writing result in more effective prompting of LLMs. It is just as possible that leaning heavily on AI writing will be seen as a marker of not being natively skilled enough at writing to prompt LLMs effectively, because of the GIGO principle.
There's no fundamental reason that I <i>have</i> to read random blogposts from people I don't know. I do it today because I find it an enjoyable way to learn more about my profession and explore various perspectives on it. If I stop finding it enjoyable because too many people write their posts with AI, I'll stop reading these kinds of blogs altogether, in the same way that I (and I suspect many commenters here) do not read even the most lovingly crafted LinkedIn posts.
Let’s do the exact opposite of what this person is saying. Resist AI slop.
You have to be able to distinguish the scent of LLMs from the scent of Gary Halbert.
I'm either the biggest idiot in the world or this person is a terrible "copywriter". I found this post to be nearly unintelligible: "You can’t explain away someone else’s problem using your own problems." WTF does that mean? This would be a good place to put some very simple examples of what they mean, but they don't. Is that because they're trying to be succinct? Clearly not, as the post rambles on and on anyway. I hate posts that are both 1. not explaining their concept and 2. super long-winded. That's a problem.<p>Are we just trying to say, "use AI for prototyping and customer demos that don't need to be mature; use senior devs to develop and maintain the real products"? You could just say that, then...? Which I also disagree with as how AI should be used: AI is valid to include as a tool across all forms of development - it just should never be put in charge of production-level software (e.g. no vibe coding of mission-critical components).