Let's make it very clear - this is the original creator of Redis, or one of them.<p>He is not "your avg dev" and it took him 4 months with an LLM.<p>This is not a seal of approval for you to go and command all your developers to move fully to Claude Code/Codex/any other AI coding tool.<p>I'm looking at you - any avg CEO of a startup.
It's a pretty strong endorsement for the idea that coding agents, used skillfully by experienced developers, can further amplify their expertise.
Sure but the OP suggests that these were minor gains, and that this limited scope for gains was necessary in order to preserve the quality standard that's long been expected in that FLOSS community. We aren't talking about either a 10x productivity gain or one-shotting entire new features from scratch.<p>This is arguably a key quote: "Then, it was time to read all the code, line by line. ... I found many small inefficiencies or design errors ... so I started a process of manual and AI-assisted rewrite of many modules." We should not underestimate that step: reading code line by line might easily require <i>more</i> time than writing it from scratch.
Right, and those of us who advocate for a sensible approach to agentic engineering don't talk about 10x productivity gains or one-shotting entire new (production-ready) features from scratch either.<p>I remain unconvinced by the "faster to write it by hand than read it" arguments though. My experience throughout my career is that most people, myself included, top out at a couple of hundred lines of tested, production-ready code per day. I can productively review a couple of thousand.
> He is not "your avg dev" and it took him 4 months with an LLM.<p>To clarify, from TFA:<p>> even before LLMs the implementation was likely something I could do in four months. What changed is that in the same time span, I was able to do a lot more<p>The timeframe was 4 months either way; with LLMs he was able to do more work within that same timeframe.
I would add that the output was likely more as well, e.g. more thorough tests, documentation, etc.<p>I've been working on a database adapter for a couple of months using an LLM... I've got a couple of minor refactors to do still, then getting the "publish" to jsr/npm working... I've mostly held off as I haven't actually done a full review of the code... I've reviewed the tests and confirmed they're working, though. The hard part is that there are some features I really want when connecting from Windows to a Windows SQL Server instance that aren't available in Linux/containers. I don't think I'll ever choose SQL again, but at least I can use/access a good API with Windows direct auth and FILESTREAM access in Deno/Bun/Node.<p>FWIW: my final implementation landed on ODBC via Rust+FFI, so after I get the mssql driver out, I'll strip a few bits in a fork and publish a more generic ODBC client adapter, with using/dispose and async iterators as first-class features in the driver.
>He is not "your avg dev" and it took him 4 months with an LLM.<p>He's not, but his work here is obviously not average either.<p>Average dev work is plumbing and CRUD.
Sharing my current MO:<p>I start with a high-level design md doc which an AI helps write. Then I ask another AI - whether the same model without the context, or another model - to critique it and spot bugs, gaps and omissions. It always finds obvious-in-hindsight stuff. So I ask it to summarize its findings, paste that into the first AI and ask its opinion. We agree on a change, make it, and carry on this adversarial round robin until no model can suggest anything that seems weighty.<p>I then ask the AI to make a plan. And I round-robin that through a bunch of AIs adversarially as well. In the end, the plan looks solid.<p>Then the end-to-end test case plan and so on.<p>By the end of the first day or week or month - depending on the scale of the system - we are ready to code.<p>And as code gets made I paste it into other AIs with the spec and plan and ask them to spot bugs, omissions and gaps too, and so on. Continually using other AIs to check on the main one implementing.<p>And of course you have to go read the code, because I have found that the AI misses polish.
The discourse around AI is that we’ve unlocked a whole new unsupervised paradigm of development; but you’re basically describing how Google has built code for a decade, just with humans of different levels of trust instead of AI.<p>And I’m not saying that to poke fun at you (my workflow is essentially identical to yours), or at Google, but rather to say that there’s nothing new :)<p>AI is a fantastic accelerator of effective and ineffective workflows alike. It’s showing us which are effective and ineffective on way shorter timescales / in realtime!
How much faster/slower are you with that process compared to writing code yourself?
Developer of 20+ years here, can't give you an accurate multiplier but I am faster.<p>Because spotting holes in specs has never been one of my strengths. And working without technical colleagues much of the time, it's a boon to be able to "rubber-duck" my ideas with something that is at least more intelligent than plastic.<p>Grabbing multipliers from thin air, the coding bit may only be 2x faster with a poorer-quality outcome, but working out what's needed is a good 5x faster.<p>And yes, I'm using the same adversarial AI MO as @wood_spirit, combined with Matt Pocock's excellent /grill-me and /grill-with-docs skills [1] and Plannotator [2] to review the plans.<p>1. <a href="https://github.com/mattpocock/skills" rel="nofollow">https://github.com/mattpocock/skills</a><p>2. <a href="https://github.com/backnotprop/plannotator" rel="nofollow">https://github.com/backnotprop/plannotator</a>
I actually use LLMs a lot to rubber duck my problems and help develop plans. Then I manually code, to ensure my skills don't deteriorate. I feel like I'm a lot faster, with few of the downsides. Do you have any thoughts on this process?
Have you considered incorporating formal modelling?<p>Like:<p>[0] <a href="https://csci1710.github.io/2026/" rel="nofollow">https://csci1710.github.io/2026/</a> and <a href="https://forge-fm.github.io/book/2026/" rel="nofollow">https://forge-fm.github.io/book/2026/</a><p>[1] <a href="https://elliotswart.github.io/pragmaticformalmodeling/" rel="nofollow">https://elliotswart.github.io/pragmaticformalmodeling/</a><p>[2] <a href="https://quint.sh/" rel="nofollow">https://quint.sh/</a>
Thanks for sharing those. They look interesting.
Can't speak for GP or OP, but I see about 10x the output and 2-4x the value of what I would be able to get by hand. Within the gap between 2-4x and the 10x is really a lot of design documents, user/dev documentation and testing that I might not have rolled to nearly the extent that I do/get when using AI.<p>I haven't been using multiple AIs adversarially as OP, but might consider giving it a try with Codex and Opus. That said, my AI workflow has been pretty similar... lots of iterations on just design, then iterations on documentation, testing, etc... then iterations on implementation, testing, validation and human review in the mix.<p>My analogy is that it's really close to working with a foreign dev team, but your turnaround is in minutes instead of days, where it's much more interactive.
I'm seeing the same, with the gains being largely from documentation.<p>It feels a bit strange making "dev" documentation though, since it seems somewhat redundant/superfluous. I fully suspect nobody is going to read it at this point.
For me, sometimes faster/sometimes slower, but there are a lot of other benefits besides speed:<p>* I can work in code I'm not familiar with much easier<p>* LLMs often identify confusion or uncertainty upfront, so I can address it earlier.<p>* I'm much less mentally taxed so I can go for longer at my top end.<p>* Meetings, disruptions, end of day is WAY less critical since I can lean on the LLM to get back into things.<p>* I can do something else productive while the LLM is running. Bug fixes, documentation, PR reviews, etc.
Having tried something similar, the perceived speedup does not, in the steady state, last.<p>To get a quality, lasting result you ultimately have to carefully study everything; otherwise you quickly accumulate cognitive debt, and the speedup soon shrinks as you're constantly having to revisit the initial approaches.
This sort of "spec-driven development" was the USP behind AWS Kiro: <a href="https://kiro.dev/docs/specs/" rel="nofollow">https://kiro.dev/docs/specs/</a><p>> <i>And of course you have to go read the code because I have found it that AI misses polishes</i><p>Since you mentioned using other agents, do you get mileage out of code reviews with another agent polishing the unpolished bits? My colleagues swear by it, though I personally remain skeptical about its value without a human reviewer.<p>> <i>Then I ask another AI</i><p>Maybe synthesis-antithesis-thesis works better in applied computer science... <a href="https://en.wikipedia.org/wiki/Dialectic#Criticisms" rel="nofollow">https://en.wikipedia.org/wiki/Dialectic#Criticisms</a>
Reviewing 22,000 lines of code, even from antirez, with this complex of a feature set and minimal PR description sounds like a nightmare. One starts to see why major open-source software like Postgres tends to be developed on a mailing list, with intermediate design decisions discussed by the community, separate patches for different related features, incremental review, and then a spaced release cadence.
The code is 5,000 lines in total, comments included:<p>~2,000 lines for the sparse array.<p>~2,000 lines for the t_array commands and the upper-layer implementation.<p>~500 lines of AOF / RDB code.<p>All the other stuff is tests, JSON command descriptions, and the TRE library under "deps".
I might be the outlier, but this PR feels like heaven to review. It's a complete, all encompassing PR that I can work through with the entire context right in front of me.<p>If the initial development bar is relatively high, it's far, far easier to identify flaws and gaps when you have the whole thing in front of you all at once.
I think the point GP is making is that this is a PR that smells like a solo dev working on their own project, not how a community-driven project adds major new functionality, although I'm sure docs and descriptions (or at least a discussion of tradeoffs and design decisions, if not ADRs) exist <i>somewhere</i>, just not linked handily from the PR. There is a lot of explanation in the blog post and PR, but it's unilateral-looking.<p>c.f. valkey and others
Redis was built completely in this way since the start. I believe this is a better way to create software. Compromise in design is, in my opinion, something to avoid: feedback is important, but oftentimes a single person who has studied the problem a lot and has design taste can come up with a great solution. Mediating between solutions, even between two stellar solutions A and B, will not produce a C solution that is better, since you can't produce such a solution by interpolation. It is simpler to damage A and B. And: it is rare that in a big set of people all have stellar ideas, so you often have to mediate with people having poor ideas as well. Not worth the effort for the way I'm wired. What works better for me is to provide hints about what I'm doing, then I receive feedback, and sometimes there are really great ideas in this feedback, and I incorporate the parts I like.
Thanks, I think I'm all caught up now. The timeline is like this if I understand correctly: your successors (Yossi Gottlieb and Oran Agra) explicitly announced a new governance model in 2020, saying the project had "outgrown the BDFL-style of management" and that they wanted to "promote more teamwork and structure". With the relicensing in 2024, however, external contributors with five or more commits to Redis dropped to zero in the first six months (basically, community contribution collapsed). In late 2024, you came back in the role of "Redis evangelist" and a year ago there was an additional licensing change, adding AGPLv3 as an option (8.0's tri-license). So now redis has your steady hand on the wheel again.<p>I was confused because the last time I checked on things, it was still about fostering community input and advancement but not necessarily consensus. Things have tipped back in the original direction since then. I don't think "Redis was completely built in this way since the start" is completely accurate, but also the community effort under the new governance model never got very deeply entrenched while you were away.
First of all, Redis is amazing, and your 4-month development process speaks to the fact that you've already designed and verified correctness super thoroughly.<p>... just speaking as someone who sometimes has to review very long PRs, though, I feel like 25% is a roughly normal level of "signal to noise." 5,000 lines of core logic is a LOT, and the tests and dependencies do still need to be read.<p>EDIT: I feel like the problem, as a reviewer, is processing 4 months of intensive research/development and providing useful feedback. At that point, there's probably not much major input you can have into the core architecture or strategy, so you're probably not providing much more than a bugbot would.
> At that point, there's probably not much major input you can have into the core architecture or strategy<p>Sure you can? In this concrete case, Redis is very "flat" — there's the data structure implementations, and there's the commands that use them. 1+N. You could have feedback about the data structure (i.e. whether it's optimal for the use-cases); or about any of the commands (i.e. not just their impls, but also whether they're the best core API surface to lock in long-term, or even whether they're worth including at all.)<p>Any given feedback would necessitate fairly limited rework to address, as you're either modifying the data structure (and its tests) or a command (and its tests and docs.)
I think where we went wrong in understanding this PR is in the assumption that it's designed to invite review because that's how a lot of other team- or community-driven projects work.
Postgres and Redis are dramatically different projects with radically different stories, contributions and development teams.<p>Virtually all major Redis features are a solo job of the post author.<p>By the way, reviewers are paid good money for this and know the setup.
Oh wow, I didn't realize that Redis is still mostly just authored by antirez! (My understanding is that he had left for some time and then returned to the project.) That is, honestly, pretty amazing. Well, redis is great and clearly it's worked out.
In short, Redis can't be trusted any more.<p>Who is going to do an LLM free fork?
Closely matches my own experiences with current SOTA AI. Extremely useful collaborator, far from being a replacement for human intelligence and creativity.
I like to say, AI is the duck programming duck I always wanted
LLMs are the insensitive Asimovian robots I’ve always wanted, who translate and do the hardest part of my job: ensuring my emails are polite and none of my true thoughts or feelings are revealed…<p>Now I just need a way to protect my chats from any potential discovery, and <pew pew> business’ll be easy.
There are projects that I develop mostly without looking at the code, owning the concepts, algorithms and ideas, asking questions and giving hints, and especially owning the <i>product</i>. But not for Redis, not yet at least. When in the future this becomes possible, server software, the way it is developed today, will be over. I bet there will still be projects and repositories, as the accumulation of features, fixes and experience will still be worth it, but the role of programmers will be very similar to what Linus has done so far for the kernel. And for certain projects I'm developing, like the DeepSeek v4 inference engine, I'm already working like that.
Thanks for adding this. Excited about array/regex, also very interested in your experience using LLMs to stretch your abilities. There are many of us laboring quietly on various projects attempting the same. "Vibe coding" (and the backlash) doesn't really capture how we work.
I definitely don't consider how I've used agents as vibe coding at all... I'm much too involved and validate/verify/review everything.
The problem with "vibe coding" is that the author who coined the term gave it a very specific definition (after all, it's his term): writing software without looking at the code, just "vibing".<p>Then it quickly lost its original meaning as people started using it for virtually all forms of AI-assisted coding.
Couldn't some of the use cases presented for this be accomplished with ZSETs? I get the performance angle, but it seems that this could have been accomplished without the new API surface by selectively optimizing ZSET storage for dense values (in the same way that Arrays selectively use sparse representations).<p>The RE component is interesting, but as commentary here has noted it seems orthogonal to the array data structure (i.e., usable on others as well). Does this not make more sense to accomplish with Lua scripting? Or if performance of Lua is an issue perhaps abstracting OP to be composable on top of any command that returns a range of values.<p>I say this with reverence for Antirez as the expert in this space, but some of this new feature set feels like the sort of solution that I tend to see arise from LLM-driven development; namely creation of new functionality instead of enhancement of existing, plus overcomplicating features when composition with others might be more effective.
Unfortunately not; sorted sets are actually a bit on the other side of the spectrum: they are semantically sound, but absolutely wasteful because of the <i>combined</i> skiplist + array. Also, if the underlying representation is not an array, range queries and ring buffers will never be as efficient and compact as they should be. In theory you can do everything with everything, but segmenting what each API can do allows you to exploit the use cases to provide the best underlying implementation.
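To make the trade-off concrete, here is a rough, hedged sketch (not code from the PR) of the kind of ZSET emulation being discussed, using hiredis and only existing commands; it works semantically, but every sample is carried by the full sorted-set machinery, which is the waste described above.<p><pre><code>/* Sketch under assumptions: the key name "temps" and the index:value member
 * encoding are made up for illustration; ZADD/ZRANGEBYSCORE are the real,
 * existing commands. */
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* Score = sample position; the member must stay unique, so the value
     * is encoded together with its position. */
    for (int i = 0; i < 1000; i++) {
        char score[16], member[64];
        snprintf(score, sizeof(score), "%d", i);
        snprintf(member, sizeof(member), "%d:%.2f", i, 20.0 + (i % 10) * 0.1);
        redisReply *r = redisCommand(c, "ZADD temps %s %s", score, member);
        freeReplyObject(r);
    }

    /* "Range query" over positions 100..199. */
    redisReply *r = redisCommand(c, "ZRANGEBYSCORE temps 100 199");
    if (r && r->type == REDIS_REPLY_ARRAY)
        printf("got %zu samples\n", r->elements);
    freeReplyObject(r);
    redisFree(c);
    return 0;
}</code></pre>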
antirez: i'm curious, with the final code, have you experimented with effectively one-shotting the final result? i wonder if we can get there with GEPA, and maybe there's something we can learn in how to elicit/prompt these models to get what we want.<p>or maybe the conclusion is that model providers need to clean up their training data!
Anyone know how to get the specification mentioned in the blog post? Don't see one in the linked PR.
Salvatore really wants to popularize the term Automatic Programming/Coding it seems. (<a href="https://antirez.com/news/159" rel="nofollow">https://antirez.com/news/159</a>)
<a href="https://en.wikipedia.org/wiki/Automatic_programming" rel="nofollow">https://en.wikipedia.org/wiki/Automatic_programming</a> It's an acknowledged term in computer science, describing any mechanism whatsoever of auto-generating code from a description at a higher level of abstraction. Of course LLMs are highly unusual in being non-deterministic and having a surprisingly broad scope, but this does not make the term inapplicable.
I keep finding myself trying to minimize the words needed to describe the same thing as well, since we find ourselves doing "that" operation more and more over time.<p>Maybe shortening the term to "auto-code" would help, though.
Thanks for the write up. Always interesting to see how very senior developers interact with AI these days.<p>@antirez: Introducing a regex feature that late into the project for a seemingly unrelated feature feels a bit weird? Can you explain more your rationale on that? thanks!
Once I realized arrays were a great fit for text files, many use cases I could conceive were always limited by the fact that we need to grep on files. So I thought: what is the AROP equivalent for files? ARGREP. Then I made sure to add both fast exact matching and regexp matching, so that depending on the use case the best tool could be used. I then discovered that for many OR-ed strings, regexps could be the faster way if well optimized. And then I specialized TRE a bit.
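Sketching what that usage looks like from the client side: the ARGREP command exists per the post, but its exact argument syntax isn't spelled out in this thread, so the call shape below (key, then pattern) is only an assumption.<p><pre><code>/* Assumed call shape for illustration; check the PR for the real ARGREP
 * syntax. The key name "app:log" is made up. */
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* Grep the array of text lines stored at "app:log" for a regexp of
     * OR-ed strings, the case the TRE specialization targets. */
    redisReply *r = redisCommand(c, "ARGREP %s %s", "app:log",
                                 "timeout|refused");
    if (r && r->type == REDIS_REPLY_ARRAY) {
        for (size_t i = 0; i < r->elements; i++)
            printf("%s\n", r->element[i]->str);
    }
    freeReplyObject(r);
    redisFree(c);
    return 0;
}</code></pre>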
It feels like Redis is becoming a small database, which seems to make it more convenient to use. Could you add more examples that clarify where the boundary should be?
Well, Redis is a data structures server, and has very complicated and edgy data structures like the HyperLogLog, so I have very little doubt that a fundamental data type like the Array will fit :) Also the actual complexity added is mostly two C files that are quite commented and understandable.<p><pre><code> wc -l t_array.c sparsearray.c
2012 t_array.c
2063 sparsearray.c
4075 total (including comments)
</code></pre>
Sure, there is also the AOF / RDB glue, the tests, the vendored TRE library for ARGREP. But all in all it's self-contained complexity with little interaction with the rest of the server.<p>A quick note: if we focus only on that part of the implementation, skipping tests and persistence code (which is not huge), 4075 lines in 4 months is an average of about 33 lines per day, which is quite low.
I’m a big fan of your work, and I honestly didn’t expect to receive a reply from you. Thank you.
Also, thank you for pointing out exactly where I was misunderstanding the issue.
In the past, I used Redis for temperature measurements in a smart farm project. I used Hashes back then, but it seems like Array would fit that use case much better.<p>This looks like a very useful feature. Thank you again for the reply.
The use of C stdlib localization functions (toupper, mbrtowc, etc.) makes me wonder whether there will be some regex behavior differences between systems or locales.
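For what it's worth, here is a small standalone C sketch (not Redis code) of why that concern is plausible: mbrtowc() and towupper() consult the process's LC_CTYPE, so the same byte sequence can decode, or fail to decode, differently depending on the locale the server happens to run under.<p><pre><code>/* Standalone illustration of locale-dependent behavior; the locale names
 * used here may not be installed on every system. */
#include <stdio.h>
#include <string.h>
#include <locale.h>
#include <wchar.h>
#include <wctype.h>

static void try_decode(const char *label, const char *bytes) {
    wchar_t wc;
    mbstate_t st;
    memset(&st, 0, sizeof(st));
    size_t n = mbrtowc(&wc, bytes, strlen(bytes), &st);
    if (n == (size_t)-1 || n == (size_t)-2)
        printf("%-12s: decode failed\n", label);
    else
        printf("%-12s: U+%04lX, upper U+%04lX\n",
               label, (unsigned long)wc, (unsigned long)towupper(wc));
}

int main(void) {
    const char *e_acute = "\xC3\xA9";      /* "é" encoded as UTF-8 */

    setlocale(LC_ALL, "C");                /* ASCII-oriented: these bytes may
                                              fail or map byte-per-byte */
    try_decode("C locale", e_acute);

    if (setlocale(LC_ALL, "en_US.UTF-8"))  /* decodes to U+00E9 */
        try_decode("en_US.UTF-8", e_acute);
    return 0;
}</code></pre>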
Is this an apologia since the PR is +22,212 -34?
AI is a fantastic co-pilot, but you still need to know how to fly the plane when the edge cases start hitting the fan.
Is it possible to see the specification file you created and used for AI assisted development?<p>Very cool anyway! Can I expect a youtube video about this soon?
On Safari mobile it's a page with the title header and a footer. There's no content rendering.
Got a few questions:<p>- The project essentially spans almost 3 different (albeit minor) generations of LLMs. Have you noticed major differences in their personas, behavior, or output for this specific use case?<p>- When using AI for feedback, have you ever considered giving it different "personalities"? I have a few skills that role-play as very different reviewers with their own (by design conflicting) personalities. I found this to improve the output, but also to be extremely tiring and to often have a high noise ratio.<p>- When, if ever, did you feel that AI was slowing you down massively compared to just doing it yourself (e.g. some specific bug or performance or design fix)? Are there recurring patterns?<p>- Conversely, how often did the AI have moments where it genuinely gave you feedback or ideas that wouldn't have come to you?<p>- Last: do you have specific prompts, skills, setups, etc. for working on specific repositories?
1. The huge jump from Opus to GPT 5.3. Game changer. GPT 5.4 and 5.5 were better, but only incrementally better.<p>2. Nope, I don't give them much personality, but I use subtle prompt differences to maximize certain responses I want, to make the model focus on a given detail or act with a specific kind of engineering mindset.<p>3. It never happened that the AI slowed me down, since I always had the full context and code detail of what was happening in mind. I believe that this happens more when you don't have a clear idea. Also, GPT >= 5.3/4 is not the past generation of models; it is very hard to trap it in a situation where it seems unable to understand what you mean.<p>4. A few times the AI provided fresh insights that I really liked. Most of the time it was the other way around. Certain implementations were written by the AI at a very impressive level of quality.<p>5. I don't use general skills; I build skills with deep search when needed for specific projects, and build an AGENT.md that works as a knowledge base as I work with the AI. One thing that I use a lot, when there is a very complex problem, is to tell GPT that I have a friend called Machiavelli who is an incredible computer scientist, and to write him an email in /tmp/letter.md with the problem we are facing, and that I'll try to get a reply. Then I ask GPT 5.5 Pro on the web with extensive reasoning set on. It will sometimes take 30 minutes or more to reply. Oftentimes, after I feed back the reply, the agent will be able to see things a lot more clearly.
I vibe coded up an interactive playground against a WebAssembly build of the new array features: <a href="https://tools.simonwillison.net/redis-array" rel="nofollow">https://tools.simonwillison.net/redis-array</a>