This is very much worth watching. It is a tour de force.<p>Laurie does an amazing job of reimagining Google's job-hedging optimisation technique (for jobs running on hard-disk storage), which runs the same job on two CPUs and simply takes the result of whichever machine finishes first, discarding the slower one's results... It seems expensive in resources, but it works and lets high-priority tasks run optimally.<p>Laurie re-imagines this process, but for RAM! In doing so she has to deal with cores, RAM channels and other relatively undocumented CPU memory-management features.<p>She was even able to work out various undocumented CPU/RAM settings by using her tool to find where timing differences exposed them.<p>She's turned "Tailslayer" into a library now, available on GitHub: <a href="https://github.com/LaurieWired/tailslayer" rel="nofollow">https://github.com/LaurieWired/tailslayer</a><p>You can see her having so much fun, doing cool victory dances as she works out ways around each of the issues she finds.<p>The experimentation, explanation and graphing of results is fantastic. Amazing stuff. Perhaps someone will use this somewhere?<p>As mentioned in the YT comments, the work done here is probably a Master's degree's worth of work, experimentation and documentation.<p>Go Laurie!
This is a 54 minute video. I watched about 3 minutes and it seemed like some potentially interesting info wrapped in useless visuals. I thought about downloading and reading the transcript (that's faster than watching videos), but it seems to me that it's another video that would be much better as a blog post. Could someone summarize in a sentence or two? Yes we know about the refresh interval. What is the bypass?<p>Update: found the bypass via the youtube blurb: <a href="https://github.com/LaurieWired/tailslayer" rel="nofollow">https://github.com/LaurieWired/tailslayer</a><p>"Tailslayer is a C++ library that reduces tail latency in RAM reads caused by DRAM refresh stalls.<p>"It replicates data across multiple, independent DRAM channels with uncorrelated refresh schedules, using (undocumented!) channel scrambling offsets that works on AMD, Intel, and Graviton. Once the request comes in, Tailslayer issues hedged reads across all replicas, allowing the work to be performed on whichever result responds first."
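For those who would rather see the shape of the trick in code than watch the video, here is a minimal sketch of the hedged-read pattern. This is illustrative only: the names and structure are mine, not Tailslayer's actual API, and a real implementation would keep pre-pinned, spinning worker threads rather than spawning threads per read, since thread creation alone costs far more than the nanoseconds being saved.

    #include <atomic>
    #include <cstdint>
    #include <thread>
    #include <vector>

    // Two replicas of the same read-only table. The hard part Tailslayer solves
    // is placing them so they land on DRAM channels with uncorrelated refreshes.
    struct HedgedTable {
        std::vector<uint64_t> replica_a;
        std::vector<uint64_t> replica_b;

        // Race the same read against both replicas; whichever finishes first wins.
        uint64_t hedged_read(std::size_t index) const {
            std::atomic<bool> won{false};
            uint64_t result = 0;

            auto worker = [&](const std::vector<uint64_t>& replica) {
                uint64_t v = replica[index];                  // the actual DRAM access
                bool expected = false;
                if (won.compare_exchange_strong(expected, true))
                    result = v;                               // first finisher publishes
            };

            std::thread t1(worker, std::cref(replica_a));
            std::thread t2(worker, std::cref(replica_b));
            t1.join();
            t2.join();
            return result;
        }
    };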
The video could be shorter, and some of the goofiness might not please people who are most pressed for time, but that is also what makes it fresh and helps it stand out.
Just use the Ask button on YouTube videos to summarize, that's what it's for.
<i>>Just use the Ask button on YouTube videos to summarize,</i><p>For anyone confused because they don't see the "Ask" button between the Share and Bookmark buttons...<p>It looks like you have to be signed in to YouTube to see it. I always browse YouTube in incognito mode, so I never saw the Ask button.<p>Another source of confusion is that some channels may not have it, or it may be missing for some other unexplained reason: <a href="https://old.reddit.com/r/youtube/comments/1qaudqd/youtube_ask_feature_gone/" rel="nofollow">https://old.reddit.com/r/youtube/comments/1qaudqd/youtube_as...</a>
Not complaining about the particular presenter here; this is an interesting video with some decent content, I don't find the presentation style overly irritating, and it documents a lot of work that has obviously gone into experimenting to get the end result (rather than just summarising someone else's work). Such a goofy, elongated style, infuriating if you are looking for quick hard information, is practically required in order to drive wider interest in the channel.<p>But the “ask the LLM” thing is a sign of how off-kilter information passing has become in the current world. A lot of stuff is packaged deliberately inefficiently because that is the way to monetise it, or sometimes just to game the search & recommendation systems so it gets out to potentially interested people at all, and then we are encouraged to use a computationally expensive process to distil the information back out.<p>MS's documentation for large chunks of Azure is that way, but with even less excuse (they aren't a content creator needing to drive interest by being a quirky presenter as well as a potential information source). Instead of telling me to ask Copilot to guess what I need to know, why not write some good documentation that you can reference directly (or that I can search through)? Heck, use Copilot to draft that documentation if you want to (but please have humans review the result for hallucinations, missed parts, and other inaccuracies before publishing).
> using (undocumented!) channel scrambling offsets that works on AMD, Intel, and Graviton<p>Seems odd to me that all three architectures implement this yet all three leave it undocumented. Is it intended as some sort of debug functionality or what?
Unnecessarily negative imo.<p>I like the video because I can't read a blog post in the background while doing other stuff, and I like Gadget Hackwrench narrating semi-obscure CS topics lol
>> It replicates data across multiple, independent DRAM channels with uncorrelated refresh schedules<p>This is the sort of thing that was done before in the NUMA world, but that is easy: just taskset and mbind your way around it to keep a copy in both places.<p>The crazy part of what she's done is figuring out how to determine that the two copies don't get hit by refresh cycles at the same time.<p>Particularly by experimenting on something proprietary like Graviton.
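For context, the old NUMA flavour of the trick really is just a few syscalls. A rough sketch (hypothetical buffer names, needs a multi-node box, <numaif.h> from libnuma and linking with -lnuma):

    #include <numaif.h>      // mbind, MPOL_BIND (link with -lnuma)
    #include <sys/mman.h>
    #include <cstring>
    #include <cstddef>

    // Put one copy of the table on NUMA node 0 and another on node 1, so readers
    // pinned to either socket (via taskset/pthread affinity) always hit a local copy.
    void* alloc_on_node(std::size_t bytes, int node) {
        void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return nullptr;

        unsigned long nodemask = 1UL << node;        // one bit per NUMA node
        // Bind the (page-aligned) range to the chosen node before first touch.
        mbind(p, bytes, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0);
        std::memset(p, 0, bytes);                    // first touch allocates the pages there
        return p;
    }

    // void* copy0 = alloc_on_node(table_bytes, 0);
    // void* copy1 = alloc_on_node(table_bytes, 1);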
<i>"This is the sort of thing which was done before in a world where there was NUMA"</i><p>You sound like NUMA was dead, is this a bit of hyperbole or would really say there is no NUMA anymore.
Honest question, because I am out of touch.
She determines that by having three copies. Or four. Or eight.<p>Tis just probabilities and unlikelihood of hitting a refresh cycle across that many memory channels all at once.
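Rough back-of-envelope numbers, assuming the refresh schedules really are independent (ballpark figures, not taken from the video): if a channel spends roughly 350 ns of every ~7.8 µs refresh interval mid-refresh, that is about 4.5% of the time. With two uncorrelated replicas, both are stalled at once only about 0.045^2 ≈ 0.2% of the time; with four it is around 0.0004%. The odds of every copy being blocked shrink exponentially with the number of channels.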
I hope this approach gets some visibility in the CPU field. It could obviously be improved with a special CPU instruction that simply races two reads and returns whichever one completes first. She's doing an insane amount of work, spinning up multiple threads and so on (and burning lots of performance), all to work around the lack of dedicated support for this in silicon.
> Google's strange job optimisation technique (for jobs running on hard disk storage)<p>Can you give more context on this? Opus couldn't figure out a reference for it
This is quite an old technique. The idea, as I understood it, was that lots of data at Google was stored in triplicate for reliability purposes. Instead of fetching one copy, you fetched all three and took whichever arrived first, then sent UDP packets cancelling the other two. For something like search, where you're issuing hundreds of requests that have to resolve within a few hundred milliseconds, this substantially cut down on tail latency.
Aha, that makes more sense; I thought it was specifically to do with job scheduling from the description. You can do something similar at home as a poor man's CDN by racing requests to regionally replicated S3 buckets. Happy Eyeballs (the IPv4/IPv6 race done in browsers, and I think also for QUIC/HTTP selection) works pretty much the same way.
Tournament parallelism is the technical term IIRC.
<a href="https://cacm.acm.org/research/the-tail-at-scale/" rel="nofollow">https://cacm.acm.org/research/the-tail-at-scale/</a> (hedged / tied requests)
I like the video, but this is hardly groundbreaking. You send out two or more messengers hoping at least one of them will get there on time.
Yeah. These are literally just mainframe techniques from yesteryear.
And Dropbox was just rsync.
The clever part is figuring out what RAM is controlled by which controllers.
Everyone says this, but no one says why it was clever. I find her videos have cool results, but I usually can't have the patience for them because it's recycled old stuff (can be cool, but it's not groundbreaking).<p>There is a ton of info you can pull from SMBIOS, ACPI, MSRs, CPUID, etc. about CPU/RAM topology and connectivity, latencies, and so on.<p>Isn't the info on which controller/RAM relationships exist already in there somewhere, provided by the firmware or platform? I can hardly imagine it is not just plainly in there, given the plethora of info available: there's SRAT/SLIT/HMAT in ACPI, then there are MSRs with info (AMD exposes more than Intel, of course, as always), and then there are registers on the memory controller itself, as well as on socket-to-socket interconnects like UPI links.<p>It's just a lot of reading and finding bits here and there. LLMs are actually really good at pulling all sorts of stuff from various 6-10k page documents if you are too lazy to dig yourself -_-
I have to say that using drawbridges and differently colored rail pieces to explain it was very clever.
Love the format, and super cool to see a benchmark that so clearly shows DRAM refresh stalls, especially avoiding them via reverse engineering the channel layout! Ran it on my 9950X3D machine with dual-channel DDR5 and saw clear spikes from 70ns to 330ns every 15us or so.<p>The hedging technique is a cool demo too, but I’m not sure it’s practical.<p>At a high level it’s a bit contradictory; trying to reduce the tail latency of cold reads by doubling the cache footprint makes every other read even colder.<p>I understand the premise is “data larger than cache” given the clflush, but even then you’re spending 2x the memory bandwidth and cache pressure to shave ~250ns off spikes that only happen once every 15us. There’s just not a realistic scenario where that helps.<p>Especially HFT is significantly more complex than a huge lookup table in DRAM. In the time you spend doing a handful of 70ns DRAM reads, your competitor has done hundreds of reads from cache and a bunch of math. It’s just far better to work with what you can fit in cache. And to shrink what doesn’t as much as possible.
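If anyone wants to reproduce the spike measurement without sitting through the video, the core loop is roughly this (a sketch, x86-only, not the actual benchmark code from the repo; the values come out in TSC cycles, so divide by your TSC frequency to get nanoseconds):

    #include <x86intrin.h>   // _mm_clflush, _mm_mfence, __rdtscp
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<uint64_t> buf(1 << 20);       // working set we force out of cache
        volatile uint64_t sink = 0;
        unsigned aux;

        for (int i = 0; i < 100000; ++i) {
            volatile uint64_t* line = &buf[(i * 8) % buf.size()];  // step one cache line
            _mm_clflush((const void*)line);       // evict, so the next load goes to DRAM
            _mm_mfence();

            uint64_t t0 = __rdtscp(&aux);
            sink += *line;                        // the timed DRAM read
            uint64_t t1 = __rdtscp(&aux);

            std::printf("%llu\n", (unsigned long long)(t1 - t0));  // occasional big values = refresh stalls
        }
        return (int)sink;                         // keep the loads from being optimized out
    }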
Another point about HFT - They're mostly using FPGAs (some use custom silicon) which means that they have much tighter control over how DRAM is accessed and how the memory controller is configured. They could implement this in hardware if they really need to, but it wouldn't be at the OS level.
It could be massively improved with a special CPU instruction for racing DRAM reads. That might make it actually useful for real applications. As it is, the threading model she used here would make it incredibly difficult to use in a real program.
On most RAM, tREF (the refresh interval) can be increased a lot from the default, at least if the modules are kept somewhat cool.
A more accurate but less inspiring title would be:<p>RAM Has a Design Tradeoff from 1966. I made another one on top.<p>The first tradeoff, of roughly 6x fewer transistors per bit in exchange for some extra latency, is immensely beneficial. The second, of trading extra copies of static data for a reduction in some of that extra latency, is beneficial only to some extremely niche applications. Still a very well made and educational video about modern memory architecture.
Previously: <a href="https://news.ycombinator.com/item?id=47680023">https://news.ycombinator.com/item?id=47680023</a>
Halfway through this great video and I have two questions:<p>1) Can we take this library and turn it into a generic driver or something that applies the technique to <i>all</i> software (kernel and userspace) running on the system? i.e. if I want to halve my effective memory in order to completely eliminate the tail-latency problem, without having to rewrite legacy software to implement this invention.<p>2) What model miniature smoke machine is that? I instruct volunteer firefighters and occasionally do scale-model demos to teach ventilation concepts. Some research years back led me to the "Tiny FX" fogger, which works great, but it's expensive and this thing looks even more convenient.
1. Not that I can think of, due to the core split. It really has to be independent cores racing independent loads. Anything clever you could do with kernel modules, page-table land, or dynamically reacting via PMU counters would likely cost microseconds...far larger than the 10s-100s of nanoseconds you gain.<p>What I <i>wished</i> I had during this project is a hypothetical hedged_load ISA instruction: issue two requests to two memory controllers and drop the loser. That would let the strategy work on a single thread! Or, even better, integrating the behavior into the memory controller itself, which would be transparent to all software without recompilation. But you'd have to convince Intel/AMD/someone else :)<p>2. It's called a "smokeninja". Fairly popular in product photography circles, it's quite fun!
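Purely to illustrate what such a hedged_load might look like from software. This is entirely hypothetical: no such instruction or intrinsic exists on any current ISA, and it is not anything from Tailslayer; it is just the single-threaded shape that the core-racing approach is standing in for.

    #include <cstdint>
    #include <cstddef>

    // Hypothetical intrinsic: issue the same logical load against two physical
    // replicas (placed on channels with uncorrelated refresh schedules) and
    // return whichever the memory subsystem satisfies first, dropping the loser.
    uint64_t hedged_load_u64(const uint64_t* replica_a, const uint64_t* replica_b);

    // Single-threaded usage, no core splitting or worker threads required:
    uint64_t lookup(const uint64_t* table_a, const uint64_t* table_b, std::size_t i) {
        return hedged_load_u64(&table_a[i], &table_b[i]);
    }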
<i>Or, even better, integrating the behavior into the memory controller itself, which would be transparent to all software without recompilation.</i><p>Yeah it would be neat to just flip a BIOS switch and put your memory into "hedge" mode. Maybe one day we'll have an open source hardware stack where tinkerers can directly fiddle with ideas like this. In the meantime, thanks for your extensive work proving out the concept and sharing it with the world!
Is there a reason you can think of why AMD, Intel, etc. would not want to do this?<p>Really enjoyed the video and feel that I (not being in the IT industry) better understand CPUs and RAM now.
> halve my effective memory in order to completely eliminate the tail latency problem,<p>Wouldn't you have a tail-latency problem on the write side, though, if you just blindly apply it everywhere? As in, you can't proceed until all the replicas are done writing.
Brio 33884. It has a tiny ultrasonic humidifier in there.
This is a cool idea, very well presented so that everyone can understand such an esoteric concept.<p>However, I wonder whether the core idea itself is useful in practice. With modern memory there are two main aspects it makes worse. The first is cost: it needs double the memory for the same compute, and with memory costs already soaring that is not good. The other main issue is throughput; I haven't put enough thought into that yet, but it feels like it requires more orchestration and increases costs there too.
Doesn't doing this halve the computing power?
I don't know this world at all, is that acceptable?
It halves (or thirds, or quarters, etc.) the available CPU cores, cache space, memory bandwidth, all the critical resources. So I expect it's only applicable for small reads that you are reasonably certain won't be in cache, and that it can only be used extremely sparingly; otherwise it will be nothing but a massive drain.
Should say DRAM, SRAM does not have this.
Indeed. And only for certain DRAM refresh strategies. I mean, it's at least conceivable that a memory-management system responsible for the refresh notices that a given memory location is requested by the cache and then fills the cache during the refresh (which afaiu reads the memory), or -- simpler to implement perhaps -- delays the refresh by a μs, allowing the cache-fill to race ahead.<p>(Seems that in the earlier submission, <a href="https://news.ycombinator.com/item?id=47680023">https://news.ycombinator.com/item?id=47680023</a>, jeffbee hinted that IBM zEnterprise is doing something to that effect.)<p>That said, I'm not convinced this is a big issue in practice. If you really care about performance, you have to avoid cache misses.
I haven't had time to see the whole thing yet, but I'm quite surprised this yielded good results. If this works I would have expected CPU implementations to do some optimization around this by default given the memory latency bottleneck of the last 1.5 decades. What am I missing here?
She could probably have gotten stinking rich off this work alone, but instead she just put it up on GitHub. Kudos to Laurie.
She probably is already stinking rich, or at least rich enough. Beyond a certain point, though, research and knowledge seem more interesting than riches, particularly if you see yourself as a researcher. Otherwise, perhaps, she'd be doing the same to business and be an Ellona or something. Thank God she does not; on the contrary, she is an inspiration to so many people, young and adult alike. Kudos!
Companies are standing in line to double their RAM usage right now, right.
For a HFT firm, RAM cost is a non-issue.
Depends how much total RAM your application needs and how much money RAM access tail latency costs your business.
Just annoyed by the slop from this Twitter shill-poster; just read her tweets to see how much garbage she spews every waking minute.
Am I the only one who feels the comments here don't sound organic at all?
No, I felt the same way; they're exactly like the usual LLM bot comments, where an LLM recaps the OP and ends with a platitude or witty encouragement.<p>But all the accounts are old/legit, so I think you and I have just become paranoid...
I have become oversensitive to this, and my brain is probably generating a lot of false positives. I don't think it's necessarily the case here, but I've wondered if people who use LLMs a lot take over some of its idiosyncrasies and in a way start sounding like one a bit. A strange side effect is that I've come to appreciate text with grammatical errors, videos where people don't enunciate well etc because it's a sign that it's human created content.
I think it's more people being fascinated by this curious architectural detail.
I imagine it's fascinating to people who are not exposed to the intricate details of computer architecture, which I assume is the vast majority here. It's a glimpse into a very odd world (which is your day-to-day work in the HFT field, but they rarely talk about this, and much less in such big words).<p>TBH, I didn't watch the video because the title is too click-baity for me and it's too long. Instead, I looked at the benchmark results on the GitHub page, and sure, it's fascinating how you can significantly(!) thin out the tail of the latency distribution, just by using 10× more CPU cores/RAM/etc. A classic case of a bad trade-off.<p>And nobody talked about what we usually use RAM for: not only to store static data, but also to update it when the need arises. This scheme is completely impractical for those cases. Additionally, if you really need low latency, as others pointed out, you can go for other means of computation, such as FPGAs.<p>So I love this idea, and I'm sure it's a fun topic to talk about at a hacker conference! But I'm really put off by the click-baity title of the video and the hype around it.
You're absolutely right
You're absolutely right to call this out. No humans, no emotion, no real comments - just LLM slop.<p>In all seriousness, agreed. The top comment at the time of this writing reads like a poor summarizing LLM treating everything as the best thing since sliced bread. The end result is interesting, but neither this video nor Google invented the technique of trying multiple things at once, as that comment implies.
No, something is funny here. In the previous submission (<a href="https://news.ycombinator.com/item?id=47680023">https://news.ycombinator.com/item?id=47680023</a>) the only (competent) critical comment (by jeffbee) was downvoted into oblivion/flagged.
I don’t see anything unusual
[dead]
[flagged]
[flagged]
This is an unreasonably good video. Hopefully, it inspires others to see we can still think hard and critically about technical things.
Yeah, wow, the comments weren't kidding. This will probably be the best video I watch all month, if not longer. I would have said what she was trying to do was "impossible" (had I not seen the title and figured … well … she posted the video), and right about when I was thinking that, she got me with:<p>> <i>Hold on a second. That's a really bad excuse. And technology never got anywhere by saying I accept this and it is what it is.</i>