This is an outstanding blog post. Initially, the title did little to captivate me, but the post was so well written that I got nerd-sniped. Who knew this little adapter was so fascinating! I wonder if the manufacturer is buying used Mellanox cards from data center tear-downs. The author claims they can be had for only 20 USD online. That seems too good to be true!<p>Small thing: I just checked Amazon.com: <a href="https://www.amazon.com/s?k=thunderbolt+25G&crid=2RHL4ZJL96Z9U&sprefix=thunderbolt+25g%2Caps%2C593&ref=nb_sb_noss_2" rel="nofollow">https://www.amazon.com/s?k=thunderbolt+25G&crid=2RHL4ZJL96Z...</a><p>I cannot find anything for less than 285 USD. The blog post gave a price of 174 USD. I have no reason to disbelieve the author, but it's a bummer to see the current price is 110 USD higher!
Thank you!<p>I think, tragically, the blog post has caused this price increase.<p>The offers on Amazon are most likely all drop shippers trying to gauge a price that works for them.<p>You might have better luck ordering directly from China for a fraction of the price: <a href="https://detail.1688.com/offer/836680468489.html" rel="nofollow">https://detail.1688.com/offer/836680468489.html</a>
> The author claims they can be had for only 20 USD online. That seems too good to be true!<p>In my experience, the cheap eBay MLX cards are DellEMC/HPE/etc OEM cards. However, I also encountered zero problems cross-flashing those cards back to generic Mellanox firmware. I've been running several of those cross-flashed CX-4 Lx cards for going on six or seven years now and they've been totally bulletproof.
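For anyone who wants to try the cross-flash, a rough sketch with mstflint (from the MFT/mstflint package). The PCI address and firmware image name are placeholders, and forcing a PSID change can brick the card if you burn the wrong image, so compare the query output against the firmware release notes first:

    # Show the card's current firmware version and PSID (board ID)
    sudo mstflint -d 03:00.0 query

    # Burn a stock Mellanox image; --allow_psid_change is what lets you go
    # from an OEM (Dell/HPE/...) PSID to the generic Mellanox one
    sudo mstflint -d 03:00.0 -i fw-ConnectX4Lx.bin --allow_psid_change burn

A reboot (or mstfwreset, where supported) is needed before the new firmware takes effect.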
I saw the blog post last week and immediately bought the last one on that Amazon listing for the original price... hopefully they restock soon!<p>I'm going to try a couple other fan assisted cooling options, as I'd like to keep the setup reasonably compact.<p>I just ran fiber to my desk and I have a more expensive QNAP unit that does 10G SFP+, but this will let me max out the connection to my NAS.
I believe the author is talking about the OCP (2.0) network card itself, which these adapters use internally. The OCP NICs are quite cheap compared to PCIe cards; here's 100 GbE for $100!
<a href="https://ebay.us/m/HMQAph" rel="nofollow">https://ebay.us/m/HMQAph</a>
This 100GbE card is an OCP 2.0 type 2 adapter, which will _probably_ not work with the PX PCB, since that NIC has two of these mezzanine connectors and the PX only one.<p>What also may not work are Dell rNDC cards. They look like they have OCP 2.0 type 1 connectors, but may not quite fit (please correct me if I'm wrong). They do however have a nice cooling solution, which could be retrofitted to one of the OCP 2.0 cards.<p>I've also ordered a Chelsio T6225-OCP card out of curiosity. It should fit in the PX adapter but requires a 3rd-party driver on macOS (which then supports jumbo frames, etc.)<p>What also fits physically is a Broadcom BCM957304M3040C, but there are no drivers for it on macOS, and I couldn't get the firmware updated on Linux either.
That’s a good point to note! I think the stacking height would matter, but in theory the single connector still carries a PCIe x8 link and should train without the upper x8 lanes connected.<p>Spec for reference; I’m not 100% sure.
<a href="https://docs.nvidia.com/nvidia-connectx-5-ethernet-adapter-cards-for-ocp-spec-2-0-user-manual.pdf" rel="nofollow">https://docs.nvidia.com/nvidia-connectx-5-ethernet-adapter-c...</a>
You can get a normal 100Gb PCIe card like an MCX416A for less than $100 if you're willing to flash it.
Not much to add here, but wanted to agree: this post was actually hacker news (tm)
$285 is still an AMAZING price for 25GbE over TB4. I paid $200 for the Sonnet TB4 10GbE adapter.
Ha! Been running these for years on both linux and windows (on lenovo x1 laptops). Using cheap chinese thunderbolt-to-nvme adapters + nvme-to-pcie boards + mellanox cx4 cards (recently got one cx5 and a solarflare x2).<p>Pic of a previous cx3 (10 gig on tb3) setup: <a href="https://habrastorage.org/r/w780/getpro/habr/upload_files/d3c/939/f26/d3c939f26c369b668155a6cab5c34a1e.jpg" rel="nofollow">https://habrastorage.org/r/w780/getpro/habr/upload_files/d3c...</a><p>10gig can saturate the link at full speed; 25G in my experience rarely even reaches the same ~20G the author observed.
Note that you can do point-to-point network links directly with thunderbolt (and usb4).<p><a href="https://support.apple.com/guide/mac-help/ip-thunderbolt-connect-mac-computers-mchld53dd2f5/mac" rel="nofollow">https://support.apple.com/guide/mac-help/ip-thunderbolt-conn...</a> etc
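On Linux, the point-to-point setup is just another network interface once the thunderbolt-net module is loaded; a minimal sketch (the interface name and addresses are assumptions and may differ on your system):

    sudo modprobe thunderbolt-net                   # usually autoloaded when the cable is plugged in
    ip link show                                    # look for a thunderbolt0 (or similarly named) interface
    sudo ip addr add 10.0.0.1/24 dev thunderbolt0   # use 10.0.0.2/24 on the other machine
    sudo ip link set thunderbolt0 up
    iperf3 -c 10.0.0.2                              # throughput check (run 'iperf3 -s' on the peer first)

On macOS the same thing shows up as a "Thunderbolt Bridge" interface that you configure in Network settings, as described in the Apple link above.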
I've had a lot of problems with even 10GbE via Thunderbolt 3/4. Bandwidth-wise it works fine, but latency and jitter are issues. This means that stuff like high-speed cameras that need to be synchronized over Ethernet using Precision Time Protocol (PTP) tend to simply fail with these devices.
I’d heard similar complaints re: TB networking latency & jitter. Did some investigations and tuning on a pair of machines with USB4 ports connected via short TB5-rated cables. Eventually got the thunderbolt links to consistently beat the ether ones on both latency <i>and</i> jitter. And not just switched Ethernet either - even a direct Ethernet P2P link lost out to TB, though the difference there was small.
I'm surprised you are only getting 20 Gbit/s. I did not expect PCIe to be the limiting factor here. I've got a 100 Gbit CX4 card currently in a PCIe 3 x4 slot (for reasons, don't judge) and it easily maxes that out. I would have expected the 25G CX4 cards to at least be able to get everything out of it. RDMA is required to achieve that in a useful way though.<p>Edit: forgot it isn't "true" PCIe but tunneled.
Thunderbolt is basically external PCIe, so this is not so surprising. High speed NICs do consume a relatively large amount of power. I have a feeling I've seen that logo on the board before.
I don't know how to measure the direct power impact on a MacBook Pro (since it's got a battery), but the typical power consumption of these cards is 9 W, not much more than Aquantia 10 GBit cards.<p>Also, if you remember where you saw that logo, please let me know!
JFYI, for measuring power draw, you might be able to use `macmon`[0] to see the total system power consumption. The values reported by the internal current sensor seem to be quite accurate.<p>[0] <a href="https://github.com/vladkens/macmon" rel="nofollow">https://github.com/vladkens/macmon</a>
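A minimal way to use it, assuming a Homebrew install (the formula name and the pipe subcommand are from memory, so check the README / `macmon --help`):

    brew install macmon
    macmon          # interactive TUI: CPU/GPU utilization plus total system power, no sudo needed
    macmon pipe     # emit machine-readable samples, handy for logging power before/after plugging in the NIC

Take a baseline reading with the adapter unplugged, then subtract it from the reading under load to estimate the adapter's draw.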
Speaking of hardware, the Realtek RTL8127 (10Gbps) hit the market late last year and is said to consume only about 2–3W. It apparently runs very cool compared to older chips. (Though it would need to be bonded to reach 25Gbps ;-)
I got me one of these adapters (RTL8127AF TXA403, with SFP+ cage); I haven't properly benchmarked it yet.<p>There's no driver support on macOS, and for Linux you'd need a bleeding-edge kernel. Just physically connecting it (along with a connected SFP28 transceiver) to my Mac's Thunderbolt port via an external PCIe-to-TB adapter, macmon shows a power draw of around 4.3 W, so it's not dramatically less for half the bandwidth, but the card doesn't get hot at all.
Very nice tip, thank you!<p>I measure around +11W idle. While running a speed test, I read ca. +15W.
Plus 1–2.5 W per active cable. You need the heatsinks, as the CX4 cards expect active airflow, and so do active transceivers.<p>I have a 10 Gbit dual-port card in a Lenovo mini PC. There is no normal way to get any heat out of there, so I put a small 12 V radial fan in there as support. It works great at 5 V: silent and cool. It is a fan though, so it might not suit your purpose.
It isn't. There is no sense in which "Thunderbolt is basically external PCIe". Thunderbolt provides a means of encapsulating PCIe over its native protocols, which puts PCIe on the same footing as other encapsulated things like DisplayPort and, for TB4 and later, USB.
The PCI-E logo or the “octopus in a chip” logo? I’m more interested in the latter.
Neat, but the thermal design is absolutely terrible. Sticking that heatsink inside the aluminum case without any air circulation is awful.
Yeah, it's because the network card adapter's heatsink is sandwiched between two PCBs. Not great, not terrible, works for me.<p>The placement is mostly determined by the design of the OCP 2.0 connector. OCP 3.0 has a connector at the short edge of the card, which allows exposing/extending the heat sink directly to the outer case.<p>If somebody has the talent, designing a Thunderbolt 5 adapter for OCP 3.0 cards could be a worthwhile project.
A flex PCB connecting to the OCP2 connector would allow putting the converter board <i>behind</i> the NIC board, letting the NIC board be exposed to the aluminum case so the case itself acts as a heatsink (it would need a split case so the NIC board can be screwed to one side, pressing the main chip against it via a thermal pad).<p>As a stop-gap, I'd see if there is any way to get airflow <i>into</i> the case; I'd expect even a tiny fan would do much more than those two large heatsinks stuck onto the case (since the case itself has no thermal connection to the chip heatsink).
My goal was to get a fanless setup (for a quiet office).<p>If that's not a requirement just get the Raiden Digit Light One, which does have a fan (and otherwise the same network card).<p>If I could design an adapter PCB myself, I would go straight to OCP 3.0, which allows for a much simpler construction, and TB5 speeds.<p>Alternatively, there are DELL CX422A rNDC cards (R887V) that appear to have an OCP 2.0 connector but a better heatsink design.
I'd be more worried about cooling the transceivers properly.
My optical transceiver gets to around 52 °C (measured via IR camera), well below its design limit, so that's not bad.<p>If truly concerned, one could use SFP28 to SFP28 cage adapters to have the heat outside the case, and slap on some extra heatsinks there.
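On Linux you can also read the optic's own temperature sensor instead of pointing an IR camera at it; most SFP28 modules expose digital diagnostics. A sketch (the interface name is a placeholder):

    # Dump the transceiver's diagnostics; DOM-capable optics report module
    # temperature, supply voltage and Tx/Rx power
    sudo ethtool -m enp1s0 | grep -i temperature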
Any idea why ethernet stagnated in terms of speed? There was a time it was so much faster compared to usb. Now even wifi seems to be faster.<p>Sure one can buy nice ethernet cards and cables, but the reality is that if you grab a random laptop/desktop from best buy and a cable, you are looking at best at a 2.5Gb/s speed.
The new low-power Realtek chipsets will definitely push 10 GbE forward because the chipset won't be much more expensive to integrate and run than the 2.5Gbps packages.<p>It all comes down to performance per Watt, the availability of cheap switching gear, and the actual utility in an office / home environment.<p>For 10 Gbps, cabling can be an issue. Existing "RJ45"-style Cat 6 cables could still work, but maybe not all of them.<p>Higher speeds will most likely demand a switch to fiber (for anything longer than a few meters) or Twinax DAC (for inter-device connects). Since Wifi already provides higher speeds, one may be inclined to upgrade just for that (because at some point, Wireless becomes Wired, too).<p>That change comes with the complexity of running new cabling, fiber splicing, worrying about different connectors (SFP+, SFP28, SFP56, QSFP28, ...), incompatible transceiver certifications, vendor lock-in, etc. Not a problem in the datacenter, but try to explain this to a layman.<p>Lastly, without a faster pipe to the Internet, what can you do other than NAS and AI? The computers will still get faster chips but most folks won't be able to make use of the bandwidth because they're still stuck on 1Gbps Internet or less.<p>But that will change. Swiss Init7 has shown that 25 Gbps Internet at home is not only feasible but also affordable, and China seems to be adding lots of 10G, and fiber in general.<p>Fun times ahead.
But <i>why</i> is the gear progressing so very slowly? Why a 25-year gap between reasonable-power 1Gbps and 10Gbps?<p>And while not every Cat 6 run will do 10, it would still be worth a shot; and devices aren't even using 5, instead they're using even less.<p>Not to mention that Cat 8 will happily do 40Gbps, as long as you can get from your switch to your end devices within 30 meters.
> Any idea why ethernet stagnated in terms of speed? There was a time it was so much faster compared to usb. Now even wifi seems to be faster.<p>wifi is not faster.<p>However, ethernet is not as critical as it used to be, even at the office. People like the convenience of having laptops they can move around. Unless you are working from home, having a dedicated office space is now seen as a waste of space. If the speed of the wifi is good enough when you are in a meeting room or in your kitchen, there is no reason to plug in your laptop when you move to another place, especially if most connections are to the internet and not the local network. In the workplace, most NAS have been replaced by onedrive / gdrive; at home, NAS use has always been limited to a niche population: nerds/techies, photographers, music or video producers...
PCI-E lanes for consumers. Gigabit would saturate the old PCI bus, but once you're on PCI-E you only need to give it one lane, usually off the chipset.<p>Servers had a reason to pay for the 10G, 25G and 40G cards, which used 4 lanes.<p>There are 10 Gigabit chips that can run off of one PCI-E 4.0 lane now, and the 2.5G and 5G speeds are supported (802.3bz).
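The per-lane arithmetic backs that up (assuming 128b/130b line encoding and ignoring higher-level protocol overhead):

    echo "PCIe 3.0 x1: $(echo 'scale=2; 8  * 128 / 130' | bc) Gbit/s"   # ~7.87  -> short of 10GbE line rate
    echo "PCIe 4.0 x1: $(echo 'scale=2; 16 * 128 / 130' | bc) Gbit/s"   # ~15.75 -> comfortably enough for 10GbE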
We have 400GbE, which is certainly faster than USB... but:<p>On consumer devices, I think part of the issue is that we’re still wedded to four-pair twisted copper as the physical medium. That worked well for Gigabit Ethernet, but once you push to 5 or 10 Gb/s it becomes inherently expensive. Twisted pair is simply a poor medium at those data rates, so you end up needing a large amount of complex silicon to compensate for attenuation, crosstalk, and noise.<p>That's doable, but the double whammy is that most people use the network for 'internet' and 1G is simply more than enough; 10G therefore becomes quite niche, so there's no enormous volume to overcome the inherent issues at low cost.
Wireless happened, I'd think. People started using wifi and cellular data for everything, so applications had to adapt to this lowest common denominator, and consumer broadband demand for faster-than-wifi speeds isn't there. Plus operators put all their money into cellular infra leaving no money to update broadband infra.
10Gbase-T requires a lot of transistors and power (maybe over 10x more than 1G) so it just wasn't worth the cost.
Ethernet did not stagnate. Ethernet on UTP did stagnate due to reaching the limits of the technology, but Ethernet continues to advance over fiber.<p>For 10 Gbps I find it simpler and cheaper to use fiber or DACs, but motherboards don't provide SFP+, only RJ45 ports. Above 10 Gbps, copper is a no-go. SFP28 and above would be nice to have on motherboards, but that's a dream with almost zero chance of happening. For most people RJ45 + WiFi 7 is good enough; computer manufacturers will not add SFP+ or SFP28 for a small minority of people.
> Any idea why ethernet stagnated in terms of speed? There was a time it was so much faster compared to usb. Now even wifi seems to be faster.<p>Practically speaking, a lot of the transfer speed advertised by wifi is marketing hogwash barely backed by reality, especially in congested environments.<p>> Sure one can buy nice ethernet cards and cables, but the reality is that if you grab a random laptop/desktop from best buy and a cable, you are looking at best at a 2.5Gb/s speed.<p>For both laptops and desktops: PCI lanes. Intel doesn't provide many lanes, so manufacturers don't want to waste valuable lanes permanently for capabilities most people don't ever need.<p>For laptops in particular, power draw. The faster you push copper, the more power you need. And laptops have even fewer PCIe lanes available to spare.<p>For desktops, it's a question of market demand. Again, most applications don't need ultra-high transfer rates, and most household connectivity is DSL or (G)PON, so 1 GBit/s is enough to max out the uplink. And those few users that do need higher transfer rates can always install a PCIe card, especially as there is a multitude of options for high-bandwidth connectivity.
> Practically speaking, a lot of the transfer speed advertised by wifi is marketing hogwash barely backed by reality, especially in congested environments.<p>Yes, but a hogwash of several gigabits sometimes does give you real-world performance of more than a gigabit.<p>> Intel doesn't provide many lanes, so manufacturers don't want to waste valuable lanes permanently for capabilities most people don't ever need.<p>It's been a good few years since a single lane could do 10Gbps, and a few more since a single lane could do 5Gbps.<p>Also, don't ethernet ports tend to be fed by the chipset? So they don't really take lanes.
I used to have an SFP28 Mellanox card in my home server, but went back to a simple 2.5G Ethernet port for the LAN side. The Mellanox card ran hot and needed an extra fan near it to dissipate the heat. It was cool, but there was no real benefit other than occasionally when transferring some large files.<p>Until motherboards include SFP ports it's probably not worth the effort at all in a home setting; external adaptors like the one presented here are unreliable and add several ms of latency.
> <i>Until motherboards include SFP ports</i> […]<p>A micro-ATX motherboard with on-board 2xSFP28 (Intel E810):<p>* <a href="https://download-2.msi.com/archive/mnu_exe/server/D3052-datasheet.pdf" rel="nofollow">https://download-2.msi.com/archive/mnu_exe/server/D3052-data...</a><p>* <a href="https://www.techradar.com/pro/this-amd-motherboard-has-a-unique-exciting-feature-that-will-make-power-users-jump-out-of-their-chairs-msis-matx-wonder-has-two-25gbe-ethernet-spf28-ports-perfect-for-a-cracking-workstation-rig" rel="nofollow">https://www.techradar.com/pro/this-amd-motherboard-has-a-uni...</a>
For reference, I'm seeing pings from my Mac to my Linux boxes (Lenovo Tiny5) at well under 1ms, not much worse than between them directly. But yeah, your mileage may vary.
Yep, these cards need a fan (or some kind of directed airflow).<p>Where did you get the "several ms of latency" figure from? I have not measured an external card, but maybe I should... because the cards themselves have latency in the range of microseconds, not milliseconds.
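If anyone wants to check their own setup, a plain ping run already separates a well-behaved adapter from a bad one (the peer address is a placeholder):

    # 100 pings; the min/avg/max/mdev (Linux) or min/avg/max/stddev (macOS)
    # summary line shows both the added latency and the jitter
    ping -c 100 10.0.0.2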
I haven't tested this particular Thunderbolt SFP adapter, but my experience with a TP-Link 1Gbps USB adapter is that it adds about 4ms of latency. Far from being unusable and similar to WiFi perhaps, but worse than PCIe cards that should be <1ms.
it's all just driver/options crap if I were to take a guess.<p>there are a lot of usb options that matter, and tp-link ships lots of realtek chipsets that require very special driver incantations that a lot of the linux drivers simply don't replicate.<p>two+ layers of bad options will surely add 4ms quick.
I think there's definitely something with that specific setup. For me, pinging between two cheap Realtek 2.5 GbE USB dongles (one is on a Mac one is on a 7 year old Intel Atom Synology) is still sub-ms (hovering around 0.7-0.8ms) so it's not an inherent problem to USB dongles.<p>USB itself can have a lot of issues anywhere in the chain. I have a Thunderbolt dock where half of the USB ports adds latency and reduced throughput just because the USB chipset that powers them is terrible (it has two separate USB chipsets from different brands). Switch to a different port on the exact same dock and it's fine.
Does this manufacturer's pattern of repackaging data center components (e.g. Mellanox) suggest any up-and-coming product opportunities?
That is really cool to read. And here I am, still running my home network on a measly 1Gbit Ethernet. I considered upgrading, but the equipment power consumption even when idle makes it an expensive proposition to consider just for fun.
nitpicking but why would someone type `sudo su` vs `sudo -i`
Muscle memory for folks who have been doing it since before -i was an option. I still instinctively type `sudo su -` because it worked consistently on older deployments. When you have to operate a fleet of varying ages and distributions, you tend to quickly learn [if only out of frustration] what works everywhere vs only on the newer stuff.<p>`sudo su - <user>` also seems easier for me to type than `sudo -i -u <user>`
I've mostly only ever seen `sudo su` in tutorials, so someone who's only familiar with the command through those is one possible reason why.
I still have issues under Linux (kernel 6.14) with Thunderbolt 4 docking stations. They simply don't get recognised.<p>But this is a cool solution.
Thanks! Have you tried the boltctl/rescan setup I mentioned in the post? It should get you going, as long as your Thunderbolt/USB4 setup is correct.<p>If you're using an adapter card to add Thunderbolt functionality, then your mainboard needs to support that, and the card must be connected to a PCIe bus that's wired to the Intel PCH, not to the CPU.
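Roughly, the boltctl/rescan steps look like this on the Linux side (the device UUID is a placeholder, and exact flags may differ from what's in the post; security levels and policies depend on your firmware):

    boltctl list                                      # list Thunderbolt devices and their authorization status
    sudo boltctl enroll --policy auto <device-uuid>   # authorize the adapter persistently
    echo 1 | sudo tee /sys/bus/pci/rescan             # re-enumerate the tunneled PCIe devices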
Yes, rescan, re-enroll too. But it still shows as disconnected.
I don't know if the firmware is completely incompatible, but it is weird that it works under Windows and doesn't under Linux.
Disconnected as in "network"? What PCIe card do you use? Can you update the firmware (maybe from Windows)?<p>Also check the BIOS settings (try setting TB security to "No Security" or "User Authorization")<p>Some OEM Mellanox cards can be cross-flashed to NVIDIA's stock firmware, maybe that's also relevant.
Is this satire?<p>> All other 25 GbE adapter solutions I’ve found so far ... have a spinning fan. ... the biggest downside of the PX adapter is that it gets really hot, like not touchable hot. Sometimes, either the network connection silently disappeared or (sadly) my Mac crashed with a kernel panic in the network driver. ... Other than that, the PX seems to do the job
Please forgive my ignorance, but is there currently any way to write data to disk at that speed? I see 2026 PCIe 5.0 NVMe drives advertising a theoretical 14 GB/s, but I'm not sure how feasible even that is.
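For scale, 25 GbE works out to roughly 3 GB/s of payload, so even a single good PCIe 4.0 NVMe drive can absorb it sequentially; the advertised 14 GB/s figures are gigabytes, not gigabits. A quick back-of-the-envelope (ignoring protocol overhead):

    echo "scale=2; 25 / 8" | bc   # 25 Gbit/s = ~3.12 GB/s, vs ~5-7 GB/s sequential write on PCIe 4.0 NVMe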
> the biggest downside of the PX adapter is that it gets really hot, like not touchable hot. Sometimes, either the network connection silently disappeared or (sadly) my Mac crashed with a kernel panic in the network driver. Apple has assured me that this was not a security issue. Other than that, the PX seems to do the job.<p>Made me chuckle.
Ha. I got one of the 10G Thunderbolt adapters several years ago, and eventually started having problems with Zoom calls around noon, with dropped connections and stuttering. Zoom restarts usually fixed the problem.<p>After it happened 3-4 times, I started debugging. It turned out that we usually get at least a bit of sunlight around noon, as it burns away the morning clouds. And my Thunderbolt box was in direct sunlight and eventually started overheating.<p>And a Zoom restart made it fall back onto the Wifi connection instead of wired.<p>I fixed that by adding a small USB-powered fan to the Thunderbolt box as a temporary workaround. I just realized that it's been like this for the last 3 years: <a href="https://pics.ealex.net/s/overheat" rel="nofollow">https://pics.ealex.net/s/overheat</a>
Now I just have to contrive the circumstances where this is useful to me. :)
I don't know about the Ethernet part but it bothers me that even wifi has become faster than the wired USB port on our phones.<p>All I want to do is copy over all the photos and videos from my phone to my computer but I have to baby sit the process and think whether I want to skip or retry a failed copy. And it is so slow. USB 2.0 slow. I guess everybody has given up on the idea of saving their photos and videos over USB?
> USB 2.0 slow<p>Many phones indeed only support USB 2.0. For example the base iPhone 17. The Pro does support USB 3.2, however.<p>> I guess everybody has given up on the idea of saving their photos and videos over USB?<p>Correct.
Wifi is fast but the latency is terrible and the reliability is even worse. It can go up and down like a yo-yo. USB is far more predictable even if it is a bit slower.
I have a cluster of 4 RPi Zero Ws and network reliability is not great. Since it is for the chaos, it’s fine, but it’s very common to have a node offline at any given time.<p>Even worse, the control plane is exposed, but for something that runs 3 Hercules mainframe emulators and two Altairs with MP/M, it’s fine.
Why don't you get a phone with 3.0+ USB?<p>My last two phones in the last 4 years had at least USB 3.1
I feel like this is an artifact from the late 2010s, when the talk was of removing the port completely from phones; that was being touted alongside swapping speakers for haptic screen audio as a way to make them completely waterproof.<p>As wireless charging never quite reached the level hoped – see AirPower – and Google/Apple seemingly bought and never did anything with a bunch of haptic audio startups, I figure that idea died... but they never cared enough to make sure the USB port remained top end.
I'd usually be against losing ports and user serviceable stuff but if the device could actually be properly sealed up (ie no speakers, mics, charge ports, etc) that would be legitimately useful.
If the photos on the phone are visible as files on a mounted filesystem, you can use rsync to copy them. If the connection drops but recovers by itself, you can wrap rsync in a retry loop until a run finishes with nothing left to copy (a sketch below).<p>I’m using Dropbox for syncing photos from phone to Linux laptop, and mounting the SD card locally for cameras, so this is a guess.
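A rough sketch of that loop (the paths are placeholders; --partial keeps interrupted files so a retry resumes rather than restarts):

    # Retry until an rsync run exits cleanly (exit code 0)
    while ! rsync -av --partial --progress /mnt/phone/DCIM/ ~/Pictures/phone-backup/; do
        echo "transfer interrupted, retrying in 10s"; sleep 10
    done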
<i>> given up on the idea of saving their photos and videos over USB?</i><p>Until USB has a monthly service business to compete with cloud storage revenue.
> but I have to baby sit the process and think whether I want to skip or retry a failed copy<p>Do you import originals or do you have the "most compatible" setting turned on?<p>I always assumed apple simply hated people that use windows/linux desktops so the occasional broken file was caused by the driver being sort-of working and if people complain, well, they can fuck off and pay for icloud or a mac. After upgrading to 15 pro which has 10 gbps usb-c it still took forever to import photos and the occasional broken photos kept happening, and after some research it turns out that the speed was limited by the phone converting the .heic originals into .jpg when transferring to a desktop. Not only does it limit the speed, it also degrades the quality of the photos and deletes a bunch of metadata.<p>After changing the setting to export original files the transfer is much faster and I haven’t had a single broken file / video. The files are also higher quality and lower filesize, although .heic is fairly computationally-demanding.<p>Idk about Android but I suspect it might have a similar behavior
Wouldn’t this be useful for clustering Macs over TB5? Wasn’t the maximum bandwidth over USB cables 5Gbps? With a switch, you could cluster more than just 4 Mac Studios and have a couple of terabytes for very large models to work with.
I was hoping somebody would suggest that (and eventually try it out).<p>With TB5, and deep pockets, you could also benchmark it against a setup with dedicated TB5 enclosures (e.g., Mercury Helios 5S).<p>TB5 has PCIe 4.0 x4 instead of PCIe 3.0 x4 -- that should give you 50 GbE half-duplex instead of 25 GbE. You would need a different network card though (ConnectX-5, for example).<p>Pragmatically though, you could also aggregate (bond) multiple 25 GbE network card ports (with Mac Studio, you have up to 6 Thunderbolt buses, so more than enough to saturate a 100GbE connection).
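For what it's worth, the bonding part can be sketched on macOS with the built-in link aggregation support (the interface names here are placeholders, the switch side needs a matching LACP configuration, and a single TCP stream still won't exceed one member link):

    sudo ifconfig bond0 create        # create an LACP bond interface
    sudo ifconfig bond0 bonddev en7   # add the first 25 GbE port
    sudo ifconfig bond0 bonddev en8   # add the second 25 GbE port
    ifconfig bond0                    # shows the member links and their LACP state

The same thing can be set up from the Network settings GUI via "Manage Virtual Interfaces".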
I recently did a complete disk backup/clone which only took 15 minutes instead of hours. Maxed the SSD which was being backed up at about 2.5GB/s.
rsync...grsync...a solution for broken partial batch transfers since forever
Remote Time Machine backups are snappier than ever before :)
Pretty much anywhere you have networked storage? Gigabit is about on par with pre-SATA ATA-133.
Would be useful if I had to debug my internet link and I only had a laptop.
Porn?
> reduces temperatures by at least 15 Kelvin, bringing the ambient enclosure temperature below 40 °C,<p>I had to do a double-take when it mentioned Kelvin, since that is physically impossible.
Isn't "reduces temperatures by 15 Kelvin" the same as "reduces temperatures by 15 Celsius"?
'Reduces temperatures by at least 15 Kelvin' is the same as 'reduces temperatures by at least 15 Celsius'.<p>It 'reduces it by' ... not reduces it TO
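The two scales differ only by a fixed offset, so a temperature difference is the same number in both:

    T[K] = T[°C] + 273.15   =>   ΔT[K] = ΔT[°C]   (a 15 K drop is a 15 °C drop)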