At some point, SBCs that require a custom Linux image will become unacceptable, right?<p>Right?
Using vendor kernels is standard in embedded development. Upstreaming takes a long time, so even among well-supported boards you either have to wait many years for everything to get upstreamed or find a board where the upstream kernel already supports enough peripherals that you're not missing anything you need.<p>I think it's a good thing that people are realizing these SBCs are better used as development tools by people who understand embedded dev than as general-purpose PCs. For years now, comments under every Raspberry Pi or other SBC thread have pointed out that a mini PC is a better idea for general-purpose compute unless you really need something an SBC offers, like specific interfaces or low power.
There are some projects to port UEFI to boards like Orange Pi and Raspberry Pi. You can install a normal OS once you have flashed that.<p><a href="https://github.com/tianocore/edk2-platforms/tree/master/Platform/RaspberryPi/RPi4" rel="nofollow">https://github.com/tianocore/edk2-platforms/tree/master/Plat...</a><p><a href="https://github.com/edk2-porting/edk2-rk3588" rel="nofollow">https://github.com/edk2-porting/edk2-rk3588</a>
“Custom”? No.<p>Proprietary and closed? One can hope.
I love that OrangePi is making good hardware, but after my experience with the OrangePi 5 Max, I won’t be buying from them again. The device is largely useless due to a lack of software support. The same thing happened with the MangoPi MQ-Pro. I’ll just stick with RPi; I may not get as much hardware for the money, but the software support is fantastic.
I was planning to build a NAS from an OPi 5 to minimise power consumption, but ended up going with a Zen 3 Ryzen CPU and have zero regrets. The savings are minuscule and would not justify the costs.
Yeah, that's the problem with ARM devices. Better to just buy an N100.
Something in me wants to buy every SBC and/or microcontroller that is advertised to me.
Even though they could all be replaced by a decent mini PC with beefy memory, running lots of VMs.
Yeah I have this problem (?) too. They are just so neat. I also really like tiny laptops and recreations of classic computers.
One or two USB-C 3.2 Gen2 ports are all that's required - can then plug in a hub or dock. eg: <a href="https://us.ugreen.com/collections/usb-hub?sort_by=price-descending" rel="nofollow">https://us.ugreen.com/collections/usb-hub?sort_by=price-desc...</a><p>Can also plug in a power bank.
<a href="https://us.ugreen.com/collections/power-bank?sort_by=price-descending" rel="nofollow">https://us.ugreen.com/collections/power-bank?sort_by=price-d...</a><p>The advantage is that if the machine breaks or is upgraded, the dock and power bank can be retained, which also spreads out the cost.<p>Ideally, the dock and power bank can be kept at a distance to reduce heat and avoid needing a fan in the housing.<p>Better hardware should end up leading to better software - software being the main problem right now.<p>This 10-in-1 dock even has an SSD enclosure for $80 <a href="https://us.ugreen.com/products/ugreen-10-in-1-usb-c-hub-ssd" rel="nofollow">https://us.ugreen.com/products/ugreen-10-in-1-usb-c-hub-ssd</a> (no affiliation) (no drivers required)<p>I'd have another dock/power/screen combo for traveling and portable use.
Disappointing on the NPU. I've found it's an area where industry-wide improvement is necessary. People talk tokens/sec, model sizes, which formats are supported... but I rarely see an objective accuracy comparison. I repeatedly see that AI models are resilient to errors and reduced precision, which is what allows 1-bit quantization and whatnot.<p>But at a certain point I guess it just breaks? What's needed is an objective "I gave these tokens, I got out those tokens" comparison. But I guess that would need an objective gold-standard ground truth that's maybe hard to come by.
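As a sketch of what that "I gave these tokens, I got out those tokens" check could look like: a hypothetical position-by-position match rate between a full-precision gold-standard run and a quantized model's output. All names and the metric itself are illustrative, not from any real benchmark.

```python
# Hypothetical sketch of an objective accuracy check for quantized models:
# given the same prompt, compare the token sequence from a full-precision
# "gold standard" run against the quantized model's output, position by
# position. The function and data here are illustrative assumptions.
def token_match_rate(reference_tokens, candidate_tokens):
    """Fraction of positions where the candidate emits the reference token."""
    if not reference_tokens and not candidate_tokens:
        return 1.0
    longest = max(len(reference_tokens), len(candidate_tokens))
    matches = sum(r == c for r, c in zip(reference_tokens, candidate_tokens))
    return matches / longest

gold = ["The", "cat", "sat", "on", "the", "mat"]
quantized = ["The", "cat", "sat", "on", "a", "mat"]
print(token_match_rate(gold, quantized))  # 5 of 6 positions agree
```

A real comparison would also have to handle divergence (one wrong token shifting everything after it), which is part of why such gold standards are hard to come by.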
The even more confounding factor is that every vendor of these Cix P1 systems provides its own specific builds: Radxa, Orange Pi, Minisforum, now MetaComputing... it's painful to sort out, even as someone who knows where to look.<p>I couldn't imagine recommending any of these boards to people who aren't already SBC tinkerers.
Just try to find benchmark top_k, temp, etc. parameters for llama.cpp. There's no consistent framing of any of these things. Temp should be effectively 0 so it's at least deterministic in its random probabilities.
>Temp should be effectively 0 so it's at least deterministic in its random probabilities.<p>Is this a thing? I read an article about how, due to some implementation detail of GPUs, you don't actually get deterministic outputs even with temp 0.<p>But I don't understand that, and haven't experimented with it myself.
By default, CUDA isn't deterministic because of thread scheduling.<p>The main difference comes from the rounding order in reductions: floating-point addition isn't associative, so summing in a different order gives slightly different results.<p>It only makes a small difference - unless you have an unstable floating-point algorithm, but if you have an unstable floating-point algorithm on a GPU at low precision, you were doomed from the start.
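The reduction-order effect doesn't even need a GPU to demonstrate; a minimal Python illustration of floating-point addition being non-associative:

```python
# Floating-point addition is not associative: summing the same values in a
# different order can give a different result. This is the mechanism behind
# nondeterministic GPU reductions, where thread scheduling varies the order.
vals = [1e16, 1.0, -1e16, 1.0]

# Left to right: the first 1.0 is absorbed (it is below the spacing between
# adjacent doubles near 1e16), so only the last 1.0 survives.
left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]

# Reordered: the big values cancel first, so both 1.0s survive.
reordered = (vals[0] + vals[2]) + (vals[1] + vals[3])

print(left_to_right)  # 1.0
print(reordered)      # 2.0
```

A parallel sum over millions of low-precision values accumulates many such order-dependent roundings, which is why two temp-0 runs can still diverge once a borderline token flips.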
Right. There are countless parameters and seeds and whatnot to tweak. But theoretically, if all the inputs are the same, the outputs should be within epsilon of a known good run. I wouldn't even mandate that temperature or any other parameter be a specific value, just that it's the same. That way you can make sure even the pseudorandom processes are the same, as long as nothing pulls from a hardware RNG or something like that. Which seems like a reasonable thing to offer - idk, maybe an "insecure RNG" mode.
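The seeded-PRNG part of that idea is easy to sketch: with a fixed seed and no hardware entropy, even temperature sampling repeats exactly. The toy distribution and function below are hypothetical, not from any real inference stack.

```python
import random

# Hypothetical sketch: sampling from a toy token distribution with a fixed
# seed, so the pseudorandom part of decoding is fully reproducible. A real
# "insecure RNG" mode would do the same thing inside the inference engine.
def sample_tokens(probs, n, seed):
    rng = random.Random(seed)  # seeded PRNG, never touches hardware entropy
    tokens = list(probs)
    weights = list(probs.values())
    return [rng.choices(tokens, weights=weights)[0] for _ in range(n)]

dist = {"the": 0.5, "a": 0.3, "cat": 0.2}
run1 = sample_tokens(dist, 5, seed=42)
run2 = sample_tokens(dist, 5, seed=42)
print(run1 == run2)  # True: same seed, same sequence
```

Of course, this only pins down the sampling step; the floating-point reduction order upstream of the probabilities is the part that still needs a deterministic execution mode.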