A pocket calculator that gave the right numbers only 99.7% of the time would be fairly useless. The lack of determinism is a problem, and there is nothing 'uncharitable' about that interpretation. It is definitely impressive, but it is fundamentally broken, because when you start chaining things that are each 99.7% correct you end up with garbage after very few iterations. That's precisely why digital computers won out over analog ones: the fact that they are deterministic.
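To put numbers on that decay: if each step in a chain is independently right 99.7% of the time, the chance the whole chain is right falls off exponentially. A back-of-envelope sketch:

```python
def chain_accuracy(per_step: float, steps: int) -> float:
    """Probability an n-step chain is fully correct when each step
    is independently correct with probability per_step."""
    return per_step ** steps

# At 0.997 per step, a 1000-step chain is almost always wrong somewhere.
for n in (10, 100, 1000):
    print(n, chain_accuracy(0.997, n))
```

Ten steps still look fine, a hundred steps are already wrong a quarter of the time, and a thousand steps are wrong about 95% of the time.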
Category error. You want 100% accuracy for an impossible problem. This is a famously unsolved conjecture. The only way to get the answer is to fully calculate it. The task was to make a guess and see how well it could do. 99.7% is surprisingly good. If the task was to calculate, the LLM could write a Python program, just like I would have if asked to calculate the answer.
There is a massive difference between an 'unsolved problem' and a problem solved 'the wrong way'. Yes, 99.7% is surprisingly good. But it did not detect the errors in its own output. And it should have.<p>Besides, we're all stuck on the 99.7% as if that's the across the board output, but that's a cherry picked result:<p>"The <i>best</i> models (bases 24, 16 and 32) achieve a near-perfect accuracy of 99.7%, while odd-base models
struggle to get past 80%."<p>I do think it is a very interesting thing to do with a model and it <i>is</i> impressive that it works at all.
Category error.<p>The problem here is deterministic. <i>It must be for accuracy to even be measured</i>.<p>The model isn't trying to solve the Collatz conjecture; it is learning a pretty basic algorithm and then applying it a number of times. The instructions it needs to learn are<p><pre><code>  if x % 2 == 0:
      x //= 2
  else:
      x = 3*x + 1
</code></pre>
It <i>also</i> needs to learn to put that in a loop and for the loop count to be a variable, but the algorithm itself is static.<p>On the other hand, the Collatz conjecture states that C(x) (the above step, applied repeatedly) reaches the fixed point 1 for all x (where x \in Z+). Meaning that eventually any input will collapse to the loop 1 -> 4 -> 2 -> 1 (or just terminate at 1). You can probably see we know this is true for at least an infinite set of integers...<p>Edit: I should note that there is a slight modification to this, though the model could get away with learning just this. Their variation limits to odd numbers, and not all of them. For example 9 can't be represented by (2^k)m - 1 (but 7 and 15 can). But you can see that there's still a simple algorithm and that the crux is determining the number of iterations. Regardless, this is still deterministic. They didn't use any integers >2^71, so every input is in the range where we absolutely know the sequences and we absolutely know they all terminate at 1.<p>To solve the Collatz conjecture (and probably win a Fields Medal) you <i>must</i> do one of two things:<p><pre><code> 1) Provide a counter-example
2) Show that this happens for all n, which is an infinite set of numbers, so this strictly cannot be done by demonstration.</code></pre>
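For reference, the full procedure the model has to emulate is just that step in a loop; a minimal sketch of the stopping-time computation:

```python
def collatz_steps(x: int) -> int:
    """Number of applications of the Collatz map until x reaches 1."""
    steps = 0
    while x != 1:
        if x % 2 == 0:
            x //= 2          # even: halve
        else:
            x = 3 * x + 1    # odd: triple and add one
        steps += 1
    return steps

print(collatz_steps(27))  # 111 — a famously long trajectory for a small input
```

Trivial to compute directly, which is exactly the point: the hard part for the model is predicting the number of iterations, not the step itself.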
Most primality tests aren't 100% accurate either (e.g. Miller-Rabin); they are just "reasonably accurate" while being very fast to compute. You can use them in conjunction to improve your confidence in the result.
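For the curious, Miller-Rabin fits in a few lines. A sketch in Python; with the fixed witness set below (the first twelve primes), the test is known to be exact for all inputs well beyond 64 bits, and with random bases each round only ever errs in the "probably prime" direction:

```python
def miller_rabin(n: int, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)) -> bool:
    """Miller-Rabin primality test. A 'composite' verdict is certain;
    a 'prime' verdict is probabilistic for arbitrary base choices."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses compositeness: n is definitely composite
    return True
```

Note the asymmetry: `False` is proof, `True` is (in the general randomized setting) only strong evidence — which is the next comment's point.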
Yes, and we <i>know</i> they are inaccurate, and we know that if you test a number that way you can only use the result to reject, not confirm, so if you think that something is prime you still need to check it.<p>But now imagine that instead of only ever producing a false accept 0.3% of the time, it would also reject valid primes. Now it would be instantly useless, because it fails the test for determinism.
It's uncharitable because the comment purports to summarise the entire paper while simply cherry-picking the worst result. It would be like if I asked how I did on my test and you said "well, you got question 1 wrong" and then didn't elaborate.<p>Now I get your point that a function that is 99.7% accurate will eventually always be incorrect, but that's not what the comment said.
Why do people keep using LLMs as algorithms?<p>LLMs are not calculators. If you want a calculator use a calculator. Hell, have your LLM use a calculator.<p>>That's precisely why digital computers won out over analog ones, the fact that they are deterministic.<p>I mean, no not really, digital computers are <i>far</i> easier to build and far more multi-purpose (and technically the underlying signals are analog).<p>Again, if you have a deterministic solution that is 100% correct all the time, use it, it will be cheaper than an LLM. People use LLMs because there are problems that are either not deterministic or the deterministic solution uses more energy than will ever be available in the local part of our universe. Furthermore a lot of AI (not even LLMs) use random noise at particular steps as a means to escape local maxima.
> Why do people keep using LLMs as algorithms?<p>I think they keep coming back to this because a good command of math underlies a vast domain of applications and without a way to do this as part of the reasoning process the reasoning process itself becomes susceptible to corruption.<p>> LLMs are not calculators. If you want a calculator use a calculator. Hell, have your LLM use a calculator.<p>If only it were that simple.<p>> I mean, no not really, digital computers are far easier to build and far more multi-purpose (and technically the underlying signals are analog).<p>Try building a practical analog computer for a non-trivial problem.<p>> Again, if you have a deterministic solution that is 100% correct all the time, use it, it will be cheaper than an LLM. People use LLMs because there are problems that are either not deterministic or the deterministic solution uses more energy than will ever be available in the local part of our universe. Furthermore a lot of AI (not even LLMs) use random noise at particular steps as a means to escape local maxima.<p>No, people use LLMs for <i>anything</i> and one of the weak points in there is that as soon as it requires slightly more complex computation there is a fair chance that the output is nonsense. I've seen this myself in a bunch of non-trivial trials regarding aerodynamic calculations, specifically rotation of airfoils relative to the direction of travel. It tends to go completely off the rails if the problem is non-trivial and the user does not break it down into roughly the same steps as you would if you were to work out the problem by hand (and even then it may subtly mess up).
>A pocket calculator that would give the right numbers 99.7% of the time would be fairly useless.<p>Well that's great and all, but the vast majority of LLM use is not for stuff you can just solve with a pocket calculator (or a similarly airtight deterministic algorithm), so this is a moot point.<p>People really need to let go of this obsession with a perfect general intelligence that never makes errors. It doesn't and has never existed outside of fiction.
Yeah, it's only correct in 99.7% of all cases, but what if it's also 10'000 times faster? There are a bunch of scenarios where that combination provides a lot of value.
Ridiculous counterfactual. The LLM started failing 100% of the time a full 60 <i>orders of magnitude</i> sooner than the point up to which we have checked literally every number.<p>This is not even to mention the fact that asking a GPU to <i>think about</i> the problem will <i>always</i> be less efficient than just asking that GPU to directly compute the result for closed algorithms like this.
Correctness in software is the first rung of the ladder; optimizing before you have correct output is in almost all cases a complete waste of time. Yes, there are some scenarios where having a ballpark figure quickly can be useful, <i>if</i> you can produce the actual result as well and if the other times you are not going to output complete nonsense but something that approaches the final value. There are a lot of algorithms that do this (for instance: Newton's method for finding square roots).<p>99.7% of the time good and 0.3% of the time noise is not very useful, especially if there is no confidence signal indicating that the bad answers are probably incorrect.
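Newton's method is a good illustration of the difference: every intermediate value is already a usable approximation, and each iteration provably tightens it rather than producing occasional noise. A minimal sketch:

```python
def newton_sqrt(a: float, iterations: int = 8) -> float:
    """Approximate sqrt(a) by iterating x <- (x + a/x) / 2.
    The error shrinks quadratically, so a handful of steps suffices."""
    x = a if a > 1 else 1.0  # any positive starting guess converges
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x

print(newton_sqrt(2.0))  # approaches 1.4142... monotonically after the first step
```

Stop it early and you get a worse-but-honest estimate; that is a very different failure mode from 0.3% of answers being arbitrary garbage.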