> We probably agree that LLMs don't have the same understanding of meaning that humans do<p>I think this is absolutely <i>key</i>, because to the extent LLM output has meaning we care about, I think it's effectively all supplied <i>by us</i> after the fact. This doesn't mean LLMs can't do interesting and useful things, but I consider some current maximalist takes on the state and immediate likely future of AI to be doing something a couple of steps up the complexity chain from believing that a pocket calculator which solves "1 + 5" must understand what "6" means to a human. That "6" is just some glowing dots until <i>we</i> assign it meaning and context that the pocket calculator doesn't and cannot have, even though it's really good at solving and displaying the results of calculations.<p>This model explains, I think, the actual experience of using an LLM better than the model I gather some people have, which is that they're doing something pretty close to thinking but just get stuff wrong sometimes, the way a human gets stuff wrong sometimes. (I think they get things wrong <i>differently</i> from how a human does, and it's because they <i>aren't</i> working with the same kind of meaning we are.) I think it's the familiar form of the output that's leading people down this way of thinking about what LLMs are doing, and I think it's wrong in ways that matter, both for policy purposes and for actually working productively with generative AI. (Pleading that they're just doing what humans do to learn and then produce output from what they learned, when it comes to building them with copyright-protected data, falls rather flat with me, for example; I'm not quite to the point of entirely dismissing that argument, but I'm pretty close to sticking it in the "not even wrong" bucket, in part because of my perspective on how they work.) When these programs fail, it's usually <i>not</i> the way a human does, and using the heuristics you'd use to recognize where you need to be cautious with <i>human</i> output will result in mistakes. "Lies" often look a lot like "truth" with them and can come out of nowhere, because that's not quite what they deal in, not the way humans do. They don't really lie but, crucially, they also don't <i>tell the truth</i>. They produce output that may contain correct or incorrect information, in a form that may or may not be useful.<p>> but I'm closer to the position that this is because they haven't been exposed to the same datasets we have, and not necessarily because their fundamental operation is so different.<p>I'm not <i>super</i> far from agreeing with this, I think, but I also think there's probably some approach (or, I'd expect, <i>set of</i> approaches) we need to layer on top of generative AI to make it do something I'd consider notably close to human-type thinking, beyond just being able to poke it and make text come out.
I think what we've got now are, in human terms, something like severely afflicted schizophrenics with eidetic memories, high suggestibility, and an absence of ego or self-motivation. Those turn out to be pretty damn useful things to have, but they aren't necessarily something we'll get broadly human-level (or better) cognition out of just by doing <i>more</i> of it. I mean, they're already better than a lot of people at <i>some</i> tasks, let's face it: zero people who've ever lived could write bad satirical poetry as fast as an LLM can, much as nobody can compute square roots as fast as a pocket calculator. But I doubt that the basic components needed to bridge that gap are present in the current systems at all. I expect we'll see them fail <i>less</i> as we feed them more energy and data, but I expect their failures to continue to look alien and surprising, always due to that mismatch between the meaning we're assigning to what they're doing and their <i>internal</i> sense of "meaning", which are correlated (because we've forced them to be) but not dependent on one another in any necessary way. But yes, giving them more sources of "sensory" input, and a kind of <i>will</i> (with associated tools) to seek out more input, is likely the direction we'll need to go to make them capable of more things, rather than just somewhat better at what they do now.<p>[EDIT] As for why I think the way we discuss how these work <i>matters</i>, aside from the reasons above: laypeople are taking our lead on this to some extent, and when we come out acting like these are <i>thinking agents</i> in some serious sense (or cynically promoting them as <i>super dangerous</i> and close to becoming real conscious entities on the verge of being <i>insanely "smart"</i> because, gee, would you look at that, we're <i>selling</i> the things; ahem, paging Altman), it's a recipe for cooking up harmful misuse of these tools, and for <i>laws and policy</i> that may be at odds with reality.