4 comments

  • gushogg-blake · 1 hour ago
    I haven't found an explanation yet that answers a couple of seemingly basic questions about LLMs:

    What does the input side of the neural network look like? Is it enough bits to represent N tokens, where N is the context size? How does it handle inputs that are shorter than the context size?

    I think embedding is one of the more interesting concepts behind LLMs, but most pages treat it as a side note. How does embedding handle tokens that can have vastly different meanings in different contexts? If the word "bank" were a single token, for example, how does embedding account for the fact that it can mean river bank or money bank? Do the elements of the vector point in both directions? And how exactly does embedding interact with the training and inference processes: does inference generate updated embeddings at any point, or are they fixed at training time?

    (Training time vs. inference time is another thing explanations are usually frustratingly vague on.)
    • Udo · 1 minute ago
      > What does the input side of the neural network look like? Is it enough bits to represent N tokens where N is the context size?

      Not quite. The raw text is converted by the tokenizer into IDs corresponding to tokens. Each token ID then maps onto a vector via a so-called embedding lookup (I always thought "embedding" was a weird word choice, but it's standard).

      This vector is then augmented with further information, such as positional and relational information, which happens inside the model.

      So the context is not a bitfield of tokens; it's a collection of vectors that the model annotates with additional information. The context size of a model is a maximum usable sequence length, not a fixed input array.
  • Barbing · 30 minutes ago
    Left-hand labels (like "Introduction") can overlap the main text content on the right in the central panel; you may be able to trigger it by reducing the window width.
  • lukeholder · 1 hour ago
    The page keeps annoyingly scroll-jumping a few pixels in iOS Safari.
    • tbreschi · 24 minutes ago
      Yeah, the typing effect at the top (expanding the composer) seems to be the issue.
  • learningToFly333 hours ago
    I’ve had a look, and it’s very well explained! If you ever want to expand it, you could also add how embedded data is fed at the very final step for specific tasks, and how it can affect prediction results.