7 comments

  • akersten 2 hours ago
    Unicode is both the best thing that's ever happened to text encoding and the worst. The approach I take here is to treat any text coming from the user as toxic waste. Assume it will say "Administrator" or "Official Government Employee" or be 800 pixels tall because it was built only out of decorative combining characters. Then put it in a fixed box with overflow hidden, and use some other UI element to convey things like "this is an official account."

    The worst part, which this article doesn't even touch on, is the risk with normalizing and remapping characters that your login form doesn't do it but your database does. Suddenly I can re-register an existing account by using a different set of codepoints that the login system doesn't think exists but the auth system maps to somebody else's record.
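    A minimal sketch of that mismatch, assuming Python's `unicodedata` and invented `naive_is_taken`/`db_key` helpers (fullwidth letters are just one of many NFKC collapses):

```python
import unicodedata

# Hypothetical layers: the signup form compares raw strings, while the
# database/auth layer normalizes with NFKC before lookup.
def naive_is_taken(existing, name):
    return name in existing          # raw comparison at the signup form

def db_key(name):
    return unicodedata.normalize("NFKC", name).casefold()

existing = {"admin"}
attacker = "ａｄｍｉｎ"              # fullwidth letters (U+FF41..), look like "admin"

print(naive_is_taken(existing, attacker))   # False: signup lets it through
print(db_key(attacker) == db_key("admin"))  # True: auth collapses it onto "admin"
```

    If both layers normalized with the same form before comparing, the second registration would be rejected.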
    • ElectricalUnion 1 hour ago
      For some sorts of "confusables", you don't even need Unicode in some cases. Depending on the cursed combination of font, kerning, rendering and display, `m` and `rn` are also very hard to distinguish.
      • kps 14 minutes ago
        https://en.wiktionary.org/wiki/keming
  • joshdata 1 hour ago
    > If your application also runs NFKC normalization (which it should — ENS, GitHub, and Unicode IDNA all require it)

    That's not right. Most of the web requires NFC normalization, not NFKC. NFC doesn't lose information in the original string. It reorders and combines code points into *equivalent* code point sequences, e.g. to simplify equality tests.

    In NFKC, the K for "Compatibility" means some characters are replaced with similar, simpler code points. I've found NFKC useful for making text search indexes where you want matches to be forgiving, but it would be both obvious and wrong to use it in most of the web because it would dramatically change what the user has entered. See the examples in https://www.unicode.org/reports/tr15/.
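    The difference is easy to see with Python's `unicodedata` (a sketch: NFC only recomposes, while NFKC rewrites compatibility characters into simpler ones):

```python
import unicodedata

s = "e\u0301"                            # 'e' + combining acute accent
print(unicodedata.normalize("NFC", s))   # 'é' recomposed into U+00E9, same text

k = "\uFB01le \u2461"                    # ligature 'ﬁ' (U+FB01) and circled '②' (U+2461)
print(unicodedata.normalize("NFC", k))   # 'ﬁle ②' left alone by NFC
print(unicodedata.normalize("NFKC", k))  # 'file 2' rewritten by NFKC
```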
    • ZoneZealot 34 minutes ago
      I think we're expecting too much from an LLM-generated article from a user that has been spending a lot of time spamming their content across multiple platforms and websites.
  • Liftyee 1 hour ago
    Does the "removing dead code" advantage outweigh the additional complexity of having to maintain 2 different confusables lists: one for when NFKC has been applied first and one without? It didn't sound like applying one after the other caused any *errors*, just that some previously reachable states are unreachable.
    • lich_king 1 hour ago
      This is an inexplicable, AI-written article and the obvious answer is no. There's no performance or complexity overhead to not removing a couple of dead characters. There is a complexity overhead to forking off the list or adding pointless special cases to your code.
  • happytoexplain 1 hour ago
    Tangential - I'm aware of various types of, let's say, "swappability" that Unicode defines (broader than the Unicode concept of "equivalence"):

    - Canonical (NF)
    - Compatible (NFK)
    - Composed vs decomposed
    - Confusable (confusables.txt)

    Does Unicode not define something like "fuzzy" equivalence? Like "confusable" but more broad, for search bar logic? The most obvious differences would be case and diacritic insensitivity (e, é). Case is easy since any string/regex API supports case insensitivity, but diacritic insensitivity is not nearly as common, and there are other categories of fuzzy equivalence too (e.g. ø, o).

    I guess it makes sense for Unicode to not be interested in defining something like this, since it relates neither to true semantics nor security, but it's an incredibly common pattern, and if they offered some standard, I imagine more APIs would implement it.
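    The usual ad-hoc approximation, assuming Python's `unicodedata` (the `fuzzy_key` name and the extra map are illustrative, not any Unicode standard), is NFKD plus stripping combining marks, with a manual map for letters like ø that have no decomposition:

```python
import unicodedata

# Letters like ø have no canonical decomposition, so NFKD won't split them;
# a small hand-made map fills the gap. This is a heuristic, not a standard.
EXTRA = str.maketrans({"ø": "o", "ł": "l", "đ": "d"})

def fuzzy_key(s: str) -> str:
    s = unicodedata.normalize("NFKD", s)   # é -> e + U+0301
    s = "".join(c for c in s if unicodedata.category(c) != "Mn")  # drop marks
    return s.casefold().translate(EXTRA)

print(fuzzy_key("Café") == fuzzy_key("cafe"))    # True
print(fuzzy_key("Søren") == fuzzy_key("soren"))  # True
```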
  • kccqzy 2 hours ago
    If you allow users to submit arbitrary Unicode strings as text, why would you need to check confusables.txt? Whose confusion are you guarding against?
    • zahlman 1 hour ago
      I suppose: other users, if you store the first user's text and transmit it to another one.
      • kccqzy 48 minutes ago
        Well then it's a failure of UI design if you think this can cause confusion. In any UGC design it should be extremely clear which text is generated by another user and which belongs to the site itself.
        • zahlman 9 minutes ago
          No, no. The problem is, say you operate a forum; a malicious user makes a post that uses a Unicode confusion attack on a URL to direct other forum members to an attack site (e.g. a phishing site).
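          A tiny illustration, assuming Python's built-in `idna` codec (the domain is the classic homograph demo, not a live attack):

```python
# Homograph sketch: Cyrillic 'а' (U+0430) standing in for Latin 'a'.
# The two strings render almost identically but are different code points;
# converting to Punycode (the IDNA wire form) makes the difference visible.
spoof = "\u0430pple.com"   # first letter is Cyrillic
real = "apple.com"

print(spoof == real)          # False: different code points
print(spoof.encode("idna"))   # the ASCII 'xn--' form exposes the lookalike
```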
  • brazzy 2 hours ago
    > The correct use is to check whether a submitted identifier contains characters that visually mimic Latin letters, and if so, reject it

    That is a really bad and user-hostile thing to do. Many of those characters are perfectly valid characters in various non-Latin scripts. If you want to force Latin script on everyone for identifiers, then own up to it and say so. But rejecting just *some* of them for being too similar to Latin characters just makes the behaviour inconsistent and confusing for users.
    • wongarsu 1 hour ago
      What would make sense is to have a blacklist of usernames (like "admin" or "moderator"), then use the confusables map to see if a username or slug is visually confusable with a name from that blacklist.

      I initially thought that must surely be what they are doing and they just worded it very, very poorly. But then of the 31 "disagreements" only one matters, the long s that's either f or s. All other disagreements map to visually similar symbols, like O and 0, which you should already treat as the same for this check.

      Not to mention that this is mostly an issue for URL slugs, so after NFKC normalization. In HTML this is more robustly solved by styling conventions. Even old bb-style forums will display admin and moderator user names in a different color or in bold to show their status. The modern flourish is to put a little icon next to these kinds of names, which also scales well to other identifiers.
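      A sketch of that check in Python (the tiny `CONFUSABLE` map and function names are illustrative; a real map would be generated from Unicode's confusables.txt):

```python
import unicodedata

# Illustrative fragment of a skeleton map: each lookalike collapses to the
# plain letter it resembles. Build the full map from confusables.txt.
CONFUSABLE = {"0": "o", "1": "l", "\u0430": "a", "\u0456": "i", "\u043e": "o"}

BLACKLIST = {"admin", "moderator"}

def skeleton(name: str) -> str:
    name = unicodedata.normalize("NFKC", name).casefold()
    return "".join(CONFUSABLE.get(c, c) for c in name)

def is_reserved_lookalike(name: str) -> bool:
    return skeleton(name) in BLACKLIST

print(is_reserved_lookalike("\u0430dm\u0456n"))  # True: Cyrillic а and і
print(is_reserved_lookalike("m0derat0r"))        # True: zeros for o's
print(is_reserved_lookalike("alice"))            # False
```

      Note that only names colliding with the blacklist get rejected; everything else passes through untouched.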
    • orthoxerox 2 hours ago
      The correct approach is to accept [a-z][a-z0-9]* as identifiers and forbid everything else.
      • skrebbel 1 hour ago
        Yeah fuck foreigners who want to be able to spell their own name right.
        • tsimionescu 1 hour ago
          In all cultures, there is an expectation that you have to provide a name for yourself that is intelligible to the culture you're interacting with, both in written language and in speech. If your name is Albert and you are going to interact with many Japanese speakers, you'll have to call yourself アルバート in writing and pronounce your name as something like "Ah roo bay toe" to fit in. If you have a name whose pronunciation depends heavily on tones, such as a Mandarin or Vietnamese name, and you are going to interact with speakers of a non-tonal language, you'll have to come up with a version that you're happy with even if pronounced in the default neutral tone that those people will naturally use. If your name is 高山, you'll have to spell it as Takayama.

          Similarly, if you're going to create an identifier for yourself that is supposed to be usable in an international context, you'll have to use the lowest common denominator that is acceptable in that context - and that happens to be a-zA-Z0-9. Why the Latin alphabet and numerals and not, say, Arabic, you might ask? Because Chinese and Indian and Arabic speakers are far more likely to be familiar with the Latin alphabet than with each other's writing systems.
        • kgeist 58 minutes ago
          For logins, we're already used to the fact that they're expected to be in Latin. Having them in the native alphabet is more trouble than it's worth (one system supports it, another breaks, etc.; it's easier to remember one login, in Latin, across systems). I'd be irritated, though, if I couldn't use my native alphabet in the user profile for the first name/last name.
        • silon42 1 hour ago
          As someone with a non-ASCII name, I'd like a Unicode whitelist (system-wide if possible).

          And special features to mark Cyrillic or other for-me-dangerous characters.
      • Zardoz84 1 hour ago
        And you pissed off nearly half of the world population.
  • csense 1 hour ago
    My theory: The "long S" in "Congreſs" is an f. They used f instead of s because without modern dental care, a lot of people in the 1600's and 1700's were miffing teeth and fpoke with a lifp.
    • nkrisc 1 hour ago
      https://en.wikipedia.org/wiki/Long_s

      That's not the case.