IMO while the bar is high to say "it's the responsibility of the repository operator itself to guard against a certain class of attack" - I think this qualifies. The same way GitHub provides Secret Scanning [0], it should alert upon spans of zero-width characters that are not used in a linguistically standard way (don't need an LLM for this, just n-grams).<p>Sure, third-party services like the OP can provide bots that can scan. But if you create an ecosystem in which PRs can be submitted by threat actors, part of your commitment to the community should be to provide visibility into attacks that cannot be seen by the naked eye, and make that protection the norm rather than the exception.<p>[0] <a href="https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security" rel="nofollow">https://docs.github.com/en/get-started/learning-about-github...</a>
Regardless of the thorny question of whether it's GitHub's <i>responsibility</i>, it sure would be a good thing for them to do ASAP.
Here's the big reason GitHub <i>should</i> do it:<p><pre><code> It makes the product better
</code></pre>
I know people love to talk money and costs and "value", but HN is a space for developers, not the business people. Our primary concern, as developers, is <i>to make the product better</i>. The business people need us to make the product better, keep the company growing, and beat out the competition. We need them to keep us from fixating on things that are useful but low priority, and to ensure the money keeps flowing. The contention between us is good; it keeps balance. It even ensures things keep getting better if an effective monopoly forms, since they still need us, the developers, to make the company continue growing (look at the monopolies people aren't angry at and how they're different). And they need us more than we need them.<p>So I'd argue it's the responsibility of the developers, hired by GitHub, to create this feature <i>because it makes the product better.</i> Because that's the thing you've been hired for: to make the product better. Your concern isn't the money, your concern is the product. That's what you're hired for.
I'd say that this is also true from a money-and-costs-and-value perspective. Sure, all press is good press... but any number of stakeholders would agree that "we got some mindshare by proactively protecting against an emerging threat" is higher-ROI press than "Ars did a piece on how widespread this problem is, and we're mentioned in the context of our interface making the attack hard to detect."<p>And when the incremental cost to build a feature is low in an age of agentic AI, there should be no barrier to a member of the technical staff (and hopefully they're not divided into devs/test/PM like in decades past) putting a prototype together for this.
I agree and think it's extra important when you have specialized products. Experts are more sensitive to the little things.<p>Engineers and developers are especially sensitive. It's our job to find problems and fix them. I don't trust engineers who aren't a bit grumpy because it usually means they don't know what the problems are (just like when they don't dogfood). Though I'll also clarify that what distinguishes a grumpy engineer from your average redditor is that they have critiques rather than just complaints. Being critique-oriented means searching for solutions to problems; you can't just stop at problem identification.<p><pre><code> > And when the incremental cost to build a feature is low in an age of agentic AI
</code></pre>
I'm not sure that's even necessary. A very quick but still helpful patch would be to display invisible characters, just like we often do with whitespace characters. The diff view can be a bit noisier anyway, so it's the perfect place for this, even if you purposefully use invisible characters in your programming environment.<p>Though we're also talking about an organization that couldn't merge a PR for a year that fixed a one-liner. A mistake that should never have gotten through review. Seriously, who uses a while loop counter checking for equality?!? I'm still convinced they left the "bug" in because it made them money.
> Your concern isn't about the money, your concern is about the product. That's what you're hired for.<p>According to whom? Certainly not the people who did the hiring.<p>I somewhat agree that developers should optimize for something other than pure monetary value, but it has nothing to do with the hiring relationship, just the moral duty to use what power you have to make the world better. In general, this can easily conflict with "what you're hired for."<p>In this case I think showing suspicious (or even all) invisible Unicode in PRs is even a monetarily valuable feature, so the moral angle is mostly moot. And I would put the moral burden primarily on product management either way, since they're the ones with the most power to affect the product, potentially either ordering the right thing to be done or stopping the devs when they try to do it on their own.
<p><pre><code> > According to whom? Certainly not the people who did the hiring.
</code></pre>
Actually yes, according to them. Maybe they'll say that you should <i>also</i> be concerned about the money, but that just makes the business people redundant, now doesn't it? So is it better if I clarify and say that the product is your <i>primary</i> concern?<p>As a developer you have a <i>de facto</i> primary concern with the product. They hire you to... develop. They do not hire you to manage finances, they hire you to manage the product. Doing both is more the job of the engineering manager. But as a developer your expertise is in developing. I don't think this is a crazy viewpoint.<p>You were hired for your technical skills, not your MBA.<p><pre><code> > In this case I think showing suspicious (or even all) invisible Unicode in PRs is even a monetarily valuable feature
</code></pre>
I agree. Though I also think this is true for many things that improve the product.<p>Also note that I'm writing to <i>my audience</i>.<p><pre><code> >> but HN is a space for developers, not the business people.
</code></pre>
How I communicate with management is different, but I'm exhausted when I'm talking to <i>fellow developers</i> and the first question is about monetary value. That's not the first question on our side of things. Our first question is "is this useful?" or "does this improve the product?" If the answer is "yes", <i>then</i> I am /okay/ talking about monetary value. If it's easy to implement and helps the product, just implement it. If it requires time and the utility is valuable, then yes, it helps to formulate an argument about monetary value, since management doesn't understand any other language; but <i>between developers</i> that is a rather crazy place to start (unless the proposal is clearly extremely costly, but then say "I don't think you'd ever convince management" instead of "okay, but what is the 'value' of that feature?"). If I wanted to talk to business people I'd talk to the business people, not another developer...
They might <i>say</i> that your job is to make the product "better", and they might even think they mean it, but I think in practice you'll find that their definition of "better" as it relates to products is pretty closely related to money, and further that they are the authorities on what makes the product "better" so you should shut up and do what they say. If you want to make the product <i>actually</i> better, you're going to have to defy them occasionally. That's not what you were hired for, that's just being a human with principles.
To be frank, I tried to address your point with my comment about the audience.<p>I very much disagree that you <i>start</i> with money and work backwards to technical problems. I do not think this approach would make you efficient at solving problems, nor at increasing profits for the business.<p>And I still firmly believe they need us more than we need them. At the end of the day this is why they want AI coding agents to work out, but I do not think that even in the best case we'll end up in a situation any different from COBOL's. You can make developers more efficient, but replacing them requires an entirely different set of skills.<p>An MBA type with no programming background has a better chance of getting photos taken on their iPhone hung in a museum than of replacing a developer. I'm sure there will be some who succeed at it, but exceptions do not define the rule.
Talking about the audience completely misses my point. I'm not saying it's <i>good</i> to start with money and work back. I'm saying that's what companies <i>actually do</i>, and furthermore that's something the "dev audience" should understand about their employers.<p>> I do not think this approach would make you efficient at solving problems nor at increasing profits for the business.<p>If optimizing for profit doesn't result in profit, that's not the fault of the goal; the company was just incompetent. However, many companies are, in fact, moderately competent, and optimizing for profit works fine for them. It even has a pretty heavy overlap with optimizing for good products, so that's nice.<p>It's fine. We agree on the ideal outcome in this situation.
At the end of the day it boils down to putting your users first.<p>Making the product better generally stems from acting in their interest, honing the tool you offer to provide the best possible experience, and making business decisions that respect their dignity.<p>Your comment talks a lot about product and I agree with it, I just mentioned this so we don't lose sight of the fact this is ultimately about people.
Tldr:
Yeah it would make it better!
I hope I left the lede as the lede.<p>But I also think we've had a culture shift that's hurting our field, where engineers are arguing about whether we should implement certain features based on their monetary value (which is all fictional anyway). But that's not our job. At best, it's the job of the engineering manager to convince the business people that it has not only utility value but monetary value.
It absolutely is. They are simply spreading malware. You can't claim to be a 'dumb pipe' when your whole reason for existence is to make something people deemed 'too complex' simple enough for others to use; in that case you have an immediate responsibility not only to reduce complexity but also to ensure safety. Dumbing stuff down comes with a duty of care.
I think a "force visible ASCII for files whose names match a specific pattern" mode would be a simple thing to help. (You might be able to use the "encoding" command in the .gitattributes file for this, although I don't know if this would cause errors or warnings to be reported, and it might depend on the implementation.)
Especially because it's literally a problem with their code viewer (and VS Code, which is also theirs).<p>I see squares on a properly configured vim in xterm.
It baffles me that any maintainer would merge code like the one highlighted in the issue without knowing what it does. That's regardless of whether you can see the "invisible" characters. There's a transforming function here and an eval() call.<p>The mere fact that a software maintainer would merge code without knowing what it does says a lot about the terrible state of software.
<i>> It baffles me that any maintainer would merge code like the one highlighted in the issue, without knowing what it does.</i><p>I don't know if it is relevant in any specific case that is being discussed here, but if the exploit route is via gaining access to the accounts of previously trusted submitters (or otherwise being able to impersonate them), it could be a case of a team with a pile of PRs to review (many of which are the sloppy unverified LLM output that is causing problems for some popular projects) letting through an update from a trusted source that has been compromised.<p>It could correctly be argued that this is a problem caused by laziness and corner cutting, but it is still understandable because projects that are essentially run by a volunteer workforce have limited time resources available.
Wish I could upvote this more.
In this instance the PR that was merged was from 6 years ago and was clean <a href="https://github.com/pedronauck/reworm/pull/28" rel="nofollow">https://github.com/pedronauck/reworm/pull/28</a>. Looks to me like a force push overwrote the commit that now exists in history since it was done 6y later.
A lot of this thread is debating ASCII vs Unicode in source code, but that framing is too broad. The specific technique here uses Variation Selectors (U+FE00-FE0F and U+E0100-E01EF), which have zero legitimate use in source code. Zero-width joiners and spaces have real purposes in Arabic, Indic scripts, and emoji. Variation Selectors do not -- they modify glyph presentation in fonts, not program semantics.<p>So you don't need to ban Unicode from source to defend against this. Just flag Variation Selectors specifically. A pre-commit hook that catches [\x{FE00}-\x{FE0F}] in .js/.ts/.py files would have detected this entire campaign with no false positives.<p>The more interesting question is why no major CI system or editor does this by default. Someone in this thread mentioned GitHub was told about invisible character injection via their bug bounty program, paid out, and chose not to ship a fix. That's a policy decision worth examining more than the encoding trick itself.
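A minimal sketch of the check described above, as a standalone Python script rather than any particular CI integration (the file-type filtering from the comment is left to the caller):

```python
import re
import sys

# Matches Variation Selectors: U+FE00-FE0F (VS1-VS16) and
# U+E0100-E01EF (VS17-VS256 in the supplementary plane).
VS_RE = re.compile(r'[\uFE00-\uFE0F\U000E0100-\U000E01EF]')

def find_variation_selectors(text):
    """Return (line_number, codepoint_label) pairs for every variation selector."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for m in VS_RE.finditer(line):
            hits.append((lineno, f"U+{ord(m.group()):04X}"))
    return hits

if __name__ == "__main__":
    # Usage: python find_vs.py file1.js file2.ts ...
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            for lineno, cp in find_variation_selectors(f.read()):
                print(f"{path}:{lineno}: variation selector {cp}")
```

Wiring this into a pre-commit hook is then just a matter of running it over the staged files and failing on any output.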
I use non-Unicode mode in the terminal emulator (and text editors, etc), I use a non-Unicode locale, and will always use ASCII for most kinds of source code files (mainly C) (in some cases, other character sets will be used such as the PC character set, but usually it will be ASCII). Doing this will mitigate much of this when maintaining your own software. I am apparently not the only one; I have seen others suggest similar things. (If you need non-ASCII text (e.g. for documentation) you might store them in separate files instead. If you only need a small number of them in a few string literals, then you might use the \x escapes; add comments if necessary to explain it.)<p>The article is about JavaScript, although it can apply to other programming languages as well. However, even in JavaScript, you can use \u escapes in place of the non-ASCII characters. (One of my ideas in a programming language design intended as a replacement for C is that it forces visible ASCII (and a few control characters, with some restrictions on their use), unless you specify by a directive or switch that you want to allow non-ASCII bytes.)
> ... and will always use ASCII for most kind of source code files<p>Same. And I enforce it. I've got scripts and hooks that enforce <i>source files</i> to only ever be a subset of ASCII (not even all ASCII codes have their place in source code).<p>Unicode strings are perfectly fine in resource files. You can build perfectly i18n/l10n apps and webapps without ever using a single Unicode character in a source file. And if you really do need one, there's indeed ASCII escaping available in many languages.<p>Some shall complain that their name as "Author: ..." in comments cannot be written properly in ASCII. If I wanted to be facetious I'd say that soon we'll see:<p><pre><code> # Author: Claude Opus 27.2
</code></pre>
and so the point shall be moot anyway.
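For illustration, such an enforcement hook might look something like this at the byte level (a sketch; the exact allowed subset is a per-project choice, and UTF-8 multibyte sequences show up as individual offending bytes):

```python
# Allowed bytes in source files: printable ASCII plus tab and newline.
# Deliberately excludes CR, form feed, and all other control codes.
ALLOWED = set(range(0x20, 0x7F)) | {0x09, 0x0A}

def non_ascii_violations(data: bytes):
    """Return (offset, byte) for every byte outside the allowed subset."""
    return [(i, b) for i, b in enumerate(data) if b not in ALLOWED]

if __name__ == "__main__":
    # Usage: python ascii_check.py file1.c file2.h ...
    import sys
    for path in sys.argv[1:]:
        with open(path, "rb") as f:
            for offset, b in non_ascii_violations(f.read()):
                print(f"{path}: byte 0x{b:02X} at offset {offset}")
```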
CP437 forever!<p>The biggest use of Unicode in source repos now might be LLM slop, so I certainly don't miss it at all.
I keep seeing this and wondering if the ESLint default rules against weird characters would catch this? But I can’t figure out how to check.
GitHub advertises itself as warning about those Unicode characters: <a href="https://github.blog/changelog/2025-05-01-github-now-provides-a-warning-about-hidden-unicode-text/" rel="nofollow">https://github.blog/changelog/2025-05-01-github-now-provides...</a><p>Of course, it doesn't work though. I reported this to their bug bounty, they paid me a bounty, and told me "we won't be fixing it": <a href="https://joshua.hu/2025-bug-bounty-stories-fail#githubs-utf-filter-warning" rel="nofollow">https://joshua.hu/2025-bug-bounty-stories-fail#githubs-utf-f...</a><p>The exact quote is "Thanks for the submission! We have reviewed your report and validated your findings. After internally assessing your report based on factors including the complexity of successfully exploiting the vulnerability, the potential data and information exposure, as well as the systems and users that would be impacted, we have determined that they do not present a significant security risk to be eligible under our rewards structure." The funny thing is, they actually gave me $500 and a lifetime GitHub Pro for the submission.
The `eval` alone should be enough of a red flag
Yeah, I would have loved to see an example where it was not obvious that there is an exploit. Where it would be possible for a reviewer to actually miss it.
I'm not a JS person, but taking the line at face value, shouldn't it do nothing? Which, if I understand correctly, should never be merged. Why would you merge no-ops?
No it’s not.
OWASP disagrees: See <a href="https://cheatsheetseries.owasp.org/cheatsheets/Nodejs_Security_Cheat_Sheet.html" rel="nofollow">https://cheatsheetseries.owasp.org/cheatsheets/Nodejs_Securi...</a>, listing `eval()` first in its small list of examples of "JavaScript functions that are dangerous and should only be used where necessary or unavoidable". I'm unaware of any such uses, myself. I can't think of any scenario where I couldn't get what I wanted by using some combination of `vm`, the `Function` constructor, and a safe wrapper around `JSON.parse()` to do anything I might have considered doing unsafely with `eval()`. Yes, `eval()` is a blatant red flag and definitely should be avoided.
While there are valid use cases for eval they are so rare that it should be disabled by default and strongly discouraged as a pattern. Only in very rare cases is eval the right choice and even then it will be fraught with risk.
The parent didn't say "there's no legitimate uses of eval", they said "using eval should make people pay more attention." A red flag is a warning. An alert. Not a signal saying "this is 100% no doubt malicious code."<p>Yes, it's a red flag. Yes, there's legitimate uses. Yes, you should always interrogate evals more closely. All these are true
When is an eval not at least a security "code smell"?
It really is. There are very few proper use-cases for eval.
Small discussion yesterday (9+9 points, 9+4 comments) <a href="https://news.ycombinator.com/item?id=47374479">https://news.ycombinator.com/item?id=47374479</a> <a href="https://news.ycombinator.com/item?id=47385244">https://news.ycombinator.com/item?id=47385244</a>
I feel like the threat of this type of thing is really overstated.<p>Sure the payload is invisible (although tbh I'm surprised it is. PUA characters usually show up as boxes with hexcodes for me), but the part where you put an "empty" string through eval isn't.<p>If you are not reviewing your code enough to notice something as nonsensical as eval() on an empty string, would you really notice the non-obfuscated payload either?
Unicode should be for visible characters. Invisible characters are an abomination. So are ways to hide text by using Unicode so-called "characters" to cause the cursor to go backwards.<p>Things that vanish on a printout should not be in Unicode.<p>Remove them from Unicode.
Unicode is "designed to support the use of text in all of the world's writing systems that can be digitized"<p>Unicode needs tab, space, form feed, and carriage return.<p>Unicode needs U+200E LEFT-TO-RIGHT MARK and U+200F RIGHT-TO-LEFT MARK to switch between left-to-right and right-to-left languages.<p>Unicode needs U+115F HANGUL CHOSEONG FILLER and U+1160 HANGUL JUNGSEONG FILLER to typeset Korean.<p>Unicode needs U+200C ZERO WIDTH NON-JOINER to encode that two characters should not be connected by a ligature.<p>Unicode needs U+200B ZERO WIDTH SPACE to indicate a word break opportunity without actually inserting a visible space.<p>Unicode needs MONGOLIAN FREE VARIATION SELECTORs to encode the traditional Mongolian alphabet.
That's a very narrow view of the world. One example: In the past I have handled bilingual English-Arabic files with switches within the same line, and Arabic is written from right to left.<p>There are also languages that are written from top to bottom.<p>Unicode is not exclusively for coding, to the contrary, pretty sure it's only a small fraction of how Unicode is used.<p>> Somehow people didn't need invisible characters when printing books.<p>They didn't need computers either, so "was seemingly not needed in the past" is not a good argument.
> That's a very narrow view of the world.<p>Yes, it is. Unicode has undergone major mission creep, thinking it is now a font language and a formatting language. Naturally, this has led to making it a vector for malicious actors. (The direction reversing thing has been used to insert malicious text that isn't visible to the reader.)<p>> Unicode is not exclusively for coding<p>I never mentioned coding.<p>> They didn't need computers<p>Unicode is for characters, not formatting. Formatting is what HTML is for, and many other formatting standards. Neither is it for meaning.
> That's a very narrow view of the world.<p>But not one that would surprise anyone familiar with WalterBright's antics on this website…
The fact is that there were so many character sets in use before Unicode <i>because</i> all these things were needed or at least wanted by a lot of people. Here's a great blog post by Nikita Prokopov about it: <a href="https://tonsky.me/blog/unicode/" rel="nofollow">https://tonsky.me/blog/unicode/</a>
<p><pre><code> Look Ma
xt! N !
e tee S
T larip
</code></pre>
(No Unicode needed.)
Unicode is for human beings, not machines.
No need to remove them. Just make them visible for applications that don't need to render every language. Make that behavior optional as well in case you really want to write identifiers in Hangul or Tibetan.<p>Some middle ground so that you can use Greek letters in Julia might be nice as well.<p>But I don't see any purpose in using the Private Use Areas (PUA) in programming.
So we need a new standard because of problems with the complexity of the last standard? Isn't Unicode supposed to be a superset of ASCII, which already has control characters like space, CR, and newline? xD
That ship has sailed, but I consider Unicode a good thing, yet I consider it problematic to support Unicode in every domain.<p>I should be able to use Ü as a cursed smiley in text, and many more writing systems supported by Unicode support even more funny things. That's a good thing.<p>On the other hand, if technical and display file names (to GUI users) were separate, my need for crazy characters in file names, code bases and such is very limited. Lower ASCII for actual file names consumed by technical people is sufficient for me.
Another dum dum Unicode idea is having multiple code points with identical glyphs.<p>Rule of thumb: two Unicode sequences that look identical when printed should consist of the same code points.
If anything, Unicode should have had more disambiguated characters. Han unification was a mistake, and lower case dotted Turkish i and upper case dotless Turkish I should exist so that toUpper and toLower didn't <i>need</i> to know/guess at a locale to work correctly.
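The Turkish-i problem is easy to demonstrate with locale-unaware default case mapping (Python shown here purely for illustration; its `str.upper`/`str.lower` use the default Unicode case mappings):

```python
# Unicode has dotless lowercase "ı" (U+0131) and dotted uppercase "İ" (U+0130),
# but plain "i"/"I" round-trip through the wrong letters unless the case
# mapping knows it's dealing with Turkish.
assert "\u0131".upper() == "I"   # dotless ı uppercases to plain I...
assert "I".lower() == "i"        # ...which lowercases back to dotted i
assert "i".upper() == "I"        # dotted i also uppercases to plain I

# İ lowercases to "i" plus U+0307 COMBINING DOT ABOVE (two code points),
# so even the length of the string changes under the default mapping.
assert [hex(ord(c)) for c in "\u0130".lower()] == ["0x69", "0x307"]
```

With separate "dotted i" and "dotless I" code points for Turkish, as the comment suggests, these mappings could be locale-free.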
So you think that the letters in the Greek and Cyrillic alphabets which are printed identically to the Latin A should not exist?<p>And, for example, Greek words containing this letter should be encoded with a mix of Latin and Greek characters?
> So you think that the letters in the Greek and Cyrillic alphabets which are printed identically to the Latin A should not exist?<p>Yes. Unicode should not be about semantic meaning, it should be about the visual. Like text in a book.<p>> And, for example, Greek words containing this letter should be encoded with a mix of Latin and Greek characters?<p>Yup. Consider a printed book. How can you tell if a letter is a Greek letter or a Latin letter?<p>Those Unicode homoglyphs are a solution looking for a problem.
> Yes. Unicode should not be about semantic meaning, it should be about the visual. Like text in a book.<p>Do you think 1, l and I should be encoded as the same character, or does this logic only extend to characters pesky foreigners use.
But these characters only look identical in some fonts. Are you saying that if you change font, some characters in a string should change appearance and others should not?<p>And what about the round-trip rule?<p>And ligatures? Aren't those a semantic distinction?
> But these characters only look identical in some fonts.<p>That's a problem with the fonts.<p>> And what about the round-trip rule?<p>Print Unicode on paper, then ocr it, and you'll get different Unicode. Oh, and normalization.<p>> ligatures<p>Generally an issue with rendering.<p>> semantic distinction<p>Unicode isn't about semantics (or shouldn't be). Consider 'a'. It's used for <i>all kinds</i> of meanings.
Unicode is about semantics not appearance. If you don't need semantics then use something different.
>Yup. Consider a printed book. How can you tell if a letter is a Greek letter or a Latin letter?<p>I can absolutely tell Cyrillic к from the Latin k and Latin u from the Cyrillic и.<p>>should not be about semantic meaning,<p>It's always better to be able to preserve more information in a text and not less.
What about numbers? Would they be assigned to Arabic only? I guess someone will be offended by that.<p>While we're at it we could also unify I, | and l. It's too confusing sometimes.
One of the ground rules of Unicode is the round trip rule. You have to be able to translate to and from Unicode without loss of information.
As far as I know, glyphs are determined by the font and rendering engine. They're not in the Unicode standard.
I don't think that would help much. There are also characters which are similar but not the same and I don't think humans can spot the differences unless they are actively looking for them which most of the time people are not. If only one of two glyphs which are similar appear in the text nobody would likely notice, expectation bias will fuck you over.
greatidea,whoneedsspacesanyway
So you'd remove space and tab from Unicode?
Invisible characters are there for visible characters to be printed correctly...
>Remove them from Unicode.<p>Do you honestly think this is a workable solution?
Good luck with that given there are invisible characters in ASCII.<p>Also this attack doesn't seem to use invisible characters, just characters that don't have an assigned meaning.
Looks like the repo owner force-pushed a bad commit to replace an existing one. But then, why not forge it to maintain the existing timestamp + author, e.g. via `git commit --amend -C df8c18`?<p>Innocuous PR (but do note the line about "pedronauck pushed a commit that referenced this pull request last week"): <a href="https://github.com/pedronauck/reworm/pull/28" rel="nofollow">https://github.com/pedronauck/reworm/pull/28</a><p>Original commit: <a href="https://github.com/pedronauck/reworm/commit/df8c18" rel="nofollow">https://github.com/pedronauck/reworm/commit/df8c18</a><p>Amended commit: <a href="https://github.com/pedronauck/reworm/commit/d50cd8" rel="nofollow">https://github.com/pedronauck/reworm/commit/d50cd8</a><p>Either way, pretty clear sign that the owner's creds (and possibly an entire machine) are compromised.
The value of the technique, I suppose, is that it hides a large payload a bit better. The part you can see <i>stinks</i> (a bunch of magic numbers and eval), but I suppose it’s still easier to overlook than a 9000-character line of hexadecimal (if still encoded or even decoded but still encrypted) or stuff mentioning Solana and Russian timezones (I just decoded and decrypted the payload out of curiosity).<p>But really, it still has to be injected after the fact. Even the most superficial code review should catch it.
Agreed on all those fronts. I'm just dismayed by all the comments suggesting that maintainers just merged PRs with this trojan, when the attack vector implies a more mundane form of credential compromise (and not, as the article implies, AI being used to sneak malicious changes past code review at scale).
Attacks employing invisible characters are not a new thing. Prior efforts here include terminal escape sequences, possibly hidden with CSS that if blindly copied and pasted would execute who knows what if the particular terminal allowed escape sequences to do too much (a common feature of featuritis) or the terminal had errors in its invisible character parsing code.<p>For data or code hiding the Acme::Bleach Perl module is an old example though by no means the oldest example of such. This is largely irrelevant given how relevant not learning from history is for most.<p>Invisible characters may also cause hard to debug issues, such as lpr(1) not working for a user, who turned out to have a control character hiding in their .cshrc. Such things as hex viewers and OCD levels of attention to detail are suggested.
The scary part is how invisible this is in code review. Unicode direction overrides and zero-width characters don't show up in most editors by default. Anyone know a solid pre-commit hook config that catches this reliably?
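A minimal standalone sketch of such a check (not a config for any specific pre-commit framework; note that variation selectors are category Mn, not Cf, so they need an explicit range check on top of the category test):

```python
import unicodedata

# Control characters that legitimately appear in source files.
SAFE = {"\t", "\n", "\r"}

def is_variation_selector(ch):
    """U+FE00-FE0F and U+E0100-E01EF (category Mn, so not caught by Cf)."""
    return "\uFE00" <= ch <= "\uFE0F" or "\U000E0100" <= ch <= "\U000E01EF"

def suspicious_chars(text):
    """Flag Cf/Cc code points (zero-width chars, bidi overrides like
    U+202A-202E, raw controls) and variation selectors, with positions."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for col, ch in enumerate(line, 1):
            if ch in SAFE:
                continue
            if unicodedata.category(ch) in ("Cf", "Cc") or is_variation_selector(ch):
                hits.append((lineno, col, f"U+{ord(ch):04X}"))
    return hits
```

Run it over staged files and fail the commit on any hits; legitimate uses (e.g. ZWNJ in localized string literals) can be allow-listed per project.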
I wonder if this could be used for prompt injection: if you copy and paste the seemingly empty string into an LLM, does it understand it? Maybe the affected Unicode characters aren't tokenized.
Why didn't someone make an AV rule to find stuff like this? They are just plain text files.
The rule must be very simple: any occurrence of `eval()` should be a BIG RED FLAG. It should be handled like a live bomb, which it is.<p>Then, any appearance of unprintable characters should also be flagged. There are rather few legitimate uses of some zero-width characters, like ZWJ in emoji composition. Ideally all such characters should be inserted as \xNNNN escape sequences, and not literal characters.<p>Simple lint rules would suffice for that, with zero AI involvement.
> There are rather few legitimate uses of some zero-width characters, like ZWJ in emoji composition.<p>Emojis are another abomination that should be removed from Unicode. If you want pictures, use a gif.
I think there’s debate (which I don’t want to participate in) over whether or not invisible characters have their uses in Unicode. But I hope we can all agree that invisible characters have no business in code, and banishing them is reasonable.
In our repos, we have some basic stuff like ruff that runs, and that includes a hard error on any non-ASCII characters. We mostly did this after some un-fun times when byte order marks somehow ended up in a file and it made something fail.<p>I have considered allowing a short list that does not include emojis, joining characters, and so on - basically just currency symbols, accent marks, and everything else you'd find in CP-1252 - but never got around to it.
Yeah it would have been nice to end with "and here's a five-line shell script to check if your project is likely affected". But to their credit, they do have an open-source tool [1], I'm just not willing to install a big blob of JavaScript to look for vulns in my other big blobs of JavaScript<p>[1] <a href="https://github.com/AikidoSec/safe-chain" rel="nofollow">https://github.com/AikidoSec/safe-chain</a>
Something like this should work, assuming your encoding is Unicode (normally UTF-8), which grep would interpret; the character class below also covers the bidi overrides and the variation-selector ranges used in this attack:<p><pre><code> grep -P '[\x{200B}\x{200C}\x{200D}\x{FEFF}\x{202A}-\x{202E}\x{FE00}-\x{FE0F}\x{E0100}-\x{E01EF}]' code.ts
</code></pre>
See <a href="https://stackoverflow.com/q/78129129/223424" rel="nofollow">https://stackoverflow.com/q/78129129/223424</a>
Isn't that what this article is about? Advertising an av rule in their product that catches this.
This shows the failure of human review alone; an LLM-based reviewer would have caught it. The two approaches are complementary.
Their button animations almost "crash" Firefox mobile. As soon as I reach them the entire page scrolls at single digit FPS.
Back in the day I was on hacking forums where a lot of script kiddies used to write malicious code.<p>Now that they have LLMs, I'm wondering: are people using them to make new kinds of malicious code, more sophisticated than before?
In this case LLMs were obviously used to dress the code up as more legitimate, adding more human or project relevant noise. It's social engineering, but you leave the tedious bits to an LLM. The sophisticated part is the obscurity in the whole process, not the code.
Why can't code editors have a default-on feature where they show any invisible character (other than newlines)? I seem to remember Sublime doing this at least in some cases... the characters were rendered as a lozenge shape with the hex value of the character.<p>Is there ever a circumstance where the invisible characters are both legitimate and you as a software developer wouldn't want to see them in the source code?
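For illustration, the rendering rule such an editor feature needs is tiny. A sketch (the `<U+XXXX>` marker format is arbitrary, and this only handles category-Cf characters; variation selectors and the like would need extra ranges):

```python
import unicodedata

def reveal(text):
    """Replace invisible format characters (category Cf) with a visible
    <U+XXXX> marker, roughly like Sublime's hex lozenges."""
    out = []
    for ch in text:
        if unicodedata.category(ch) == "Cf":
            out.append(f"<U+{ord(ch):04X}>")
        else:
            out.append(ch)
    return "".join(out)
```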
Check out emacs for options like this.<p>And, yes, there is a circumstance if you want to include Arabic or Hebrew in comments or strings. You need the zero-width direction marks (LRM/RLM) to make that work.
My hot take is that all programming languages should go back to only accepting source code saved in 7-bit ASCII. With perhaps an exception for comments.
eval() used to be evil....<p>Are people using eval() in production code?
Invisible characters, lookalike characters, reversing text order attacks [1]... the only way to use Unicode safely seems to be by whitelisting a small subset of it.<p>And please, everyone arguing the code snippet should never have passed review - do you honestly believe this is the <i>only</i> kind of attack that can exploit invisible characters?<p>[1] <a href="https://attack.mitre.org/techniques/T1036/002/" rel="nofollow">https://attack.mitre.org/techniques/T1036/002/</a>
I don't have to worry about any of this.<p>My clawbot & other AI agents already have this figured out.<p>/s