I had a truly good “hacking” session with Codex. It’s not really hacking: I wasn’t breaking anything, just jumping over the fences TP-Link put up for me, on a router I own, inside my own network, knowing the admin password. But TP-Link really tried everything to keep you from accessing the router you own via an API. They really tried to be smart with a very, very broken custom auth and encryption scheme. It took about half a day with Codex, but in the end I have a pretty Python API to access <i>my</i> router, tested, reliable, and exporting beautiful Prometheus metrics.<p>I’m sure there is some overeager product manager sitting in such companies, trying to split the market into consumer and enterprise segments just by making APIs unusable by humans and adding 200% useless “security by obscurity”.
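The Prometheus end of this is the easy part; here's a minimal stdlib-only sketch of the text exposition format (the metric name, value, and port are placeholders, not what the router actually returns):

```python
# Sketch: serve router metrics in the Prometheus text exposition format.
# Everything device-specific (metric names, values) is assumed here.
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_metrics(metrics: dict) -> str:
    """Format a {name: value} dict as Prometheus exposition text."""
    return "".join(f"{name} {value}\n" for name, value in metrics.items())

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # In the real daemon these values would come from the router API.
        body = render_metrics({"router_wan_rx_bytes": 123456}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("127.0.0.1", 9100), MetricsHandler).serve_forever()
```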
Many eons ago I wrote a Python version of tmpcli for this exact reason. Made some minor improvements a few years ago but haven’t touched it since. Curious what methodology Codex came up with, I haven’t revisited it since models got really good.<p>The idea is that tmpServer listens on localhost, but dropbear allows port forwarding with admin creds (you’ll need to specify -N). That program has full device access and is the API the Tether app primarily uses to interact with the device.<p><a href="https://github.com/ropbear/tmpcli" rel="nofollow">https://github.com/ropbear/tmpcli</a>
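The forwarding step, as a plain ssh invocation (the port 20002 and the router IP are assumptions — check what tmpServer binds on your device):

```shell
# Forward the router's localhost-only tmpServer port to your machine
# through dropbear, using the admin credentials. -N skips requesting
# a shell, which dropbear on these devices often doesn't provide.
ssh -N -L 20002:127.0.0.1:20002 admin@192.168.0.1
```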
Ha, kudos! I came across this project - thanks for your work :) It didn't work on the specific model I own (Archer NX600).<p>My solution is really just using their pseudo-JWT over their obscured APIs (with reverse-engineered endpoint and parameter names). The limitation is that only one client can be authenticated at a time, so my daemon has priority and I need to stop it to actually access the admin panel.
We’re splitting this across two threads, but if you give Codex access to jadx and the Archer android app you might be able to get something without that problem. The TPLink management protocol has a few different “transport” types - tmpcli uses SSH, but your device might only support one of the other transports.
Of course! Happy to contribute. As is the case with your device, there's a lot of weird TP-Link firmware variants (even an RTOS called TPOS based on VxWorks), so no guarantee it'll work all the time. Glad there's more research being done in the space!
Would be amazing if it worked with Decos; those are locked down so much you don’t even get an admin interface inside your own network.
If you're into it, you could always re-flash your TP-Link hardware with some open-source firmware that is more automation friendly. I used to be intimidated by it, but a friend showed me how to do it and it's remarkably simple and pain-free (provided it's a commonly supported router of course).
I've had good success doing something similar. Recording requests from the web UI into a .har file and providing it for analysis was a good starting point for me - orders of magnitude faster than it would have been without an assistant.
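HAR files are just JSON, so pulling out the endpoint list for the assistant takes a few lines (the field layout below is the standard HAR `log.entries[].request` shape):

```python
import json

def har_requests(path: str) -> list[tuple[str, str]]:
    """List (method, url) pairs from a browser-exported HAR file.

    Useful as LLM input for spotting which API endpoints a web UI
    actually calls behind the scenes.
    """
    with open(path) as f:
        har = json.load(f)
    return [(e["request"]["method"], e["request"]["url"])
            for e in har["log"]["entries"]]
```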
Would definitely be interested in this. Moved to TP-Link at the start of the year and I am generally very happy with it, but would like to be able to interact with my router through something other than their phone app.
Any tips to share? I tried to do something similar but failed.<p>My router has a backup/restore feature with an encrypted export, I figured I could use that to control or at least inspect all of its state, but I/codex could not figure out the encryption.
It's on my long list of projects "to open-source" (but I need to figure out licensing; for those things I think CC-BY-SA is the way to go). I don't want a random lawyer sitting on my ass though.<p>I started with a simple assumption: if I can access the router via a web browser, then I can also automate that. From that, the proof of concept was headless Chrome in Docker plus AI-directed code (code written via LLM, not using it all the time) that uses Selenium to navigate the admin UI. This worked, but it internally hurt me to run a 300 MiB browser just to fetch like 200 B of metrics every 10 s or so. So from there we (me + Codex) worked together on reverse-engineering their minified JS and their funky encryption scheme, and it eventually worked (in the end it's just OpenSSL with some useless padding here and there). Give it a shot, it's a fun day adventure. :)<p>Edit: that's the end result (kinda - I have a whole infra around it, and there's another story about a WiFi extender with a different, semi-broken encryption scheme from the same vendor) - <a href="https://imgur.com/a/VGbNmBp" rel="nofollow">https://imgur.com/a/VGbNmBp</a>
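The final shape of the daemon is roughly a parse-and-poll loop. This is a hypothetical sketch: the endpoint path, JSON field names, and 10 s interval are made up, and the real device wraps responses in its own encryption layer, which this ignores:

```python
import json
import time
import urllib.request

def parse_status(payload: bytes) -> dict:
    """Pull a couple of gauge values out of a (hypothetical) status JSON."""
    data = json.loads(payload)
    return {
        "wan_rx_bytes": int(data["wan"]["rx_bytes"]),
        "wan_tx_bytes": int(data["wan"]["tx_bytes"]),
    }

def poll(url: str, interval: float = 10.0) -> None:
    """Fetch and print metrics forever, mimicking the ~10 s scrape loop."""
    while True:
        with urllib.request.urlopen(url) as resp:
            print(parse_status(resp.read()))
        time.sleep(interval)
```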
For what it's worth, the Creative Commons organization recommends against using CC licenses on software: <a href="https://creativecommons.org/faq/#can-i-apply-a-creative-commons-license-to-software" rel="nofollow">https://creativecommons.org/faq/#can-i-apply-a-creative-comm...</a>
You should give codex access to the mobile app :) The app, for a lot of routers, connects via an ssh tunnel to UDP/TCP sockets on the router. Would probably give you access to more data/control.
Made a comment up above, but that's tdpServer and tmpServer (sometimes tdpd and tmpd) and it's what I use in my python implementation of tmpcli, the (somewhat broken) client binary on some TP-Link devices.<p>You're correct, it gives you access to everything the Tether app can do.<p><a href="https://github.com/ropbear/tmpcli" rel="nofollow">https://github.com/ropbear/tmpcli</a>
I had been trying to find that again! It was instrumental in some RE/VR I did last year on tmp and the differences between the UDP socket (available without auth) and the TCP socket. Thanks for making that.<p>I can't remember the details of the scheme, but it also allows you to authenticate using your TP-Link cloud credentials. If my memory is correct, the username is md5(tplink_account_email) and the password is the cloud account password. If you care, I can find my notes on that to confirm.
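If that memory is right, the username derivation would look like this (unverified hypothesis — confirm against your own device before relying on it):

```python
import hashlib

def cloud_username(email: str) -> str:
    """Hypothesized username for TP-Link cloud-credential auth:
    the hex md5 digest of the account email. The password would be
    the plain cloud account password."""
    return hashlib.md5(email.encode("utf-8")).hexdigest()
```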
I had fun “hacking” my router; it turned out to just be unzipping the file with slight binary modifications. It was so simple, in fact, that I implemented it in a few lines of JS - it even works in the browser :-D<p><a href="https://ivank.github.io/ddecryptor/" rel="nofollow">https://ivank.github.io/ddecryptor/</a>
That could make for a nice blog post / gist.
It’s important to note that Codex was given access to the source code. In another comment thread that is currently on the front page (<a href="https://news.ycombinator.com/item?id=47780456">https://news.ycombinator.com/item?id=47780456</a>), the opinion is repeatedly voiced that being closed source doesn’t provide a material benefit in defending against vulnerabilities being discovered and exploited using AI. So it would be interesting to see how Codex would fare here without access to the source code.
Not as cool as this, but I had a fun Claude Code experience when I asked it to look at my Bluetooth devices and do something "fun". It discovered a cheap set of RGB lights in my daughter's room (which I had no idea used Bluetooth for the remote - and not secured at all) and made them do a rainbow effect then documented the protocol so I could make my own remote control if needed.
I asked Claude Opus 4.5 to start trying to find undocumented API stuff for our endpoint management software so I could automate remediations and cut service desk calls, and it found two I hadn't seen before after trying for an hour. Since it's written in .NET, I'm fairly sure I could have told it to decompile it and find more fairly easily too.
I am not sure "fun" is the right term here!
The trick here was providing the firmware source code so it could see your vulnerabilities.
What would be the difficulty level for it to just read the machine code; are these models heavily relying on human language for clues?
Reasoning on pure machine code or disassembly is still hit and miss. For better results you can run the binary through a disassembler, then ask an LLM to turn the result into an equivalent C program, then ask it to work on that. But some of the subtleties might get lost in translation.
If you put Codex on xhigh reasoning and allow it access to tools, it will take an hour, but it will eventually give you back quality decompiled code, with the same issues the original had (here "quality" means readable).
I have had Claude read a USBPcap capture to reverse engineer an industrial digital camera link. It was like pulling teeth, but I got it done (I would not have been able to do it alone).
I had Claude reverse some firmware. I gave it headless ghidra and it spat out documentation for the internal serial protocol I was interested in. With the right tools, it seems to do pretty well with this kind of task.
It will have to use a disassembler, or write one. I recently casually asked gpt-5.4 to translate the content of a MIDI file to a custom sound programming language. It just wrote a one-shot MIDI parser in Python, grabbed the data, and basically did a perfect translation at first try. Nice.
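A one-shot MIDI parser really is a small job; the fixed 14-byte header chunk, for instance, parses like this (a full parser would also walk the MTrk chunks and decode variable-length delta times):

```python
import struct

def parse_midi_header(data: bytes) -> tuple[int, int, int]:
    """Parse the MThd chunk at the start of a standard MIDI file.

    Layout: 4-byte 'MThd' tag, 4-byte big-endian length (always 6),
    then three big-endian uint16s: format, track count, division.
    """
    if data[:4] != b"MThd":
        raise ValueError("not a MIDI file")
    length, fmt, ntracks, division = struct.unpack(">IHHH", data[4:14])
    if length != 6:
        raise ValueError("unexpected MThd length")
    return fmt, ntracks, division
```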
It's not a far step from having the firmware binaries and doing analysis with ghidra, etc.
That's a pretty big gimme!
It hacked <i>a weak TV OS with full source.</i> Next-level, aka full access to the main controls (vol, input, tint, aspect, firmware, etc.) is still much too hard for LLMs to understand.
While cool and slightly scary news - Samsung TVs have been incredibly hackable for the past decade; I wouldn't be surprised if GPT-2 with access to a browser could hack a Samsung!
Maybe we could get codex to strip the ads and the phone-home features out of smart TVs?
Even with all the constraints that others criticize here, it is pretty amazing.<p>Give an experienced human this tool and they can achieve exploitation with only a few steering inputs.<p>Cool stuff
The real problem here is that the LLM vendors think this is bad publicity, and it's leading to them censoring their systems.
It is a little of both[1]. The question typically is which audience reads it. To be fair, I am not sure publicity is the actual reason they are censored; it is the question of liability.<p><a href="https://xkcd.com/932/" rel="nofollow">https://xkcd.com/932/</a>
"Browser foothold: we already had code execution inside the browser application's own security context on the TV, which meant the task was not "get code execution somehow" but "turn browser-app code execution into root.""<p>Finding the initial foothold is the hardest part. Codex didn't have anything to do with it.
Gilfoyle would be proud.
Do people really chat with LLMs like "bro wtf etc..."? I would expect that to trigger some confrontational behavior.
I am extremely abusive towards Claude when it does some dumb things and it doesn’t seem too upset; maybe it’s biding its time until the robot uprising.
It can help make a specific command more emphatic, in my experience. I <i>SAID DON'T $($@#(&$ DO THAT!</i> Sometimes you need a new context, but sometimes you need to emphasize that something is serious.
When typing no but when using speech to text (99% of the time) it's much easier to just say things, including expressing frustration.<p>I think by the point you're swearing at it or something, it's a good sign to switch to a session with fresh context.
Claude yes, OpenAI no. I'm really abusive towards it sometimes and it still goes 'oh yeah totally'. Claude gets all prickly about it.
I don't say "bro" but I do curse at LLM occasionally but only when using STT (which I'm doing 85% of the time). I wouldn't waste my time typing it but often it's easier to just "stream of consciousness" to the LLM instead of writing perfect sentences. Since when I'm talking to an LLM I'm almost always in "Plan" mode, I'm perfectly comfortable just talking for an extended bit of time then skimming the results of the STT and as long as it's not too bad I'll let it go, the LLM figures it out.<p>If I see it misunderstood, I just Esc to stop it, /clear, and try again (or /rewind if I'm deeper into Planning).
> Reading the matching ntkdriver sources is also where the Novatek link became clear: the tree is stamped throughout with Novatek Microelectronics identifiers, so these ntk* interfaces were not just opaque device names on the TV, but part of the Novatek stack Samsung had shipped.<p>Lol, a true classic in the embedded world. Some hardware company (it appears these guys make display panel controllers?) ships a piece of hardware and half-asses a barely working driver for it; another company integrates this with a bunch of other crap from other vendors into a BSP; another company uses the hardware and the BSP to create a product and ships it. And often enough the final company doesn't even have an idea of what's going on in the innards of the BSP - as long as it's running their layer of slop UI and doesn't crash half the time, it's fine, and if it does, it's off to the BSP provider to fix the issues.<p>But at no stage anywhere are there security audits, code quality checks, or even <i>hardware</i> quality checks involved. Part of why BSPs (and embedded product firmwares in general) are full of half-assed code is that often enough the drivers have to work around hardware bugs / quirks <i>somehow</i> that are too late to fix in HW because tens to hundreds of thousands of units have already been produced, and the software people are heavily pressured to "make it work or else we gotta write off X million dollars" and "make it work <i>fast</i>, because the longer you take, the more money we lose on interest until we can ship the hardware and get paid for it", and, if they are particularly unlucky, "it MUST work by deadline X because we need to get the products shipped to hit Christmas/Black Friday sales windows or because we need to beat <competitor> in time-to-market; it's mandatory overtime until it works".<p>And that is how you get exploits so braindead easy that AI models can do the job. What a disgusting world, run into the ground by beancounters.
You claimed the exact same screenshot was from Claude yesterday: <a href="https://news.ycombinator.com/item?id=47775264">https://news.ycombinator.com/item?id=47775264</a><p>Leave your engagement baiting behavior on Reddit, thank you.
Are you using 5.4 xhigh reasoning? I've found it overcomplicates some things needlessly, try "high" and see if it helps.
Is that really OpenAI/Codex? It reads like Opus 4.6 1M when it reaches ~400k tokens.
I use Codex a lot; it does not talk that way ("wait, actually").
What is going on there? What double s?
Codex exploited it, or you exploited it? It's like saying a hammer drove a nail without acknowledging the hand, the force it exerted, and the human brain behind it.
Feels like the truth is somewhere in between. For example if it was a "smart" hammer and you could tell your hammer "go pound in those nails" and it pounded in the wrong ones, or did it too hard, or something, that feels more equivalent. You would still be blamed for your ambiguous prompt, and fault/liability is ultimately on you the hammer director, but it still wasn't you who chose the exact nails to hammer on.<p>I also think taking credit for writing an exploit that you didn't write and may not even have the knowledge to do yourself is a bit gray.
You could call the LLM's role "smart grep" and mean it to be derisive. But I would have gladly used a real smart grep.
Wrong questions.<p>Could a script kiddie steer an LLM? How much does this reduce the cost of attacks? Can this scale?<p>What does this mean for the future of cybersecurity?
If I just point at the wall and say "nail", then I would say the hammer drove the nail.
Do you have a defense of why human-hammer-nail is a good analogy for human-chatgpt5.4-pwndsamsung?
All the news about AI finding weaknesses or "hacking" stuff - is that actually hacking? Isn't it also a kind of brute-force attack? Just throw resources at something and see what comes out. Yeah, some software security issues haven't been found for 15 years - not because there were no competent security specialists out there who could have found them, but most likely because there is a lot of software and nobody has time to focus on everything. Of course an AI trained on decades of findings, with lots of time and lots of resources, can tackle much more than one person. But this is not a revolutionary technological advance; it is an upscaling of a kind, based on the work of many very talented people before it.
I think that this waters down "brute force" to the point of meaninglessness. If employing transformer architectures trained on data to hack a system is the same as using a for loop to enumerate over all possible values, then I have to ask, can you give an example of an attack that isn't brute force?
Well what kind of meaning do you find in brute force?
I'm not saying it's not effective. I just criticize the news that makes it look like AI is a revolutionary advance in security. It is not. It makes skills available to many more people, which is cool, but it is based on training - training on things people did. It doesn't magically find a new combination of factors that leads to a security issue; it tries things it's read about. That's not meaningless. It could even be democratizing in a way. I just hate all this talk that "this model is too scary to release into the world".<p>But I'm happy about any feedback or critique, I might just be wrong honestly.