50 comments

  • nrenegar 36 minutes ago
    You should financialize this by creating a prediction market around it.
  • Aurornis 1 day ago
    Kickstarter is full of projects like this where every possible shortcut is taken to get to market. I've had some good success with a few Kickstarter projects, but I've been very selective about which projects I support. More often than not I can identify when a team is in over their heads or thinks they're just going to figure out the details later, after the money arrives.

    For a period of time it was popular for the industrial designers I knew to try to launch their own Kickstarters. Their belief was that engineering was a commodity they could hire out to the lowest bidder after they got the money; the product design and marketing (their specialty) was the real value. All of their projects either failed or cost them more money than they brought in, because engineering was harder than they thought.

    I think we're in for another round of this now that LLMs give the impression that the software and firmware parts are basically free. All of those project ideas that were previously shelved because software is hard are getting another look from people who think they're just going to prompt Claude until the product looks like it works.
    • noduerme 3 hours ago
      I think you're right. And it's going to be loads of fun to watch.

      Not to say there haven't also been very good coders who *weren't* outsourcing anything, who still got out over their skis with stuff they promised on Kickstarter. I worked on Star Citizen and watched the lure of inflating project scope, responding to the vox populi, go to someone's head in real time, when they could still at some point conceivably have done what they had promised if they could just resist promising more stuff.

      I find it odd that industrial designers wouldn't have a firmer grasp than coders on what's involved in shipping a product, since code seems much more prone to mission creep than a physical product. But I totally agree that if you're used to outsourcing the build phase of whatever you do, AI is going to be the ultimate mirage.
    • lr4444lr 1 day ago
      At this point, I trust LLMs to come up with something more secure than the cheapest engineering firm for hire.
      • nozzlegear 1 day ago
        "Anyone else out there vibe circuit-building?"

        https://xcancel.com/beneater/status/2012988790709928305
        • godelski 18 hours ago
          Is there more context to this? I'm assuming Ben is experimenting and demonstrating the danger of vibe circuit designing? Mostly because I know he has a ton of experience and I'd expect him not to make this mistake (it also seems like he told the AI why it was wrong).
          • nozzlegear 16 hours ago
            I'm not sure; it was posted on HN a couple weeks ago with the same title as the text in his tweet. I'd guess he was experimenting and trying to show the dangers, like you suggested.
        • stared 7 hours ago
          In https://quesma.com/blog/nano-banana-pro-intelligence-with-tools/ (Nov 2025) we had an illustrative diagram of using Nano Banana Pro to create a circuit diagram.
        • alexjplant 19 hours ago
          People make these mistakes too. Several times in my high school shop class, kids shorted out 9V batteries trying to build circuits because they didn't understand how electronics work. At no point did our teacher stop them from doing so; on at least one occasion I unplugged one from a breadboard before it got too toasty to handle (and I was/am an electronics nublet). Similarly, there was a lot of hand-wringing about the Gemini pizza glue in a world where people do wacky stuff like cook fish in a dishwasher, defrost chicken overnight on the counter, or put cooked steak on the same plate it was on when raw just a few minutes prior.

          LLMs are just surfacing the fact that assessing and managing risk is an acquired, difficult-to-learn skill. Most people don't know what they don't know, and fail to think about what might happen if they do something (correctly or otherwise) before they do it, let alone what they'd do if it goes wrong.
          • Majromax 4 hours ago
            > Several times in my high school shop class kids shorted out 9V batteries trying to build circuits because they didn't understand how electronics work. At no point did our teacher stop them from doing so

            Yes, and that's okay, because the classroom is a learning environment. However, LLMs don't learn; a model that releases the magic smoke in this session will be happy to release it all over again next time.

            > LLMs are just surfacing the fact that assessing and managing risk is an acquired, difficult-to-learn skill.

            Which makes the problem worse, not better. If risk management is a difficult skill, then we can't extrapolate from 'easy' demonstrations of that skill to argue that an LLM is generally safe for more sensitive tasks.

            Overall, it seems like LLMs have a long tail of failures. Even while their mean or median performance is good, they seem exponentially more likely than a similarly competent human to advise something like `rm -rf /`. This is deeply unintuitive behaviour, precisely because our 'human-like' intuition is engaged with respect to average/median skill.
          • nozzlegear 19 hours ago
            Well said, but I'd add that LLMs are also surfacing the fact that there's a swathe of people out there who will treat the machines as more trustworthy than humans by default, and don't believe they need to do any assessment or risk management in the first place.
            • hnlmorg 7 hours ago
              People are just lazy. It's got nothing to do with LLMs getting more trust because they're a machine: most people would happily trust their friend over an expert. They'd trust the first blog post they find online over an expert. Most people are just too lazy, and not skilled enough, to perform independent review.

              And to be fair to those people, coming to topics with a research mindset is genuinely hard and time-consuming. So I can't actually blame people for being lazy.

              All LLMs do is provide an even easier way to "research". But it's not like people were disbelieving random Facebook posts, online scams, and word of mouth before LLMs.
              • noduerme 1 hour ago
                As right as this may be, it elides the crucial difference between asking LLMs and all the other methods of asking questions you enumerated. The difference is not the quality of information you might get from a friend or a blog versus an LLM. The difference is the centralization and feeding of the same poor-quality information to massive numbers of people *at scale*. At least whatever bonkers theory someone "researches" on their own is going to be a heterodox set of ideas, with a limited blast radius. Even a major search engine up-ranking a site devoted to, like, how horse dewormers can cure covid doesn't present that link as *the answer* to how to cure covid, right? LLMs have a pernicious combination of sounding authoritative while speaking gibberish. Their real skill is not in surfacing the truth from a mass of data; it's in presenting a set of assertions as truth in a way that might satisfy the maximum number of people with limited curiosity, and in establishing an artificial sense of trust. That's why LLMs are likely the most demonic thing ever made by man. They are machines built to lie, tell half-truths, obfuscate and flatter at the same time. Doesn't that sound enough like every religion's warning about the devil?
          • godelski 18 hours ago
            What's your point?

            The AI is being sold as an expert, not a student. These are categorically different things.

            The mistake in the post is one that can be avoided by taking a single class at a community college. No PhD required, not even a B.S., not even an electrician's certificate.

            So I don't get your point. You're comparing a person in a learning environment to the equivalent of a person claiming to have a PhD in electrical engineering. A student letting the magic smoke escape from a *basic* circuit is a learning experience (a memorable one with high impact), especially when done in a learning environment where an expert can make more dangerous mistakes less likely or nonexistent. But the same action from a PhD-educated engineer would make you reasonably question their qualifications. Yes, humans make mistakes, but if you follow the AI's instructions and light things on fire, you get sued. If you follow the engineer's instructions and set things on fire, then that engineer gets fired and likely loses their license.

            So what is your point?
            • bethekidyouwant 18 hours ago
              No one thinks their breadboard won't catch on fire just because an AI agent told them it wouldn't. It's never been easier to learn because of these agents.
              • zdragnar 17 hours ago
                Lawyers are getting in trouble because they use AI and submit fabricated citations about fabricated cases as precedent. A bunch of charges were recently thrown out in Wisconsin because of this, and it's not the first time such behavior has made the news.

                https://www.wpr.org/news/judge-sanctions-kenosha-county-da-ai-use-court

                AI is indeed being understood as an expert that replaces human judgement, and people are being hurt because of it.
                • DrewADesign 6 hours ago
                  The real analog here would be an electronics teacher leading his students to create a circuit that caught fire. If you're confidently giving faulty information to people who don't know any better, you're not teaching them.
              • techpression 17 hours ago
                In my experience people don’t use LLMs to learn but to circumvent learning.
                  • aix1 10 hours ago
                    I am sure this is true. On the flip side, as someone who is addicted to learning, I've been finding LLMs amazing at feeding my addiction. :)

                    Some recent examples:

                    * Foreign languages ("explain the difference between these two words that have the same English translation", "here's a photo of a mock German exam paper and here is my written answer - mark it & show how I could have done better")

                    * Domains that I'm familiar with but might not know the exact commands off the top of my head (troubleshooting some ARP weirdness across a bunch of OSX/Linux/Windows boxes on an Omada network)

                    * Learning basic skills in a new domain ("I'm building this thing out of 4mm mild steel - how do I go about choosing the right type of threading tap?", "what's the difference between Type B and Type F RCCB?")

                    Many of these can be easily answered with a web search, but the ability to ask follow-up questions has been a game changer.

                    I'd love to hear from other addicts: are there areas where LLMs have really accelerated your learning?
                    • rescbr 4 hours ago
                      Hah, yesterday I was discussing solar panels and moving shadows. I would have wasted money buying a commercial solar panel if I didn't have this chat.

                      I learned a lot about how it works, to the point where I'm confident I can go the DIY route and spend my money on AliExpress buying components instead.

                      Why not ask a pro solar panel installer instead? I live in an apartment; of course they would say it's not possible to place a solar panel on my terrace. I don't believe in things not being possible.

                      But I had two semesters of electronics/robotics in my CS undergrad, and I know not to trust the LLM blindly and to verify.
                      • maximalthinker 3 hours ago
                        "I don't believe in things not being possible."

                        Found the Musk-eteer.
                    • techpression 8 hours ago
                      I agree; I always ask to know more if I don't get it or it's a new subject. But I think we're in the minority. It's easier to just accept the answer and move on, which requires very little effort compared to trying to understand and retain.
                • FrinkleFrankle 10 hours ago
                  Just because a calculator will only ever be used by a subset of the population to type 80085 and giggle doesn't mean it can't also be used for complex calculations.

                  AI is a tool that can accelerate learning, or severely inhibit it. I do think the tooling is going to continue to make it easier and easier to get good output without knowing what you're doing, though.
                • cwillu 11 hours ago
                  Exactly. I like to say that learning feels like frustration. If I'm right, then LLMs eliminate *precisely* the thing that *is* learning.
              • godelski 18 hours ago
                That's a very strong claim. I don't think people expect their circuits to ignite, LLM instruction or not. But I'd expect that learning from a book or dedicated website would make that less likely to occur (even accounting for bad manufacturing).

                You're biased because you're not considering that, by definition, the student is inexperienced. Unknown unknowns. Tons of people don't know very basic things (why would they?), like circuits with capacitors being dangerous when the power is off.

                Why are you defending the LLM? Would you be as nice to a person? I'd expect not, because these threads tend to point out a person's idiocy. I'm not sure why we give greater leeway to the machine, or why we forgive it as if it were a student learning, when someone posting similar instructions on a blog gets (rightfully) thrashed. That blog writer is almost never claiming PhD expertise.

                I agree that LLMs can greatly aid in learning. But I also think they can greatly hinder it. I'm not sure why anyone thinks this is any different from when people got access to the internet. We gave people access to all the information in the world, and people "do their own research" and end up making egregious errors because they don't know how to research (they naively think it's "searching for information"), what questions to ask, or how to interrogate data (and much more). Instead we've ended up with lots of conspiratorial thinking. Now a sycophantic search engine is going to fix that? I'm unconvinced, mostly because we can observe the result.
                • xvilka 11 hours ago
                  > We gave people access to all the information in the world and people "do their own research" and end up making egregious errors because they don't know how to research (naively think it's "searching for information"), what questions to ask, or how to interrogate data (and much more).

                  You pinpointed a major problem with education, indeed. Personally, I think three crucial courses should be taught in school to mitigate that: 1) rational thinking, 2) learning how to learn, 3) learning how to do research.
                • bethekidyouwant 15 hours ago
                  The result of more people getting into electronics because it’s easier now?
                  • godelski 11 hours ago
                    When reading, I suggest trying to interpret what the person wrote rather than just ignoring it. I'd probably start by taking the advice of your username.
            • azan_ 18 hours ago
            What’s wrong with dishwasher salmon?
              • astura 15 hours ago
                It doesn't get hot enough to be a safe cooking method.

                https://youtu.be/dSwzau2_KF8?t=1108
                • wafflemaker 7 hours ago
                  In Norway we eat plenty of salmon that is quite raw or even raw (in sushi). It has to be frozen and thawed first, to kill parasites.

                  A friend who studied fish production did recommend not eating salmon, though, and eating trout instead (ørret in Norwegian). Based on scientific evidence the difference is pretty small (15% of fish not surviving for salmon vs. 12% for trout). But rainbow trout does have more DHA per kg.
            • kortilla 11 hours ago
            The difference is that LLMs pretend to be experts on all things. The high school shop kids aren’t under the impression they can build a smart toaster or whatever.
          • JKCalhoun 17 hours ago
            Ha ha, I said this before when Ben's post came up earlier, but yes, I am. And so far it has been a positive experience.
        • Aurornis 1 day ago
          The cheapest engineering firms you hire are also using LLMs.

          The operator is still a factor.
          • jama211 1 day ago
          Yeah, but they’ll add another layer of complexity over doing it yourself
            • Aurornis 1 day ago
              The people doing these Kickstarters are outsourcing the work because they can't do it themselves. If they use an LLM, they don't know what to look for or even ask for, which is how they get these problems where the production backend uses shared credentials and has no access control.

              The LLM got it to a "working" state, but the people operating it didn't understand what it was doing. They just prompt until it looks like it works, and then ship it.
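The "shared credentials, no access control" failure can be sketched concretely. A hedged illustration in Python, assuming an MQTT-style broker (the thread later mentions a shared broker, but the real topic names and credential scheme here are invented): with one shared login and no per-client ACL, nothing stops any buyer's client from subscribing with a wildcard and receiving every device's stream.

```python
# Minimal MQTT topic-filter matching ('+' spans one level, '#' the rest),
# to show why shared credentials without ACLs expose all devices.
# Topic names below are hypothetical, not from the article.

def topic_matches(filter_: str, topic: str) -> bool:
    """Return True if an MQTT topic filter matches a concrete topic."""
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True  # multi-level wildcard matches everything below
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

# With one shared login, any client can subscribe to the firehose:
all_devices = ["devices/alice-123/eeg", "devices/bob-456/eeg"]
assert all(topic_matches("devices/#", t) for t in all_devices)

# A per-client ACL would instead pin each client to its own subtree:
assert topic_matches("devices/alice-123/+", "devices/alice-123/eeg")
assert not topic_matches("devices/alice-123/+", "devices/bob-456/eeg")
```

The fix is not in the matching logic (that is standard broker behaviour) but in issuing per-device credentials and an ACL that restricts each credential to its own `devices/<id>/#` subtree.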
              • jama211 50 minutes ago
              Well I certainly hope that’s true to some degree or I’m out of a job
            • caminante 1 day ago
              You're still not following.

              The parents are saying they'd rather vibe code themselves than trust an unproven engineering firm that does(n't) vibe code.
              • numpad0 10 hours ago
                This took me a while (I'm slow), but I think GP is saying, "I've seen enough people thinking the idea is the key when engineering was; with everyone snorting LLMs, we'll see that replicated in the software world", but more nicely.

                THAT makes sense. Engineering was never cheap nor non-differentiating if normalized by man-hours, only when normalized by USD. If a large enough number of people get the same FALSE impression that the software and firmware parts are now basically free, non-differentiating commodities, then there will be tons of spectacular failures in the software world in coming years. There have already been early previews of those here.
              • Aurornis 18 hours ago
                I'm following exactly, but the parent commenter is off on a tangent unrelated to the topic.

                We're not talking about the parent commenter; we're talking about unskilled Kickstarter operators making decisions, not a skilled programmer using an LLM.
              • TeMPOraL 21 hours ago
                > *they'd rather vibe code themselves than trust an unproven engineering firm*

                You could cut the statement short here, and it would still be a reasonable position to take these days.

                LLMs are still complex, sharp tools. Despite their simple appearance and the protestations of both their biggest fans and haters alike, the dominating factor in the effectiveness of an LLM tool on a problem is still whether or not you're holding it wrong.
                • caminante 15 hours ago
                  I forgot about that Jobs/Apple reference!

                  Paraphrasing: LLMs are great (bad) tools for the right (wrong) job...

                  in the right hands,

                  at the right time,

                  in the right place...
      • seanmcdirmid 18 hours ago
        I don’t know, you can get a lot of nice engineering done in a Shenzhen dark alley.
      • Kiro 23 hours ago
        LLMs definitely write more robust code than most. They don't take shortcuts or resort to ugly hacks. They have no problem writing tedious guards against edge cases that humans brush off. They also keep comments up to date and obsess over tests.
        • Hendrikto 21 minutes ago
          > They don't take shortcuts or resort to ugly hacks.

          In my experience that is all they do, and you constantly have to fight them to get the quality up, and then fight again to prevent regressions on every change.
        • thayne 19 hours ago
          > They don't take shortcuts or resort to ugly hacks.

          That hasn't universally been my experience. Sometimes the code is fine. Sometimes it is functional but organized poorly, or does things in a very unusual way that is hard to understand. And sometimes it produces code that might work sometimes but misses important edge cases and isn't robust at all, or does things in an incredibly slow way.

          > They have no problem writing tedious guards against edge cases that humans brush off.

          The flip side is that instead of coming up with a good design that doesn't have as many edge cases, it will write verbose code that handles many different cases in similar, but not quite the same, ways.

          > They also keep comments up to date and obsess over tests.

          Sure, but they will often make comments or tests that aren't actually useful, or modify tests to succeed instead of fixing the code.

          One significant danger of LLMs is that the quality of the output is highly variable and unpredictable.

          That's okay if you have someone knowledgeable reviewing and correcting it. But if you blindly trust it because it produced decent results a few times, you'll probably be sorry.
          • godelski 18 hours ago
            > Sure but they will often make comments or tests that aren't actually useful, or modify tests to succeed instead of fixing the code.

            I've been deeply concerned that there's been a rise of TDD. I thought we already went through this and saw its failure. But we're back to where people cannot differentiate "tests aren't enough" from "tests are useless". The amount of faith people put into tests is astounding, especially when they aren't spending much time analyzing the tests and understanding their coverage.
        • godelski 18 hours ago
          > They don't take shortcuts or resort to ugly hacks.

          My experience is quite different.

          > They have no problem writing tedious guards against edge cases that humans brush off.

          Ditto.

          I have a hard time getting them to write small and flexible functions, even with explicit instructions about how a specific routine should be done. (This is really easy to produce in bash scripts, as they seem to avoid using functions, but so do people, and most people suck at bash.) IME they're fixated on the end goal and do not grasp the larger context, which is often implicit, though I still find difficulty when I'm highly explicit. At that point it's usually faster to write it myself.

          It also makes me question context. Are humans not doing this because they don't think about it, or because we've been training people to ignore things? How often do we hear "I just care that it works"? I've only heard that phrase from those who also love to talk about minimum viable products, because, frankly, who is not concerned whether it works? That's always been a disagreement about what is sufficient. Only very junior people believe in perfection. It's why we have sayings like "there's no solution more permanent than a temporary fix that works". It's the same people who believe tests are proof of correctness rather than a bound on correctness. The same people who read that last sentence and think I'm suggesting not to write tests, or that I believe tests are useless.

          I'd be quite concerned about the LLM operator because of this. Subtle things are important when instructing LLMs; subtle things in the prompts can wildly change the output.
        • girvo 18 hours ago
          They absolutely take shortcuts and resort to ugly hacks.

          My AGENTS.md is filled with specific lines to counter all of them that come up.
        • kahnclusions 16 hours ago
          What? Yes, they do take shortcuts and hacks. They change the test cases to make them pass. As the context gets longer, they get less reliable at following earlier instructions. I literally had Claude hallucinate nonexistent APIs and then admit, "You caught me! I didn't actually know, let me do a web search", and after the web search it still mixed deprecated patterns and APIs against instructions.

          I'm much more worried about the reliability of software produced by LLMs.
        • BoorishBears 21 hours ago
          I had 5.3-Codex take two tries to satisfy a linter on TypeScript type definitions.

          It gave up, removed the code it had written that directly accessed the correct property, and replaced it with a new function that did a BFS walk through every single field of the API response object, applying a "looksLikeHttpsUrl" regex and hoping the first valid URL with https:// would be the correct key to use.

          On the contrary, the shift from pretraining driving most gains to RL driving most gains is pressuring these models to resort to new hacks and shortcuts that are increasingly novel and disturbing!
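For readers who haven't seen this failure mode: the hack described above is easy to reconstruct. This is a hypothetical sketch in Python (the original incident was TypeScript; the field names, `looksLikeHttpsUrl` behaviour, and response shape here are all invented for illustration):

```python
# Reconstruction of the anti-pattern: instead of reading the known field,
# BFS every field of the response and return the first https URL found.
import re
from collections import deque

LOOKS_LIKE_HTTPS_URL = re.compile(r"^https://\S+$")

def find_first_https_url(obj):
    """BFS over a nested response object, hoping the first hit is right."""
    queue = deque([obj])
    while queue:
        node = queue.popleft()
        if isinstance(node, dict):
            queue.extend(node.values())
        elif isinstance(node, list):
            queue.extend(node)
        elif isinstance(node, str) and LOOKS_LIKE_HTTPS_URL.match(node):
            return node
    return None

resp = {"id": "abc",
        "meta": {"docs": "https://docs.example.com"},
        "download_url": "https://cdn.example.com/file.bin"}

# The fragile part: which URL comes back depends on traversal order,
# not on which field is actually the one you wanted.
assert find_first_https_url(resp) in {"https://docs.example.com",
                                      "https://cdn.example.com/file.bin"}

# The direct access the model deleted is one line:
assert resp["download_url"] == "https://cdn.example.com/file.bin"
```

The sketch type-checks trivially (everything is stringly-typed), which is exactly why a model fighting a linter finds it attractive.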
        • devmor 22 hours ago
          Interesting and completely wrong statement. What gave you this impression?
          • dylanowen 22 hours ago
            I know right. I kept waiting for a sarcasm tag at the end
          • majorchord 22 hours ago
            Right and wrong don't exist when evaluating subjective quantifiers.
          • Kiro 22 hours ago
            The discourse around LLMs has created this notion that humans are not lazy and write perfect code. They get compared to an ideal programmer instead of real devs.
            • joe_mamba 21 hours ago
              This. The hacks, shortcuts and bugs I saw in our product code after I got hired were stuff every LLM would tell you not to do.
            • salawat 22 hours ago
              LLMs at best asymptotically approach a human doing the same task. They are trained on the best and the worst. Nothing they output deserves faith other than what can be proven beyond a shadow of a doubt with your own eyes and tooling. I'll say the same thing to anyone vibe coding that I'd say to the programmatically illiterate: trust this only insofar as you can prove it works, and you can stay ahead of the machine. Dabble if you want, but to use something safely enough to rely on it, you need to be 10% smarter than it is.
            • gxs 21 hours ago
              Amen. On top of that, especially now, with good prompting you can get closer to that ideal programmer than you think.
        • Aurornis 17 hours ago
          > LLMs definitely write more robust code than most.

          I've been using Opus 4.6 and GPT-Codex-5.3 daily, and I see plenty of hacks and problems all day long.

          I think this is missing the point. The code in this product might be robust in the sense that it follows documentation and does things without hacks, but the things it's doing are a mismatch for what is needed in the situation.

          It might be perfectly structured code, but it uses hardcoded shared credentials.

          A skilled operator could have directed it to do the right things and implement something secure, but an unskilled operator doesn't even know how to specify the right requirements.
      • lukan 1 day ago
        And the cheapest engineering firm won't use LLMs as well, wherever possible?
        • fc417fc802 23 hours ago
          The cheapest engineering firm will turn out to be headed up by an openclaw instance.
        • TheRealPomax 1 day ago
          Fun fact: LLMs come in "cheapest and useless" and "expensive but actually does what's being asked", too.

          So, will they? Probably. Can you trust the kind of LLM that *you* would use to do a better job than the cheapest firm? Absolutely.
      • this.
    • girvo 18 hours ago
      Oh gosh, anyone who thinks LLMs make firmware free hasn't seriously tried to use them for firmware engineering.
  • dnw 1 day ago
    I would love to see the prompt history. I'm always curious how much human intervention/guidance is necessary for this type of work, because when I read the article I come away thinking "I prompt Claude and it comes out with all these results." For example: "So Claude went after the app instead. Grabbed the Android APK, decompiled it with jadx." All by itself, or did the author have to suggest and fiddle with bits?
    • Very little intervention tbh. I will try to retrieve it and post.
      • selkin 1 day ago
        By default, Claude Code keeps session history (as jsonl files in ~/.claude).

        It's wasteful not to save and learn from those.
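Those session transcripts are plain JSONL, one JSON object per line, so mining them takes little code. A minimal sketch in Python, assuming the `~/.claude/projects/` location mentioned in this thread; the exact record schema varies by Claude Code version, so unparseable lines are skipped rather than trusted:

```python
# Sketch: locate and parse Claude Code session transcripts (JSONL).
# Paths follow the comments above; record fields are version-dependent.
import json
from pathlib import Path

def load_session(path: Path) -> list[dict]:
    """Parse a .jsonl transcript, one JSON object per non-empty line."""
    records = []
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # tolerate partial or corrupt lines
    return records

def list_sessions(root: Path = Path.home() / ".claude" / "projects") -> list[Path]:
    """Find saved session files, newest first."""
    return sorted(root.rglob("*.jsonl"),
                  key=lambda p: p.stat().st_mtime, reverse=True)
```

Usage would be something like `load_session(list_sessions()[0])` to inspect the latest session's records.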
        • dnw 20 hours ago
          Check this out: https://github.com/kulesh/catsyphon
      • dnw 20 hours ago
        That's great to hear. I'd be interested to see the session. Yes, Claude Code keeps sessions in ~/.claude/projects/ by default. Thank you!
        • minimalthinker 17 hours ago
          Here you go: https://gist.github.com/aimihat/a206289b356cac88e2810654adf06a55
    • cyanydeez 1 day ago
      There really is a dearth of livestreams demonstrating these things. You'd think if there's so much unaided AI work, people would stream it.
      • ndespres 6 hours ago
        Why would anyone watch a live stream of someone else poking a computer into completing a task? It’s barely more interesting than having someone tell you about a dream they had.
        • cyanydeez 3 hours ago
          Fantastic claims require fantastic evidence.
  • wildylion 7 hours ago
    > and send them electric impulses in their sleep

    So, it's like Lovense, but for dreams?

    Sorry, I know it's horrible, but I couldn't resist.
  • yumraj 21 hours ago
    While most comments are focused on the issue that they found, I’m more intrigued by the fact that Claude was able to reverse engineer so well.<p>Lowering the skills bar needed to reverse engineer at this level could have its own AI-related implications.
    • flutas 10 hours ago
      One of my earlier experiences with Codex was actually reverse engineering, far before it was good at actual coding.

      It was able to decompile a React Native app (the Tesla Android app) and fully trace from "How does X UI display?" down to a network call with a payload for me to intercept.

      Granted, it did it by splitting the binary into a billion txt files, with each one being a single function, and then rg-ing through them. But it worked.
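The split-then-grep trick above is crude but effective, and easy to sketch. A hedged reconstruction in Python: the function-header regex below is a guess at Java-ish decompiler output (real output would need a pattern tuned to the actual tool), and `grep_functions` stands in for the rg step:

```python
# Rough reconstruction: cut a decompiled dump into one chunk per function,
# then search chunks for a needle (a URL, a key name, etc.).
# FUNC_HEADER is an assumed pattern for Java-like decompiler output.
import re

FUNC_HEADER = re.compile(
    r"^\s*(?:public|private|protected|static).*\(.*\)\s*\{", re.M)

def split_functions(source: str) -> list[str]:
    """Split source text at each function header; chunk 0 is the preamble."""
    starts = [m.start() for m in FUNC_HEADER.finditer(source)]
    if not starts:
        return [source]
    bounds = [0] + starts + [len(source)]
    return [source[a:b] for a, b in zip(bounds, bounds[1:])]

def grep_functions(source: str, needle: str) -> list[str]:
    """Return only the function chunks mentioning the needle."""
    return [chunk for chunk in split_functions(source) if needle in chunk]
```

Once each chunk is roughly one function, a match localizes the behaviour you care about (e.g. searching for `https://` finds the network-call sites) without feeding the model the whole dump at once.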
      • madeofpalk 4 hours ago
        I heard about this and tried quite a bit to reverse engineer a decompiled binary from a big game to find struct/schema information, but could never get anything useful.
    • yieldcrv 13 hours ago
      I love that it shows you the thought process that to a Senior or Staff level person would be expected to know in their approach to a reverse engineering problem with no documentation<p>Levels up the way I think about things
    • Neywiny17 hours ago
      I wholeheartedly disagree. Running strings and a decompiler explicitly written for that language is kinda the first thing that comes to mind. Trying hundreds of random ways to talk to it before even doing any real reverse engineering is just a waste of compute. You&#x27;re never going to guess the JSON to send to it or the random bytes. But it&#x27;s not my tokens getting spent on it so meh
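For context on why running `strings` first pays off: the tool just scans a binary for runs of printable bytes, which is exactly where hardcoded hostnames and credentials tend to live. A rough Python equivalent (the blob below is a made-up stand-in for firmware, not data from the article):

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Rough equivalent of the Unix `strings` tool: pull out runs of
    printable ASCII at least `min_len` bytes long from a binary blob."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# A fake firmware blob with an embedded broker hostname and a JSON command
blob = b"\x00\x01\x02mqtt://broker.example.com\xff\xfe{\"cmd\":\"pulse\"}\x00"
print(extract_strings(blob))  # ['mqtt://broker.example.com', '{"cmd":"pulse"}']
```

The real tool handles wide (UTF-16) strings too, which matters for Windows binaries; this sketch only covers plain ASCII.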
  • rbbydotdev1 day ago
&gt; I was not expecting to end up with the ability to read strangers&#x27; brainwaves and send them electric impulses in their sleep. But here we are.<p>Almost out of a Philip K. Dick novel
    • nephihaha6 hours ago
      Just what I was thinking.<p>China has a recent history of spying on personal data. <a href="https:&#x2F;&#x2F;www.telegraph.co.uk&#x2F;news&#x2F;2026&#x2F;01&#x2F;26&#x2F;china-hacked-downing-street-phones-for-years&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.telegraph.co.uk&#x2F;news&#x2F;2026&#x2F;01&#x2F;26&#x2F;china-hacked-dow...</a>
  • nilsherzig11 hours ago
    Can someone explain the other iot devices using the same broker? I tried cross referencing the feature list, information about the user base, kickstarter origin and flutter app with some search results and I’m pretty sure that I found the company and product in question. But they don’t (publicly) produce iot devices? Sooo I’m wondering if different companies are streaming their data into a shared sink and why they would do that?
    • rglover4 hours ago
      They were scanning BLE so any device using that protocol in range would be picked up. Similar to seeing your neighbor&#x27;s Wi-Fi router from your couch.
  • SubiculumCode1 day ago
    How about complaining that brain waves get sent to a server? I&#x27;m a neuroscientist, so I&#x27;m not going to say that the EEG data is mind reading or anything, but as a precedent, non privacy of brain data is very bad.
    • willturman1 day ago
      Non-privacy of <i>this person is currently sleeping</i> data is very bad as well, for different reasons.<p>You know, now that I&#x27;m thinking about it, I&#x27;m beginning to wonder if poor data privacy could have some negative effects.
      • thayne19 hours ago
        It sounds like there was &quot;presence in room&quot; data as well, which could be very bad
      • andai12 hours ago
        This is the easiest signal though, on basically any account. You can see the time that communication happens, and the times when it doesn&#x27;t.<p>For example a while back I wanted to map out my sleep cycle and I found a tool that charts your browser history over a 24 hour period, and it mapped almost perfectly to my sleep &#x2F; wake periods.
      • fc417fc80223 hours ago
        Unsecured fitness monitor data revealed military guard post (IIRC) activity a while back.
        • werrett23 hours ago
          Yawp. T’was Strava. <a href="https:&#x2F;&#x2F;www.theguardian.com&#x2F;world&#x2F;2018&#x2F;jan&#x2F;28&#x2F;fitness-tracking-app-gives-away-location-of-secret-us-army-bases" rel="nofollow">https:&#x2F;&#x2F;www.theguardian.com&#x2F;world&#x2F;2018&#x2F;jan&#x2F;28&#x2F;fitness-tracki...</a>
        • iririririr20 hours ago
Not because you knew how much someone worked out, but because it had GPS.
          • fc417fc80215 hours ago
            True.<p>But keep in mind that other less obvious data sources can often lead to similar issues. For example phone accelerometer data can be used to precisely locate someone driving in a car in a city by comparing it with a street map.<p>In the context of the military even just inferring a comprehensive map of which people are on which shift and when they change might be considered a threat.
    • People will be lining up to have their brainwaves harvested because it&#x27;ll be mildly easier to send emails or something similarly inane.
      • RobotToaster23 hours ago
        Corporations will be lining up to require their employees have their brainwaves harvested, so they can fire employees who aren&#x27;t alert enough.
        • kyleee14 hours ago
          Will someone invent the equivalent of a mouse jiggler to get around this?
    • delichon1 day ago
      You could read the alertness level from an EEG, which could be helpful to a burglar. The device with slow-wave status seems ideal.
    • amarant1 day ago
      How useful could something like this be for research? I&#x27;m not a neuroscientist so I have no clue, but it seems like the only justification I can think of..
      • mattkrause23 hours ago
        The general idea of an EEG system that posts data to a network?<p>Very, but there are already tons of them at lots of different price, quality, openness levels. A lot of manufacturers have their own protocols; there are also quasi&#x2F;standards like Lab Streaming Layer for connecting to a hodgepodge of devices.<p>This particular data?<p>Probably not so useful. While it’s easy to get <i>something</i> out of an EEG set, it takes some work to get good quality data that’s not riddled with noise (mains hum, muscle artifacts, blinks, etc). Plus, brain waves on their own aren’t particularly interesting—-it’s seeing how they change in response to some external or internal event that tells us about the brain.
      • brabel1 day ago
        Not a neuroscientist either but I would imagine that raw data without personal information would not be useful for much. I can imagine that it would be quite valuable if accompanied with personal data plus user reports about how they slept each night, what they dreamed about if anything, whether it was positive dreams or nightmares etc. And I think quite a few people wouldn’t mind sharing all of that in the name of science, but in this case they don’t seem to have even tried to ask.
        • iberator22 hours ago
What if you think about your social security number 30,000 times in your dreams, and someone knows the pattern? See the danger? That&#x27;s evil.
      • I believe they use it for sleep tracking
      • AnimalMuppet1 day ago
        If they&#x27;re taking patient data for research without permission, they are not ethical researchers.
        • sneak23 hours ago
          Is it really “without permission” if it’s from a server for which the access credentials have been deliberately published to the entire internet?
          • AnimalMuppet21 hours ago
            If it&#x27;s without the <i>patient&#x27;s</i> permission, then yes, it is without the only permission that matters for medical ethics.
    • I would presume data privacy laws already have good precedent for health data?
      • baby_souffle1 day ago
&gt; I would presume data privacy laws already have good precedent for health data?<p>Google for a list of all the exceptions to HIPAA. There are a lot of things that _seem_ like they should be covered by HIPAA but are not...
      • freedomben1 day ago
        Only for &quot;covered entities&quot; under HIPAA (at least in the US)
    • zephen20 hours ago
      &quot;Broker&quot; is right there in the title of the post.<p>Baby&#x27;s gotta get some cash somewhere.
      • Kuinox20 hours ago
An MQTT broker is just the server; that&#x27;s MQTT terminology.
        • zephen19 hours ago
          Dark humor is like food.<p>Not everybody gets it.
          • Kuinox19 hours ago
            Here it&#x27;s more Poe&#x27;s law.
    • sneak23 hours ago
      Millions of people voluntarily use Gmail which gives a <i>lot</i> more useful data than EEG output to DHS et al without a warrant under FAA702. What makes you think people who “have nothing to hide” would care about publishing their EEG data?
  • autoexec1 day ago
    This guy bought an internet connected sleep mask so it&#x27;s not surprising that it was collecting all kinds of data, or that it was doing it insecurely (everyone should expect IoT anything to be a security nightmare) so to me the surprising thing about this is that the company actually bothered to worry about saving bandwidth&#x2F;power and went through the trouble of using MQTT. Probably not the best choice, and they didn&#x27;t bother to do it securely, but I&#x27;m genuinely impressed that they even tried to be efficient while sucking up people&#x27;s personal data.
    • 8n4vidtmkvmk23 hours ago
      Meanwhile streaming <i>everyone&#x27;s</i> data, negating any benefit.
  • simonbw1 day ago
    Ok, obviously unethical to do it, but this sounds like you&#x27;ve got the power to create some sci-fi shared dreaming device, where you can read people&#x27;s brainwaves and send signals to other people&#x27;s masks based on those signals. Or send signals to everyone at the same time and suddenly people all across the world experience some change in their dream simultaneously.<p>Like, don&#x27;t actually do it, but I feel like there&#x27;s inspiration for a sci-fi novel or short story there.
    • ddtaylor19 hours ago
      I feel if you&#x27;re doing something that will require a Hans Zimmer soundtrack you might be the bad guy.
    • pjerem21 hours ago
      That’s the plot of Paprika.
    • StanislavPetrov21 hours ago
      Dreamscape, 1984
    • billylo23 hours ago
      Inception
    • darba21 hours ago
      [dead]
  • abeppu17 hours ago
Ok so obviously this is a security disaster. But also ... is there a hackable consumer EEG device that gets useful data and is as comfortable as a sleep mask (and presumably you&#x27;re not slathering on electrode gel every time you put on your sleep mask)? Cuz once the thing can&#x27;t phone home, that sounds pretty cool.
  • Larrikin23 hours ago
This feels like a reason to buy the device to me? I would want to block all of the data going to the cloud and would only want operations happening locally. But the MQTT broadcast then allows me to create a local only integration in Home Assistant with all of the data.<p>What&#x27;s the real risk profile? Robbers can see you are asleep instead of waiting until you aren&#x27;t home?<p>I have not implemented MQTT automations myself, but is there a way to encrypt them? That could be a nice-to-have
    • matthewfcarlson23 hours ago
      Sounds like you cannot control which MQTT endpoint it is headed to? It just goes to the server of the device. Assuming you could modify the firmware, you could program it to send to a local MQTT.
      • erazor4223 hours ago
Simpler: just update your local network DNS so whatevercompany.brain.com redirects to your local MQTT broker at 10.0.0.3
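If the LAN&#x27;s resolver is dnsmasq, that override is one line (hostname taken from the comment above and hypothetical; this only works because the device resolves its broker by name and, per the post, doesn&#x27;t use TLS):

```conf
# dnsmasq.conf: answer the device's hardcoded broker hostname with a LAN IP
address=/whatevercompany.brain.com/10.0.0.3
```

The device also has to be using the DHCP-provided DNS server rather than a hardcoded resolver or a raw IP, which would need a firewall redirect instead.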
      • wongogue12 hours ago
        With no encryption, this isn’t a problem.
    • andai12 hours ago
      I thought the author was going to change the hardcoded server (or override DNS) and set up his own.
  • pedalpete20 hours ago
I&#x27;m the founder of neurotech&#x2F;sleeptech company <a href="https:&#x2F;&#x2F;affectablesleep.com" rel="nofollow">https:&#x2F;&#x2F;affectablesleep.com</a>, and this post shows the major issue with current wellness device regulation.<p>I believe there was some good that came from last month&#x27;s decision to be more open about what apps and data can say without going through huge regulatory processes (though because we apply auditory stimulation, this doesn&#x27;t apply to us). However, there should at least be regulatory requirements for data security.<p>We&#x27;ve developed all of our algorithms and processing to happen on device, which is required anyway due to the latency which would result from bluetooth connections, but even the data sent to the server is all encrypted. I&#x27;d think that would be the basics. How do you trust a company with monitoring, and apparently providing stimulation, if they don&#x27;t take these simple steps?
  • thedougd5 hours ago
    Agents are excellent for reverse engineering. I was also recently working on a BLE reverse engineering exercise and followed a similar path. I ran into lots of headaches with BLE on my Mac and tabled it.<p>Author or others who know, did you perform this on Linux? I imagine it lacks the tooling challenges I had with BLE on MacOS.
    • minimalthinker5 hours ago
      It was on a MBP, didn’t run into any issues
      • thedougd1 hour ago
        What sort of tools did it use? I suppose the path mine took may have been a dead end. The Tuya app (I was also using decompiled APK) downloads the BLE definitions on-demand and weren&#x27;t embedded in the app. It wanted me to capture traffic on a device with the app. I punted but plan to resume with an emulator setup or real device connected with adb.
  • basedrum1 day ago
    Name the company, hiding it is irresponsible
    • Jolter1 day ago
      Author doesn’t spell out why they are not naming them, but my guess is they are trying to not promote the product to malicious actors who would be interested in the sleep data of others.<p>I guess that’s not a huge problem, though, since all users are presumably at least anonymous.
      • bstsb22 hours ago
        less sleep data, i imagine, and more the whole “send remote electrical impulses” thing
    • brabel1 day ago
      It’s probably safe to assume they are all like that.
  • huh, not sure if life imitates snark and bull <a href="https:&#x2F;&#x2F;medium.com&#x2F;luminasticity&#x2F;great-products-of-illuminati-ganga-8f1698c06a53" rel="nofollow">https:&#x2F;&#x2F;medium.com&#x2F;luminasticity&#x2F;great-products-of-illuminat...</a><p>&quot;The ZZZ mask is an intelligent sleep mask — it allows you to sleep less while sleeping deeper. That’s the premise — but really it is a paradigm breaking computer that allows full automation and control over the sleep process, including access to dreamtime.&quot;<p>or if this is another scifi variation of the same theme, with some dev like embellishments.
    • mrguyorama1 day ago
      That is the premise of HypnoSpace Outlaw, a neat game about 90s internet nostalgia and scifi.
  • tomsmithtld1 day ago
    the shared MQTT credentials pattern is unfortunately super common in budget IoT. seen the exact same thing in smart plugs and air quality sensors. the frustrating part is per-device auth is not even hard to set up, mosquitto supports client certs and topic ACLs with minimal config. manufacturers skip it because per-device key provisioning adds a step to the assembly line and nobody wants to think about key management. so they hardcode one set of creds and hope nobody runs strings on the binary.
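For reference, the per-device setup described above really is only a few lines of Mosquitto configuration (file paths and the topic scheme are hypothetical):

```conf
# mosquitto.conf -- require a per-device TLS client certificate
listener 8883
cafile /etc/mosquitto/ca.crt
certfile /etc/mosquitto/server.crt
keyfile /etc/mosquitto/server.key
require_certificate true
use_identity_as_username true
acl_file /etc/mosquitto/acl
```

With `use_identity_as_username`, the client certificate&#x27;s CN becomes the MQTT username, so a single `pattern` rule in the ACL file scopes each device to its own subtree (`%u` expands to that username):

```conf
pattern readwrite devices/%u/#
```

The remaining work is provisioning a unique certificate per device on the assembly line, which is the step manufacturers tend to skip.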
    • RyJones13 hours ago
Why is it that almost all OBD-II dongles you buy have the same MAC address? If you buy two, one for each car, your app can never tell which car you&#x27;re connected to.<p>They all come with Bluetooth certified logos, as well.<p>The ones that don&#x27;t reuse everything cost like $120, not $15.
  • baby_souffle1 day ago
    Well that’s a brand new sentence.
    • amelius1 day ago
      But not a beautiful sentence.
  • Jang-woo13 hours ago
    Really interesting read. This feels less like a security bug and more like a missing execution boundary.
  • victor10613 hours ago
    I asked ChatGPT which product this could be and it came up with<p><a href="https:&#x2F;&#x2F;www.kickstarter.com&#x2F;projects&#x2F;selepu&#x2F;dreampilot-ai-guided-sleep-mask" rel="nofollow">https:&#x2F;&#x2F;www.kickstarter.com&#x2F;projects&#x2F;selepu&#x2F;dreampilot-ai-gu...</a><p>Claude could not tell which one
  • PunchyHamster9 hours ago
    &gt; For obvious reasons, I am not naming the product&#x2F;company here, but have reached out to inform them about the issue.<p>It&#x27;s working as intended
  • mr_toad10 hours ago
    &gt; I recently got a smart sleep mask from Kickstarter. I was not expecting to end up with the ability to read strangers&#x27; brainwaves and send them electric impulses in their sleep. But here we are.<p>One of the best opening paragraphs in a SF novel that I’ve ever read.<p>Oh, wait.
  • bronlund9 hours ago
    That&#x27;s exactly what I need. A radio transmitter as close as possible to my brain when I sleep.
  • anonymousiam21 hours ago
    The narrator in the article acts as a third person observer and identifies &quot;Claude&quot; as the active hacker. So assuming the (unidentified) company that sells&#x2F;manages the product wants to prosecute a CFAA violation, who do they go after? Was Claude the one responsible for all of the hacking?
    • arter452 hours ago
What do you mean? IANAL, but Claude doesn&#x27;t just &quot;wake up&quot; (whatever that means) and decide to reverse engineer&#x2F;hack stuff, so if this is a CFAA violation the person who prompted Claude is indeed responsible. At best, one could argue that the company producing Claude is partially responsible because it didn&#x27;t prevent people from using it to reverse engineer stuff, but there&#x27;s no way Claude is &quot;responsible for all of the hacking&quot;, regardless of how many times the blog posts says &quot;Claude did X&quot;.
    • wongogue12 hours ago
      The narrator. It doesn’t matter to the law the kind of intimate relationship you have with your tool.
    • ssener200110 hours ago
      [dead]
  • Insanity16 hours ago
    Reading a blog post where Claude did all the actual work is kinda sad.
  • speedgoose1 day ago
Remember that the S in IoT stands for Security.<p>I have deployed open MQTT to the world for quick prototypes on non-personal (and non-healthcare) data. At one point my cloud provider told me to stop because they didn&#x27;t like it: it could be used for relay DDoS attacks.<p>I would not trust the sleep mask company even if they somehow manage to have some authentication and authorisation on their MQTT.
    • n4bz0r1 day ago
      I don&#x27;t think there is an S in IoT?..
      • BenjiWiebe1 day ago
        Right - the saying indicates that IoT stuff is well known for ignoring security.
        • n4bz0r1 day ago
          Went right over my head :)
          • BenjiWiebe17 minutes ago
            It did get me thinking - maybe there should be IoTS devices, where the S stands for Security. A commitment to updates for a certain amount of time, the source code in escrow to be released when updates&#x2F;support ceases, probably other things I&#x27;m not thinking of.
          • rationalist23 hours ago
            Where I work, the saying is, &quot;The H in ABC stands for Happiness.&quot;<p>(Also, &quot;We&#x27;re not happy until you&#x27;re not happy.&quot;)
          • Terr_14 hours ago
            It does work a lot better with verbal inflection.
      • roysting23 hours ago
        Thank you for your astute observation. :)
      • absoluteunit11 day ago
        Exactly
    • zephen20 hours ago
And the P in IoT stands for Privacy, and the Q for Quality.<p>The K, of course, stands for Ka-ching!
  • digiown1 day ago
    As an aside, it seems cool that the bar to reverse engineering has lowered from all the LLMs. Maybe we&#x27;ll get to take full control of many of these &quot;smart&quot; devices that require proprietary&#x2F;spyware apps and use them in a fully private way. There&#x27;s no excuse that any such apps solely to interact with devices locally need to connect to the internet, like dishwasher.<p><a href="https:&#x2F;&#x2F;www.jeffgeerling.com&#x2F;blog&#x2F;2025&#x2F;i-wont-connect-my-dishwasher-your-stupid-cloud&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.jeffgeerling.com&#x2F;blog&#x2F;2025&#x2F;i-wont-connect-my-dis...</a>
  • morkalork1 day ago
    &gt;Since every device shares the same credentials and the same broker, if you can read someone&#x27;s brainwaves you can also send them electric impulses.<p>Amazing.
  • HeartofCPU9 hours ago
What is the smart sleep mask called?
  • neuroelectron16 hours ago
    OK, but can we get a teledildonics device that records all thrusts onto the Blockchain?
  • secbear19 hours ago
    Amazing to see claude&#x27;s reasoning and process through reversing this
  • dlenski21 hours ago
    I discovered a very similar vulnerability in Mysa smart thermostats a year ago, also involving MQTT, and also allowing me to view and control <i>anyone&#x27;s</i> thermostat <i>anywhere</i> in the world: <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=43392991">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=43392991</a><p>Also discovered during reverse-engineering of the devices’ communications protocols.<p>IoT device security is an utterly shambolic mess.
    • stevage20 hours ago
      That is terrifying. Messing with thermostats could be enough to kill vulnerable people.
      • dlenski19 hours ago
        Yes. An excerpt from my initial email to Mysa&#x27;s security contact…<p>&gt; I stumbled upon these vulnerabilities on one of the coldest days of this winter in Vancouver. An attacker using them could have disabled all Mysa-connected heaters in the America&#x2F;Vancouver timezone in the middle of the night. <i>That would include the heat in the room where my 7-month-old son sleeps.</i>
    • minimalthinker20 hours ago
      I’m not super familiar with MQTT. I wonder how common this is..
      • dlenski20 hours ago
MQTT is a very simple pub&#x2F;sub messaging protocol.<p>It&#x27;s used in an enormous number of IoT devices.<p>The &quot;IoT gateway&quot; service from AWS supports MQTT and a whole lot of IoT devices are tethered to this service specifically.
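The reason one shared set of credentials is so damaging follows from the subscription model just described: any client can subscribe with wildcards and receive everything on the broker. A minimal sketch of MQTT&#x27;s topic-filter matching rules (topic names hypothetical; edge cases like `$`-prefixed system topics are ignored):

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Minimal MQTT topic-filter matching: '+' matches exactly one level,
    '#' matches the rest of the topic (and must be the last filter level)."""
    flevels = filter_.split("/")
    tlevels = topic.split("/")
    for i, f in enumerate(flevels):
        if f == "#":
            return True
        if i >= len(tlevels) or (f != "+" and f != tlevels[i]):
            return False
    return len(flevels) == len(tlevels)

# Subscribing to '#', as shared credentials effectively allow,
# matches every topic on the broker:
print(topic_matches("#", "devices/1234/eeg"))              # True
print(topic_matches("devices/+/eeg", "devices/1234/eeg"))  # True
print(topic_matches("devices/+/eeg", "devices/1234/cmd"))  # False
```

Without per-client ACLs, nothing stops any authenticated client from subscribing to `#` and seeing every device&#x27;s traffic.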
  • nephihaha6 hours ago
    A lot of so called &quot;smart&quot; devices have little or no concept of privacy or personal boundaries built into them.
  • flax1 day ago
This smells like bullshit to me, although I am admittedly not experienced with Claude.<p>I find it difficult to believe that a sleep mask exists with the features listed: &quot;EEG brain monitoring, electrical muscle stimulation around the eyes, vibration, heating, audio.&quot; while also being something you can strap to your face and comfortably sleep in, with battery capacity sufficient for several hours of sleep.<p>I also wonder how Claude probed bluetooth. Does Claude have access to a bluetooth interface? Why? Perhaps it wrote a secondary program then ran that, but the article describes it as Claude probing directly.<p>I&#x27;m also skeptical of Claude&#x27;s ability to produce an accurate reverse-engineered bluetooth protocol. This is at least a little more of an LLM-appropriate task, but I suspect that there was a lot of chaff also produced that the article writer separated from the wheat.<p>If any of this happened at all. No hardware mentioned, no company, no actual protocol description published, no library provided.<p>It makes a nice vague futuristic cyberpunk story, but there&#x27;s no meat on those bones.
    • petercooper19 hours ago
      This isn&#x27;t to the level of the OP, but I just asked Claude <i>&quot;Are there any interesting Bluetooth devices in my vicinity which aren&#x27;t actually mine or ones I am connected to?&quot;</i> and it downloaded a tool called `blueutil` and identified a variety of things.<p>When I complained that the results were boring, it installed a Python package called &#x27;bleak&#x27;, found a set of LED lights (which I assumed are my daughter&#x27;s) and tried to control them. It said the signal was too weak and got me to move around the house, whereupon it connected to them, figured out the protocol, and actually changed the lights while I was sat on her bed - where I am right now. Now I have a new party trick when she gets home! I had no idea they were Bluetooth controlled, nor clearly without any security at all.
    • minimalthinker17 hours ago
      thread with claude: <a href="https:&#x2F;&#x2F;gist.github.com&#x2F;aimihat&#x2F;a206289b356cac88e2810654adf06a55" rel="nofollow">https:&#x2F;&#x2F;gist.github.com&#x2F;aimihat&#x2F;a206289b356cac88e2810654adf0...</a>
    • skibz23 hours ago
      A lot of BLE peripherals are very easy to probe. And there are libraries available for most popular languages that allow you to connect to a peripheral and poke at any exposed internals with little effort.<p>As for the reverse engineering, the author claims that all it took was dumping the strings from the Dart binary to see what was being sent to the bluetooth device. It&#x27;s plausible, and I would give them the benefit of the doubt here.
    • threecheese20 hours ago
Claude could access anything on your device, including system or third party commands for network or signal processing - it may even have their manuals&#x2F;sites&#x2F;man pages in the training set. It&#x27;s remarkably good at figuring things out, and you can watch the reasoning output. There are MCP tools for reverse engineering that can give it even higher level abilities (Ghidra is a popular one).<p>Yesterday I watched it try to work around some filesystem permission restrictions; it tried a lot of things I would never have thought of, and it was eventually successful. I was kinda goading it though.
    • RachelF21 hours ago
      Yes, it is very lacking in details. The Claude output would have been interesting, or a few logs or protocol dumps.<p>The lack of detail makes me suspect the truth of most of the story.
      • d0mine11 hours ago
        <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=47020069">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=47020069</a>
        • RachelF10 hours ago
          wow! Thanks for that.
    • llm_nerd1 day ago
      <a href="https:&#x2F;&#x2F;www.kickstarter.com&#x2F;projects&#x2F;selepu&#x2F;dreampilot-ai-guided-sleep-mask" rel="nofollow">https:&#x2F;&#x2F;www.kickstarter.com&#x2F;projects&#x2F;selepu&#x2F;dreampilot-ai-gu...</a><p>Found that in seconds. EEG, electrical stimulation, heat, audio, etc. Claims a 20 hour battery.<p>As to the Claude interactions, like others I am suspicious and it seems overly idealized and simplified. Claude can&#x27;t search for BT devices, but you could hook it up with an MCP that does that. You can hook it up with a decompiler MCP. And on and on. But it&#x27;s more involved than this story details.
      • flax1 day ago
        That appears to be more than a centimeter thick, and not particularly flexible. It&#x27;s more like ski goggles than a sleep mask.<p>So yeah, a product exists that claims to be a sleep mask with these features. Maybe someone could even sleep while wearing that thing, as long as they sleep on their back and don&#x27;t move around too much. I remain skeptical that it actually does the things it claims and has the battery life it claims. This is kickstarter after all. Regardless, this would qualify as the device in question for the article. Or at least inspiration for it.<p>Without evidence such as wireshark logs, programs, protocol documentation, I&#x27;m not convinced that any of this actually _happened_.
      • orsorna23 hours ago
        Claude, or any good agent, doesn&#x27;t need MCP to do things. As long as it has access to a shell it can craft any command that it needs to fulfill its prompt.
        • llm_nerd23 hours ago
          There are no shell commands to do what is described. I could get Claude to interact with BLE devices, but it did it by writing and running various helper applications, for instance using the Bleak library. So I guess not an MCP per se.
      • kfajdsl17 hours ago
        Not really? I did something similar for a different device recently. It can make files and has access to bash. It&#x27;s perfectly capable of installing packages and writing small scripts basically entirely autonomously. No MCP needed.
    • sublinear23 hours ago
I was originally going to ask something similar, but from a different angle.<p>These blog posts now making the rounds on HN are the usual reverse engineering stories, but made a lot more compelling simply because they involve using AI.<p>Never mind that the AI part isn&#x27;t doing any heavy lifting and is probably just as tedious as not using AI in the first place. I am confused why the author mentions it so prominently. Past authors would not have been so dramatic and would have just waved their hands that they had some trial and error before finding out how the app is built. The focus would have been on the lack of auth and the funny stuff they did before reporting it to the devs.
  • skibz23 hours ago
    It&#x27;s disappointing to see. It doesn&#x27;t take much work to configure a MQTT server to require client certificates for all connections. It does require an extra step in provisioning to give each device a client certificate. But for a commercial product, it&#x27;s inexcusably negligent.<p>Then there&#x27;s hardening your peripheral and central device&#x2F;app against the kinds of spoofing attacks that are described in this blog post.<p>If your peripheral and central device can securely [0] store key material, then (in addition to the standard security features that come with the Bluetooth protocol) one may implement mutual authentication between the central and peripheral devices and, optionally, encryption of the data that is transmitted across that connection.<p>Then, as long as your peripheral and central devices are programmed to only ever respond when presented with signatures that can be verified by a trusted public key, the spoofing and probing demonstrated here simply won&#x27;t work (unless somebody reverse engineers the app running on the central device to change its behaviour after the signature verification has been performed).<p>To protect against that, you&#x27;d have to introduce server-mediated authorisation. On Android, that would require things like the Play Integrity API and app signatures. Then, if the server verifies that the instance of the app running on the central device is unmodified, it can issue a token that the central device can send to the peripheral for verification in addition to the signatures from the previous step.<p>Alternatively, you could also have the server generate the actual command frames that the central device sends to the peripheral. The server would provide the raw command frame and the command frame signed with its own key, which can be verified by the peripheral.<p>I guess I got a bit carried away here. Certainly, not every peripheral needs that level of security. 
But, into which category this device falls, I&#x27;m not sure. On the one hand, it&#x27;s not a security device, like an electronic door lock. And on the other hand, it&#x27;s a very personal peripheral with some unusual capabilities like the electrical muscle stimulation gizmo and the room occupancy sensor.<p>[0]: Like with the Android KeyStore and whichever HSMs are used in microcontrollers, so that keys can&#x27;t be extracted by just dumping strings from a binary.
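A minimal sketch of the &quot;server signs command frames, peripheral verifies&quot; idea above, using a shared HMAC key (the key, frame layout, and tag size are hypothetical; a real design would also need a nonce or counter to prevent replay of previously-signed frames):

```python
import hmac
import hashlib

# Hypothetical per-device key, provisioned at manufacture time
DEVICE_KEY = b"per-device-secret"

def sign_frame(frame: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Server side: append an HMAC-SHA256 tag so the peripheral can check origin."""
    return frame + hmac.new(key, frame, hashlib.sha256).digest()

def verify_frame(signed: bytes, key: bytes = DEVICE_KEY):
    """Peripheral side: return the frame only if the tag verifies, else None."""
    frame, tag = signed[:-32], signed[-32:]
    expected = hmac.new(key, frame, hashlib.sha256).digest()
    return frame if hmac.compare_digest(tag, expected) else None

cmd = b"\x01\x05PULSE"  # hypothetical command frame
signed = sign_frame(cmd)
assert verify_frame(signed) == cmd
# Flip one bit of the tag: verification must fail
tampered = signed[:-1] + bytes([signed[-1] ^ 1])
assert verify_frame(tampered) is None
```

With asymmetric keys instead (server signs, peripheral holds only the public key), extracting one device&#x27;s firmware wouldn&#x27;t let an attacker forge commands for other devices.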
  • SilentM681 day ago
Interesting project. Here&#x27;s a thought which I&#x27;ve always had in the back of my mind, ever since I saw something similar in an episode of Buck Rogers (70s-80s)! Many people struggle with falling asleep due to persistent beta waves; natural theta predominance is needed but often delayed. Imagine an &quot;INEXPENSIVE&quot; smart sleep mask that facilitates sleep onset by inducing brain wave transitions from beta (wakeful, high-frequency) to alpha (8-13 Hz, relaxed) and then theta (4-8 Hz, stage 1 light sleep) via non-invasive stimulation. A solution could be a comfortable eye mask with integrated headphones (unintrusive) and EEG sensors. It could use binaural beats or similar audio stimulation to &quot;inject&quot; alpha&#x2F;theta frequencies externally, guiding the brain to a tipping point for abrupt sleep onset. Sensors would detect current waves; app-controlled audio ramps from alpha-inducing beats to theta, ensuring natural predominance. If it could be designed, it could accelerate the sleep transition and improve sleep quality, all non-pharmacologically.
    • BenjiWiebe1 day ago
      So are the brain waves the cause or the effect?<p>Are beta waves a sign that my mind is racing and wide awake, or are they the reason?
      • SilentM6818 hours ago
        Don&#x27;t know but as AI advances, questions like that may get easier to answer.
    • Jolter1 day ago
      What’s your proposed mechanism for how audio waves would induce brain waves?
      • pixl9719 hours ago
        No idea about audio frequencies close to hearing, but I&#x27;m pretty sure it&#x27;s common to manipulate the brain with ultrasonic frequencies these days.
        • SilentM6818 hours ago
          Yeah, I&#x27;m sure that technology has existed for decades. Common folks just not allowed to know about it. It&#x27;s &quot;for our own good!&quot; sarcastically speaking :(
      • SilentM6818 hours ago
        That&#x27;s a toughie, but if it were me and I had the energy, I&#x27;d start by looking at the following patents:<p>
        - US20030171688A1: Mind controller - Induces alpha&#x2F;theta brainwaves via audio messages.
        - US20070084473A1: Brain wave entrainment in sound - Modulates music for desired brain states.
        - US11309858: Inducing brainwaves by sound - Adjusts volume gains for specific frequencies.
        - US5036858A: Changing brain wave frequency - Generates binaural beats to alter waves.
        - US3951134: Remotely altering brain waves - Monitors and modifies via RF&#x2F;EM waves.
        - US5306228A: Brain wave synchronizer - Uses light&#x2F;sound for entrainment.
        - US6587729: RF hearing effect - Transmits speech via microwaves to brain.
        - US6488617: Desired brain state - Electromagnetic pulses for mind states.
        - US4858612: Microwave hearing simulation - Induces sounds in auditory cortex.
        - US6930235B2: EM to sound waves - Relates waves for brain influence.
        - EP0747080A1: Brain wave inducing - Sine waves via speaker for alpha waves.
        - US5954629A: Brain wave system - Feedback light stimulation.
        - US5954630A: FM theta sound - Superposes low frequencies for theta induction.
        - US5159703A: Silent subliminal - Ultrasonic carriers for brain inducement.
        - US6017302A: Acoustic manipulation - Subaudio pulses for nervous system control.
  • sodapopcan14 hours ago
    Who cares. I&#x27;m so tired.
  • ThouYS23 hours ago
    the headlines these days
  • 4gotunameagain11 hours ago
    &gt; Claude ran strings on the binary and this was the most productive step of the whole session.<p>After $150 in tokens, inflating GPU prices by 10%, spending $550 of VC money, and increasing the earth&#x27;s temperature by 0.2 degC, Claude did what a 16-year-old who read two blog posts about reverse engineering would do.
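For readers unfamiliar with the step being mocked: `strings` just scans a binary for runs of printable characters. A minimal sketch (the sample blob and the minimum run length are illustrative):

```python
import re

# Minimal re-implementation of what `strings` does: find runs of 4 or
# more printable ASCII bytes in a binary blob. This is why it so often
# surfaces hardcoded hostnames and credentials.
MIN_LEN = 4
PRINTABLE_RUN = re.compile(rb"[\x20-\x7e]{%d,}" % MIN_LEN)

def extract_strings(data: bytes):
    return [m.group().decode("ascii") for m in PRINTABLE_RUN.finditer(data)]

# Hypothetical firmware fragment; the short run "AB" is filtered out.
blob = b"\x00\x01MQTT_HOST=broker.example.com\x7f\xffAB\x00key=hunter2\x00"
print(extract_strings(blob))  # ['MQTT_HOST=broker.example.com', 'key=hunter2']
```

That the lowest-effort tool in the reverse-engineering toolbox was the most productive step says more about the firmware than about the tooling.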
    • dash211 hours ago
      I think the number of people who could do this in half an hour is low.
      • therein10 hours ago
        The article says it was the most productive step and credits it to Claude. However, it is indeed what pretty much anyone would do as a first step.
    • azan_7 hours ago
      The impact of AI on environment is overblown.
  • techsocialism22 hours ago
    A &quot;smart sleep mask&quot; :D - what next, smart toilet seats? Oh, wait...<p>It&#x27;s absurd how tied to tech people are everywhere.
  • t3chd33r20 hours ago
    Is this some kind of joke? Claude hallucinated everything, including the device&#x27;s capacity to accurately measure EEG brain waves, and hallucinated the process of decoding the APK for some paranoid user who posted his conspiracy-level AI-hallucinated “finds” to his blog, and everyone is like “Yeah, Claude can do this”. Is everyone here insane? Am I insane?
    • logicprog6 hours ago
      Why do you think it&#x27;s all hallucinated?<p>You have no evidence of that, and it seems very unlikely unless you&#x27;re intentionally wildly assuming the craziest possible scenario, as if you&#x27;re paranoid or insane.<p>You do realize the user can <i>see the tool calls running</i> and check their real, actual output, during this process, right?<p>You do realize that there are <i>several</i> sleep masks on Kickstarter that actually have these features, right?<p>The user has also shared the Claude transcript:<p><a href="https:&#x2F;&#x2F;gist.github.com&#x2F;aimihat&#x2F;a206289b356cac88e2810654adf06a55" rel="nofollow">https:&#x2F;&#x2F;gist.github.com&#x2F;aimihat&#x2F;a206289b356cac88e2810654adf0...</a>
  • roywiggins1 day ago
    cyberpunk
  • mystraline1 day ago
    &gt; For obvious reasons, I am not naming the product&#x2F;company here, but have reached out to inform them about the issue.<p>Coward. The only way to challenge this garbage is &quot;Name and Shame&quot;. Light a fire under their asses. That fire can encourage them to do right, and as a warning to all other companies.<p>My guess is this is Luuna <a href="https:&#x2F;&#x2F;www.kickstarter.com&#x2F;projects&#x2F;flowtimebraintag&#x2F;luuna" rel="nofollow">https:&#x2F;&#x2F;www.kickstarter.com&#x2F;projects&#x2F;flowtimebraintag&#x2F;luuna</a>
    • a4isms1 day ago
      Doesn&#x27;t disclosing this to the world at the same time as you disclose it to the company immediately send hundreds of black hats to their terminals to see how much chaos they can create before the company implements a fix?<p>Perhaps the author is not a coward, but is giving the company time to respond and commit to a fix for the benefit of other owners who could suffer harm.
      • rkagerer1 day ago
        <i>but is giving the company time to respond and commit to a fix for the benefit of other owners who could suffer harm.</i><p>If that&#x27;s the case then they should have deferred this whole blog post.
      • mystraline1 day ago
        It took me 30 seconds with ChatGPT by saying:<p>Identify the kickstarter product talked around in this blog post: (link)<p>To think some blackhat hasn&#x27;t already done that is frankly laughable. What I did was like the lowest of low-bars these days.
        • Barbing1 day ago
          Put the product name in the title &amp; maybe it sends thousands instead of hundreds of blackhats…<p>We often treat doxxing the same way, prohibiting posting of easily discovered information.
          • mystraline1 day ago
            So your plan is to let the blackhats in the know attack user devices, rather than send out a large warning to &quot;Quit using immediately&quot;?<p>If we applied a similar analogy to an E. coli contamination of food, your recommendation amounts to &quot;If we say the company name, the company would be shamed and lose money and people might abuse the food&quot;.<p>People need to know this device is NOT SAFE on your network, paired to your phone, or anything. And that requires direct and public notification.
        • pphysch1 day ago
          And ChatGPT hallucinated a misleading answer that you are confidently regurgitating.
          • croisillon1 day ago
            their original message said &quot;my guess&quot;, not ChatGPT&#x27;s, talk about responsible disclosure...
    • I did consider naming, but they were very responsive to the disclosure and I was not entirely familiar with the potential legal implications of doing so. (For what it&#x27;s worth, it is not Luuna)
      • stavros1 day ago
        Please name 50 other companies it&#x27;s not.<p>It&#x27;s good that they were responsive in the disclosure, but it&#x27;s still a mark of sloppiness that this was done in the first place, and I&#x27;d like to know so I can avoid them.
    • itishappy1 day ago
      I don&#x27;t see estim mentioned on that website, but I do see a comparison chart with 4 other competitors with similar capabilities to the one you linked.<p>What makes you think this is the one?
      • mystraline1 day ago
        <a href="https:&#x2F;&#x2F;meta.wikimedia.org&#x2F;wiki&#x2F;Cunningham%27s_Law" rel="nofollow">https:&#x2F;&#x2F;meta.wikimedia.org&#x2F;wiki&#x2F;Cunningham%27s_Law</a><p>I said a guess, not absolute.
    • everdrive1 day ago
      Even if naming and shaming doesn&#x27;t work, I sure want to know so I can always avoid them for myself and my family. Thanks for the call-out and the educated guess.
    • j451 day ago
      EEG devices can cost a lot to own personally as well.<p>The other side of owning equipment like this is it still could be useful for some for personal and private use.
      • EEG is very useful for accurate sleep tracking.
    • hxbdg1 day ago
      Presumably they’ll be named and shamed after they’ve been given a chance to fix things.
  • intellirim1 day ago
    [dead]
    • plagiarist1 day ago
      It is a governance failure.<p>It is also technically a user failure to have purchased a connected device in the first place. Does the device require a closed-source proprietary app? Closed-source non-replaceable OS? Do not buy it.
      • brabel1 day ago
        Very few options available, if any, if you actually do that. The IoT market is unfortunately small and dominated by vendors that don’t want an open ecosystem at all. That would hinder their ability to force you to pay for a subscription, which is where all the money is.
      • jmb991 day ago
        Yes, that’s right, don’t buy any new car, any phone, any television. Hell don’t buy any x86 laptop or desktop computer, since you can’t disable out replace Intel ME&#x2F;etc.
    • ai-x1 day ago
      There should be two separate lines of products. One in which privacy is priority and adheres to government regulations (around privacy) and probably costs 2x and one with zero government intervention (around privacy) which costs less and time-to-market is faster.<p>I don&#x27;t want a few irrationally paranoid people bottlenecking progress and access to the latest technology and innovation.<p>I&#x27;m happy to broadcast my brainwaves on an open YouTube channel for the ZERO people who are interested in it.
      • drnick123 hours ago
        &gt; I don&#x27;t want a few irrationally paranoid people bottlenecking progress and access to the latest technology and innovation.<p>Paranoid? Is there not enough evidence posted almost daily on HN that tech companies are constantly spying on their users through computers, Internet-of-Shit devices, phones, cars and even washing machines? You might not care about the brainwave data specifically, but there is bound to be information on your devices that you expect remains private.<p>Things have become so bad that I now refuse to use computers that don&#x27;t run a DIY Linux distro like Arch that allows users to decide what goes into their system. My phone runs GrapheneOS because Google and Apple can&#x27;t be trusted. I self host email and other &quot;cloud&quot; services for the same reason.
      • tgv1 day ago
        Explain how sending EEG recordings is progress. And why faster access to the latest tech is always good, for everyone.
      • selkin1 day ago
        otoh: the non-regulated should cost more.<p>It’s kinda like “qualified investors” - you want to make sure people who are willing to do something extremely stupid can afford it and acknowledge their stupidity.<p>We don’t need regulation to protect those that can afford to buy protection: we need it for those who can’t.
  • kevincloudsec23 hours ago
    [dead]
    • roysting23 hours ago
      &gt; nobody budgets time for security architecture on v1<p>It’s quite literally why the internet is so insecure, because at many points all along the way, “hey, should we design and architect for security?” is&#x2F;was met with “no, we have people to impress and careers to advance with parlor tricks to secure more funding; besides, security is hard and we don’t actually know what we are doing, so toe the line or you’ll be removed.”
  • t3chd33r20 hours ago
    [flagged]
  • bobim1 day ago
    Won&#x27;t they sue for the reverse engineering?
    • Jolter1 day ago
      On what grounds could they sue?
      • bobim4 hours ago
        Well, the end user agreement usually contains clauses that forbid it. It&#x27;s tolerated in some geographies for interoperability, research, and infosec, but you already agreed to the ToS.
  • Without a brand name, how can we verify this is real?
    • ohyoutravel1 day ago
      Without any skin in the game with your username, why should we take anything you say seriously?
      • edgarvaldes1 day ago
        Interesting position in a thread about the dangers of exposing yourself to the internet.
  • avanai2 hours ago
    “Ask an LLM to hack your app” should be a production-readiness step from now on.