9 comments

  • paxys 40 minutes ago
    This spiel is hilarious in the context of the product this company (https://juno-labs.com/) is pushing – an always-on, always-listening AI device that inserts itself into your and your family's private lives.

    "Oh but they only run on local hardware…"

    Okay, but that doesn't mean every aspect of our lives needs to be recorded and analyzed by an AI.

    Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

    Have all your guests consented to this?

    What happens when someone breaks in and steals the box?

    What if the government wants to take a look at the data in there and serves a warrant?

    What if a large company comes knocking and makes an acquisition offer? Will all the privacy guarantees still stand in the face of the $$$?
    • ajuhasz 8 minutes ago
      > Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?

      One of our core architecture decisions was to use a streaming speech-to-text model. At any given time about 80 ms of actual audio is in memory, along with about 5 minutes of transcribed audio (text); this helps the STT model know the context of the audio for higher transcription accuracy.

      Of these 5-minute transcripts, those that don't become memories are forgotten. So only selected extracted memories are durably stored. Currently we store the transcript with the memory (this was a request from our prototype users to help them build confidence in the transcription accuracy), but we'll continue to iterate based on feedback on whether this is the correct decision.
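      A minimal sketch of what that rolling-window pipeline could look like (the stt_model, extract_memory, and store_durably pieces here are hypothetical stand-ins, not our actual code):

        import time
        from collections import deque

        TRANSCRIPT_WINDOW_S = 5 * 60  # rolling text context kept for the STT model

        class StreamingTranscriber:
            def __init__(self, stt_model, extract_memory, store_durably):
                self.stt = stt_model                  # streaming speech-to-text model
                self.extract_memory = extract_memory  # returns a memory or None
                self.store_durably = store_durably    # persists selected memories
                self.transcript = deque()             # (timestamp, text) pairs

            def on_audio_chunk(self, chunk_80ms):
                # Transcribe the ~80 ms chunk using the rolling transcript as context.
                context = " ".join(text for _, text in self.transcript)
                text = self.stt.transcribe(chunk_80ms, context=context)
                del chunk_80ms  # the raw audio never outlives this call

                now = time.monotonic()
                self.transcript.append((now, text))

                # Text older than 5 minutes either becomes a memory or is forgotten.
                while self.transcript and now - self.transcript[0][0] > TRANSCRIPT_WINDOW_S:
                    _, old_text = self.transcript.popleft()
                    memory = self.extract_memory(old_text)
                    if memory is not None:
                        # Store the source transcript alongside the memory
                        # (a request from our prototype users).
                        self.store_durably(memory, source_transcript=old_text)
                    # else: the text is simply gone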
    • zmmmmm 9 minutes ago
      The fundamental problem with a lot of this is that the legal system is absolute: if information exists, it is accessible. If the courts order it, nothing you can do can prevent the information being handed over, even if that means a raid of your physical premises. Unless you encrypt it in a manner resistant to any way you can be compelled to decrypt it, the only way to have privacy is for information not to exist in the first place. It's a bit sad that, as the potential for technology to assist us grows, this may turn out to be the limit on how much we can fully take advantage of it.

      I do sometimes wish it would be seen as an enlightened policy to legislate that personal private information held in technical devices is legally treated the same as information held in your brain. Especially for people for whom assistive technology is essential (deaf, blind, etc.). But everything we see says the wind is blowing the opposite way.
  • BoxFour 43 minutes ago
    This strikes me as a pretty weak rationalization for "safe" always-on assistants. Even if the model runs locally, there's still a serious privacy issue: unwitting bystanders having everything they say recorded.

    Friends at your house who value their privacy probably won't feel great knowing you've potentially got a transcript of things they said just because they were in the room. Sure, it's still better than also sending everything up to OpenAI, but that doesn't make it harmless or less creepy.

    Unless you've got super-reliable speaker diarization and can truly ensure only opted-in voices are processed, it's hard to see how any always-listening setup *ever* sits well with people who value their privacy.
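    A minimal sketch of the kind of opt-in gate that would be needed, assuming a diarization model that emits per-segment speaker embeddings (the threshold and all names here are hypothetical):

      import numpy as np

      OPT_IN_THRESHOLD = 0.8  # similarity cutoff; choosing this reliably is the hard part

      def cosine(a, b):
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      def should_process(segment_embedding, enrolled_embeddings):
          # Process a diarized segment only if its speaker embedding matches
          # someone who explicitly enrolled (opted in); unknown voices are dropped.
          return any(cosine(segment_embedding, ref) >= OPT_IN_THRESHOLD
                     for ref in enrolled_embeddings)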
    • ajuhasz 37 minutes ago
      We give an overview of our current memory architecture at https://juno-labs.com/blogs/building-memory-for-an-always-on-ai-that-listens-to-your-kitchen

      This is something we call out under the "What we got wrong" section. We're currently collecting an audio dataset that should help create a speech-to-text (STT) model that incorporates speaker identification, and that tag will be woven into the core of the memory architecture.

      > The shared household memory pool creates privacy situations we're still working through. The current design has everyone in the family sharing the same memory corpus. Should a child be able to see a memory their parents created? Our current answer is to deliberately tune the memory extraction to be household-wide with no per-person scoping, because a kitchen device hears everyone equally. But "deliberately chose" doesn't mean "solved." We're hoping our in-house STT will allow us to do per-person memory tagging, and then we can experiment with scoping memories to certain people or groups of people in the household.
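      To make the scoping idea concrete, here is a rough sketch of the shape per-person tagging could take once speaker IDs exist (the Memory fields and scope values are illustrative, not our shipped schema):

        from dataclasses import dataclass, field

        @dataclass
        class Memory:
            text: str
            speaker_ids: set = field(default_factory=set)  # from future STT speaker tags
            scope: str = "household"  # today everything is household-wide

        def visible_memories(memories, viewer_id, household_ids):
            for m in memories:
                if m.scope == "household" and viewer_id in household_ids:
                    yield m  # current behavior: the whole family shares the corpus
                elif m.scope == "speakers" and viewer_id in m.speaker_ids:
                    yield m  # possible future: scoped to the people who were speaking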
  • zmmmmm 16 minutes ago
    It's interesting to me that there seems to be an implicit line being drawn between video and audio around what's acceptable and what's not.

    If there's a camera in an AI device (like Meta Ray-Ban glasses) then there's a light when it's on, and they are going out of their way to engineer it to be tamper-resistant.

    But audio seems to be on the other side of the line. Passive listening to ambient audio is being treated as something that doesn't need active consent, flashing lights, or other privacy-preserving measures. And it's true, it's fundamentally different, because I have to make a proactive choice to speak, but I can't avoid being visible. So you can construct a logical argument for it.

    I'm curious how this will really go down as these become pervasively available. Microphones are pretty easy to embed almost invisibly into wearables. A lot of them already have them. They don't use a lot of power, and it won't be too hard to just have them always on. If we settle on this as the line, what's it going to mean that everything you say, everywhere, will be presumed recorded? Is that OK?
    • BoxFour 5 minutes ago
      > Passive listening to ambient audio is being treated as something that doesn't need active consent

      That's not accurate. There are plenty of states that require everyone involved to consent to a recording of a private conversation. California, for example.

      Voice assistants today skirt around that because of the wake word, but always-on recording obviously negates that defense.
  • sxp 1 hour ago
    The article is forgetting about Anthropic, which currently has the best agentic programmer and was the backbone for the recent OpenClaw assistants.
    • ajuhasz 1 hour ago
      True. We focused on hardware-embodied AI assistants (smart speakers, smart glasses, etc.) as those are the ones we believe will soon start leaving wake words behind and moving towards an always-on interaction design. The privacy implications of an always-listening smart speaker are orders of magnitude higher than those of OpenClaw, which you interact with intentionally.
      • iugtmkbdfil834 46 minutes ago
        This. Kids already have tons of those gadgets on them. Previously I only really had to worry about a cell phone, so even if someone was visiting it was a simple case of "plop all electronics here," but now with glasses I'm not even sure how to reasonably approach this, short of not allowing them, period. Eh, brave new world.
  • NickJLange 58 minutes ago
    This isn't a technology issue. Regulation is the only sane way to address it.

    For once, we (as the technologists) have a free translator into layman's terms via the frontier LLMs, which can be an opportunity to educate the masses about the exact world on the horizon.
  • FeteCommuniste 1 hour ago
    The level of trust I have in a promise made by any existing AI company that such a device would never phone home: 0.
    • rimbo789 1 hour ago
      Ads in AI should be banned right now. We need to learn from the mistakes of the internet (crypto, Facebook) and aggressively regulate early and often, before this gets too institutionalized to remove.
      • nancyminusone 55 minutes ago
        They did learn. That's why they are adding ads.
      • doomslayer999 48 minutes ago
        Boomers in government would be clueless about how to properly regulate and create the correct incentives. Hell, that's still a bold ask for tech and economics geniuses with the best of intentions.
        • irishcoffee 42 minutes ago
          Would that be the same cohort of boomers jamming LLMs up our collective asses? So they don't understand how to regulate a technology they don't understand, but fucking by golly you're going to be left behind if you don't use it?

          This is like a shitty Disney movie.
          • doomslayer999 35 minutes ago
            It's mostly SV grifters who shoved LLMs up our asses. They then get in cahoots with boomers in the government to create policies and "investment schemes" that inflate their stock in a Ponzi-like fashion and regulate away competition. Why do you think Trump has some no-name crypto firm, why Thiel has Vance as his whipping boy, and why Elon spent a fortune trying to get Trump to win? This is a multiparty thing, as most politicians are heavily bought and paid for.
      • kalterdev 1 hour ago
        Ads (at least in the classical pre-AI sense) are orders of magnitude better than preventive laws.
  • kleiba 1 hour ago
    Always-on is incompatible with data protection rights such as the GDPR in Europe.
    • ajuhasz 58 minutes ago
      With cloud-based inference, we agree. That's just one more benefit of doing everything with "edge" inference (on-device, inside the home), as we do with Juno.
  • doomslayer999 46 minutes ago
    Who would buy OpenAI's spy device? I think a lot of the public discourse and backlash about the greedy, anticompetitive, and exploitative practices of the Silicon Valley elite has gone mainstream and will hopefully course-correct the industry in time.