9 comments

  • crabmusket 37 days ago
    This is interesting, and I think worth trying. However,

        The process is iterative:
        Vibe code users <--> Vibe code software
        Step by step, you get closer to truly understanding your users

    Do not fool yourself. This is not "truly" "understanding" your "users". This is a model which may be very useful, but it should not be mistaken for your users themselves.

    Nothing beats feedback from humans, and there's no way around the painstaking effort of customer development to understand how to satisfy their needs using software.
    • bahmboo 37 days ago
      I agree. I do like the general idea as an exploration.

      Perhaps the idea is to use an LLM to emulate users so that some user-facing problems can be detected early.

      It is very frustrating to ship a product and hit a show-stopper right out of the gate that was missed by everyone on the team. It is also sometimes difficult to get accurate feedback from an early user group.
  • elxr 37 days ago
    A bit too vague to be useful advice, don't you think?

    Why not show some actual examples of these agents doing what you describe? How exactly would you set up an agent to simulate a user?
    • crabmusket 37 days ago
      To me it sounds like one way to do this would be to have LLMs write Cucumber test cases. Those are high-level, natural-language tests which could be run in a browser.
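As a rough illustration of that idea, here is a minimal, self-contained Python sketch of a Gherkin-style scenario runner. Real Cucumber step definitions would drive a browser (e.g. via Selenium or Playwright); the step handlers below are stubs, and the scenario text and step names are invented for illustration:

```python
# Hypothetical scenario an LLM might emit in Gherkin-style natural language.
SCENARIO = """\
Given a signed-out visitor
When they open the signup page
And they submit a valid email
Then they see a confirmation message
"""

def run_scenario(scenario: str, steps: dict) -> list[str]:
    """Execute each line against a registry of step handlers; return a log."""
    log = []
    for line in scenario.strip().splitlines():
        # Split off the Gherkin keyword (Given/When/And/Then) from the step text.
        _keyword, _, text = line.partition(" ")
        handler = steps.get(text)
        if handler is None:
            raise KeyError(f"no step defined for: {text!r}")
        log.append(handler())
    return log

# Stub step definitions; real ones would drive a headless browser.
steps = {
    "a signed-out visitor": lambda: "session cleared",
    "they open the signup page": lambda: "GET /signup -> 200",
    "they submit a valid email": lambda: "POST /signup -> 302",
    "they see a confirmation message": lambda: "found 'Check your inbox'",
}

print(run_scenario(SCENARIO, steps))
```

In practice you would let the LLM write only the scenario text and keep the step definitions hand-written and deterministic, so the tests stay reproducible.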
  • oldsj 37 days ago
    I like the idea. As a solo dev I've experimented with creating Claude subagents for multiple perspectives, as "team leads", and will run ideas through them (in parallel). The subagents are just simple markdown explaining the various perspectives that are usually in contention when designing stuff. And a "decider" that gives me an executive summary.

        agents/
        |-- customer-expert.md   - validates problem assumptions, customer reality
        |-- design-lead.md       - shapes solution concepts, ensures UX quality
        |-- growth-expert.md     - competitive landscape, positioning, distribution
        |-- technical-expert.md  - assesses feasibility, identifies technical risks
        |-- decider-advisor.md   - synthesizes perspectives, executive analysis
    • bisonbear 36 days ago
      I've experimented with something similar - my flow is to have the subagents "initialize" a persona for the task at hand, and then have the main thread simulate a debate between the personas. Not sure if it's the best approach, but it's helpful for getting a diversity of perspectives on an issue.
    • nprateem 36 days ago
      s/validates/guesses/

      Ftfy. You might as well toss a coin.
      • oldsj 36 days ago
        I think we’re slightly better than random at this point
        • nprateem 36 days ago
          You really think asking AI to spaff out some bullshit validates anything?
  • wiseowise 37 days ago
    Another fart in the wind. How to write lots of “programming philosophy” and say nothing.
  • invide 37 days ago
    Good point. I think it is time to remove the line between engineering and product management completely. Because we can.
  • canadiantim 37 days ago
    Pretty radical idea, love it! Definitely going to give this a try
  • 3D39739091 37 days ago
    > LLMs likely have a much better understanding of what our users need and want.

    They don't.

    Basically this sounds like Agentic Fuzz Testing. Could it be useful? Sure. Does it have anything to do with what real users need or want? Nope.
  • fainpul 37 days ago
    This is ridiculous. I doubt this would work with a general AI, but it surely cannot work with LLMs, which understand exactly nothing about human behaviour.
    • josters 37 days ago
      They may not understand it, but they may very well be able to reproduce aspects of feedback and comments on similar pieces of software.

      I agree that the approach shouldn't be done unsupervised, but I can imagine it being useful for gaining valuable insights to improve the product before real users even interact with it.
      • fainpul 37 days ago
        > reproduce aspects of feedback and comments on similar pieces of software

        But this is completely worthless or even misleading. There is zero value in this kind of "feedback". It will produce nonsense which sounds believable. You need to talk to real users of your software.
        • ZachSaucier 37 days ago
          Letting the LLM generate use cases is probably not a good idea. But writing common use cases and having a bot crawl your app and then write up how many steps it took to accomplish the goal is a good idea that I hadn't thought of before. You could even set it up as a CI check to make sure that new features which introduce more steps for specific flows are very conscious decisions. In a large application this could be a very useful feature.
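That step-count CI check could be sketched roughly as follows. This is a minimal sketch under stated assumptions: the agent-driven crawl is stubbed out, and the flow names and step counts are invented for illustration:

```python
# Baseline step counts per flow, committed alongside the code so any change
# to them is an explicit, reviewed decision.
BASELINE = {"signup": 3, "checkout": 5}

def measure_flows() -> dict[str, int]:
    """Stub for the agent-driven crawl. A real version would drive the app
    for each named flow and count the interactions (clicks, form submits)
    needed to finish it."""
    return {"signup": 3, "checkout": 6}

def regressions(baseline: dict[str, int], measured: dict[str, int]) -> list[str]:
    """Return the flows that now take more steps than the committed baseline."""
    return [f"{flow}: {baseline[flow]} -> {n} steps"
            for flow, n in measured.items()
            if flow in baseline and n > baseline[flow]]

found = regressions(BASELINE, measure_flows())
for item in found:
    print("flow regression:", item)
# In CI you would exit nonzero when `found` is non-empty, so that flows
# getting longer requires a deliberate baseline update in the same change.
```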
  • Sharanxxxx 37 days ago
    Yeah, I have built a Product Hunt alternative for solo founders, Solo Launches, to give them visibility. It has gathered 290+ users so far, it's free, and it gives a good DR dofollow backlink. https://sololaunches.com