2 comments

  • dash22 days ago
    The important bit, which they don't talk about, is that custom instructions appear to have been used to trick ChatGPT into providing dangerous shell commands under the guise of helpful support.
  • Source: https://www.huntress.com/blog/amos-stealer-chatgpt-grok-ai-trust (https://news.ycombinator.com/item?id=46227224)