2 comments

  • ChrisArchitect 55 days ago
    Source: https://www.huntress.com/blog/amos-stealer-chatgpt-grok-ai-trust (https://news.ycombinator.com/item?id=46227224)
  • dash2 56 days ago
    The important bit, which they don't talk about, is that custom instructions seem to have been used to trick ChatGPT into giving dangerous shell commands as help.