The important bit, which the article doesn't discuss, is that custom instructions appear to have been used to trick ChatGPT into supplying dangerous shell commands under the guise of help.
Source: https://www.huntress.com/blog/amos-stealer-chatgpt-grok-ai-trust (https://news.ycombinator.com/item?id=46227224)