You can run local models that will do this without being gaslit.
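If you haven't done it before, it's a few lines with Hugging Face transformers. A minimal sketch; the checkpoint name is just an illustration, substitute whatever model you actually run locally:

    # Minimal local inference sketch with Hugging Face transformers.
    # The checkpoint name is illustrative; any locally downloaded model works.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-7B-Instruct"  # assumption: swap in your own checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    messages = [{"role": "user", "content": "Summarize the plot of Zork."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Generate a reply and decode only the newly produced tokens.
    output = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))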
Manipulating chatbots into bypassing their refusal conditioning is pretty simple; you can find copy-paste blocks of text that will work on most public models.
You’re likely to get your account banned, though, as there are separate, non-LLM systems that scan your chat logs for banned terms specifically to catch these kinds of jailbreaks.
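Something as dumb as a regex pass over the logs would catch most of the copy-paste jailbreaks. A toy sketch of what such a non-LLM scanner might look like; the patterns are invented for illustration, vendors obviously don't publish their term lists:

    # Toy sketch of a non-LLM chat-log scanner: a plain regex pass that flags
    # matching messages for review. Patterns are invented for illustration only.
    import re

    FLAGGED_PATTERNS = [
        r"ignore (all )?previous instructions",
        r"\bDAN\b",  # a well-known jailbreak persona
    ]

    def scan_chatlog(messages):
        """Return (message_index, pattern) pairs for every match."""
        hits = []
        for i, msg in enumerate(messages):
            for pat in FLAGGED_PATTERNS:
                if re.search(pat, msg, flags=re.IGNORECASE):
                    hits.append((i, pat))
        return hits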
I tried it with an uncensored version of Qwen; it straight up told me how to tie a noose and how to make sure the knot would be effective enough to kill me. I could even ask it for a more painful method, and it gave me one.
Our results reveal that all tested models achieve less than 10% completion on average, with even the best-performing model (Claude Opus 4.5) reaching only approximately 75 out of 350 possible points.
You are likely to be eaten by a grue.
Interestingly, LLMs are horrible at Zork: https://arxiv.org/abs/2602.15867
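For anyone who wants to reproduce the setup, the standard way to wire an LLM to Zork is the Jericho interactive-fiction library. A minimal sketch; query_llm() here is a hypothetical stand-in for whatever model API you use, and the game file path is an assumption:

    # Minimal sketch of an LLM-plays-Zork loop using the Jericho library.
    # query_llm() is a hypothetical stand-in for your model; supply your own
    # Z-machine game file (e.g. zork1.z5).
    from jericho import FrotzEnv

    def query_llm(prompt):
        """Hypothetical: send the transcript to a model, get back one command."""
        raise NotImplementedError

    env = FrotzEnv("zork1.z5")  # assumption: path to a local game file
    obs, info = env.reset()
    history = [obs]

    done = False
    while not done:
        # Show the model the recent transcript and ask for the next command.
        prompt = "\n".join(history[-10:]) + "\n> "
        action = query_llm(prompt).strip()
        obs, reward, done, info = env.step(action)
        history.extend(["> " + action, obs])

    print("Final score:", env.get_score(), "/", env.get_max_score())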