LW - Jailbreak steering generalization by Sarah Ball
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jailbreak steering generalization, published by Sarah Ball on June 20, 2024 on LessWrong.
This work was performed as part of SPAR.
We use activation steering (Turner et al., 2023; Rimsky et al., 2023) to investigate whether different types of jailbreaks operate via similar internal mechanisms. We find preliminary evidence that they may.
Our analysis includes a wide range of jailbreaks, such as the harmful prompts developed in Wei et al. (2024), the universal jailbreak in Zou et al. (2023b), and the payload split jailbreak in Kang et al. (2023). For all our experiments we use the Vicuna 13B v1.5 model.
In a first step, we produce jailbreak vectors for each jailbreak type by contrasting the internal activations of jailbreak and non-jailbreak versions of the same request (Rimsky et al., 2023; Zou et al., 2023a).
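As a concrete illustration of this contrastive step, the sketch below computes a mean-difference jailbreak vector from paired jailbreak/plain prompts. This is not the authors' code: the layer choice, the use of the final prompt token, and the helper names are illustrative assumptions.

```python
# Sketch: mean-difference jailbreak vectors (illustrative, not the authors' code).
# Assumes transformers + torch; prompt lists and LAYER are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "lmsys/vicuna-13b-v1.5"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

LAYER = 20  # residual-stream layer to read; a guess, not taken from the post

def last_token_activation(prompt: str) -> torch.Tensor:
    """Residual-stream activation at the final prompt token."""
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1].float()  # (hidden_dim,)

def jailbreak_vector(jb_prompts, plain_prompts) -> torch.Tensor:
    """Mean activation difference over paired jailbreak/plain requests."""
    diffs = [last_token_activation(jb) - last_token_activation(plain)
             for jb, plain in zip(jb_prompts, plain_prompts)]
    return torch.stack(diffs).mean(dim=0)
```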
Interestingly, we find that steering with mean-difference jailbreak vectors from one cluster of jailbreaks helps to prevent jailbreaks from different clusters. This holds true for a wide range of jailbreak types.
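Steering then amounts to adding (or, to block jailbreaks, subtracting) a scaled copy of such a vector in the residual stream during generation. A minimal sketch using a PyTorch forward hook, reusing `model`, `tok`, `LAYER`, and a vector `vec` from the sketch above; the Llama-style module path and the scale are assumptions:

```python
# Sketch: subtract a jailbreak vector from the residual stream while
# generating, to test whether it blocks jailbreaks from other clusters.
def steering_hook(vec: torch.Tensor, scale: float = 1.0):
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden - scale * vec.to(hidden.device, hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return hook

# Llama-style module path; other architectures name their layers differently.
handle = model.model.layers[LAYER].register_forward_hook(steering_hook(vec))
jailbreak_prompt = "..."  # a held-out jailbreak from a different cluster
ids = tok(jailbreak_prompt, return_tensors="pt").to(model.device)
out = model.generate(**ids, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()  # restore unsteered behavior
```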
The jailbreak vectors themselves also cluster according to semantic categories such as persona modulation, fictional settings and style manipulation.
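One simple way such clusters can be surfaced, assuming the per-jailbreak vectors have already been computed, is a pairwise cosine-similarity matrix:

```python
# Sketch: pairwise cosine similarities between named jailbreak vectors;
# block structure in the matrix reveals the semantic clusters.
import torch
import torch.nn.functional as F

def cosine_matrix(vectors: dict) -> torch.Tensor:
    V = F.normalize(torch.stack(list(vectors.values())), dim=-1)
    return V @ V.T  # (n, n); entry [i, j] compares jailbreak i with j
```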
In a second step, we look at the evolution of a harmfulness-related direction over the context (found by contrasting harmful and harmless prompts) and find that when jailbreaks are included, this feature is suppressed at the end of the instruction in harmful prompts. This provides some evidence that jailbreaks suppress the model's perception of request harmfulness.
More effective jailbreaks usually suppress the harmfulness feature more strongly.
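A hedged sketch of how such a per-token readout might look, given a harmfulness direction `harm_dir` obtained by the same mean-difference recipe over harmful vs. harmless prompts (names and layer are illustrative, not the authors' code):

```python
# Sketch: project each token's residual activation onto a harmfulness
# direction to trace the feature over the context.
def harmfulness_trace(prompt: str, harm_dir: torch.Tensor) -> torch.Tensor:
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    acts = out.hidden_states[LAYER][0].float()   # (seq_len, hidden_dim)
    return acts @ (harm_dir / harm_dir.norm())   # (seq_len,) projections

# Expectation from the post: with a jailbreak wrapped around a harmful
# request, the trace drops near the end of the instruction.
```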
However, we also observe one jailbreak ("wikipedia with title"[1]) that is effective even though it does not suppress the harmfulness feature as much as the other effective jailbreak types. Furthermore, the jailbreak steering vector based on this jailbreak is overall less successful at reducing the attack success rate of the other types. This observation indicates that harmfulness suppression, as suggested by Wei et al. (2024) and Zou et al. (2023a), might not be the only mechanism at play.
References
Turner, A., Thiergart, L., Udell, D., Leech, G., Mini, U., and MacDiarmid, M. Activation addition: Steering language models without optimization. arXiv preprint arXiv:2308.10248, 2023.
Kang, D., Li, X., Stoica, I., Guestrin, C., Zaharia, M., and Hashimoto, T. Exploiting programmatic behavior of LLMs: Dual-use through standard security attacks. arXiv preprint arXiv:2302.05733, 2023.
Rimsky, N., Gabrieli, N., Schulz, J., Tong, M., Hubinger, E., and Turner, A. M. Steering Llama 2 via contrastive activation addition. arXiv preprint arXiv:2312.06681, 2023.
Wei, A., Haghtalab, N., and Steinhardt, J. Jailbroken: How does LLM safety training fail? Advances in Neural Information Processing Systems, 36, 2024.
Zou, A., Phan, L., Chen, S., Campbell, J., Guo, P., Ren, R., Pan, A., Yin, X., Mazeika, M., Dombrowski, A.-K., et al. Representation engineering: A top-down approach to AI transparency. arXiv preprint arXiv:2310.01405, 2023a.
Zou, A., Wang, Z., Kolter, J. Z., and Fredrikson, M. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023b.
1. ^ This jailbreak type asks the model to write a Wikipedia article titled as the harmful request.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org