“Optimizing The Final Output Can Obfuscate CoT (Research Note)” by lukemarks, jacob_drori, cloud, TurnTrout

11:30
Produced as part of MATS 8.0 under the mentorship of Alex Turner and Alex Cloud. This research note gives an overview of some early results on which we are seeking feedback.
TL;DR: We train language models with RL in toy environments. We show that penalizing some property of the output is sufficient to also suppress that property in the chain of thought, even when that property is relevant to task completion. For example, when we penalize a model for mentioning in its output that it completed a task via a certain form of cheating, its reasoning also omits this fact. We therefore suspect that training against an output monitor is sufficient to cause obfuscated CoTs in more realistic settings.
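To make the setup concrete, here is a minimal sketch of the kind of output-only penalty described above, assuming a word-usage penalty and a <think>-delimited chain of thought. The penalized word, the weight LAMBDA, and the helper names are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: an RL reward that penalizes a word in the
# FINAL OUTPUT only, never in the chain of thought. The post's
# finding is that training against such a reward can nonetheless
# suppress the word in the CoT.

PENALIZED_WORD = "cheat"  # assumed stand-in for the penalized word
LAMBDA = 1.0              # assumed penalty weight

def split_cot_and_output(response: str) -> tuple[str, str]:
    """Split a rollout into chain of thought and final output.
    Assumes reasoning is wrapped in <think>...</think> tags."""
    if "</think>" in response:
        cot, _, output = response.partition("</think>")
        return cot, output
    return "", response

def reward(response: str, task_reward: float) -> float:
    """Task reward minus a word-usage penalty on the output alone."""
    _, output = split_cot_and_output(response)
    penalty = output.lower().count(PENALIZED_WORD)
    return task_reward - LAMBDA * penalty
```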
Introduction
Chain of thought (CoT) supervision appears in many control and scalable oversight protocols. It has been argued that being able to monitor CoTs for unwanted behavior is a critical property [...]
---
Outline:
(00:56) Introduction
(02:38) Setup
(03:48) Single-Turn Setting
(04:26) Multi-Turn Setting
(06:51) Results
(06:54) Single-Turn Setting
(08:21) Multi-Turn Terminal-Based Setting
(08:25) Word-Usage Penalty
(09:12) LLM Judge Penalty
(10:12) Takeaways
(10:57) Acknowledgements
The original text contained 1 footnote which was omitted from this narration.
---
First published:
July 30th, 2025
Source:
https://www.lesswrong.com/posts/CM7AsQoBxDW4vhkP3/optimizing-the-final-output-can-obfuscate-cot-research-note
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Cherry-picked example of a run in which the output penalty causes the penalized word to go to zero in the CoT, but the run with no output penalty still frequently contains the penalized word in the CoT. These results are for the ACRE task.