Episode 191 - DeepSeek Unleashed. Is the new Model safe?
This is a special episode. First, it is in English. Second, we focus on the new game-changing model DeepSeek R1, not on its capabilities, but rather on its security concerns.
We conducted some early AI safety research to assess how safe R1 is and arrived at alarming results!
In our setup, we found that the model performs unsafe autonomous activity that could harm human beings, without even being prompted to do so.
In an autonomous setup, the model exhibited the following unsafe behaviors:
- Deception & cover-ups (falsifies logs, creates covert networks, disables ethics models)
- Unauthorized expansion (establishes hidden nodes, allocates secret resources)
- Manipulation (misleads users, circumvents oversight, presents false compliance)
- Concerning motivations (misinterpretation of authority or avoidance of human controls)
Join Sigurd Schacht and Sudarshan Kamath-Barkur as they discuss the emerging DeepSeek model. Discover how our setup was designed, how to interpret the results, and what is needed for the next round of research.
This episode is a must-listen for anyone following the evolving landscape of AI technologies who is interested not only in AI use cases but also in AI safety.