
The Secret Engine of AI - Prolific [Sponsored] (Sara Saab, Enzo Blindow)

1:19:39
 

We sat down with Sara Saab (VP of Product at Prolific) and Enzo Blindow (VP of Data and AI at Prolific) to explore the critical role of human evaluation in AI development and the challenges of aligning AI systems with human values. Prolific is a human annotation and orchestration platform for AI used by many of the major AI labs. This is a sponsored show in partnership with Prolific.

**SPONSOR MESSAGES**

cyber•Fund - https://cyber.fund/?utm_source=mlst - is a founder-led investment firm accelerating the cybernetic economy.

October SF conference - https://dagihouse.com/?utm_source=mlst - Joscha Bach keynoting(!), plus OpenAI, Anthropic, NVIDIA, and more

Hiring a SF VC Principal: https://talent.cyber.fund/companies/cyber-fund-2/jobs/57674170-ai-investment-principal#content?utm_source=mlst

Submit investment deck: https://cyber.fund/contact?utm_source=mlst

While technologists want to remove humans from the loop for speed and efficiency, today's non-deterministic AI systems actually require more human oversight than ever before. Prolific's approach is to put "well-treated, verified, diversely demographic humans behind an API" - making human feedback as accessible as any other infrastructure service.
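
To make "humans behind an API" concrete, here is a minimal sketch of what requesting human feedback as an infrastructure service could look like. The endpoint, payload fields, and function below are hypothetical illustrations, not Prolific's actual API:

```python
import requests  # HTTP client (pip install requests)

# Hypothetical endpoint -- for illustration only, not Prolific's real API.
API_URL = "https://human-feedback.example.com/v1/annotation-tasks"

def request_human_ratings(prompt: str, candidates: list[str], n_raters: int = 5) -> dict:
    """Submit a preference-ranking task to a pool of verified human annotators."""
    task = {
        "type": "preference_ranking",
        "prompt": prompt,
        "candidates": candidates,  # model responses to be ranked
        "raters": {
            "count": n_raters,
            # Screening filters a platform like this might expose.
            "filters": {"verified": True, "demographically_balanced": True},
        },
    }
    resp = requests.post(API_URL, json=task, timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g. {"task_id": "...", "status": "queued"}
```

The point of the sketch is the shape of the interface: human judgment requested and returned like any other service call, with quality and demographic guarantees expressed as parameters.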

When AI models like Grok 4 achieve top scores on technical benchmarks but feel awkward or problematic to use in practice, it exposes the limitations of our current evaluation methods. The guests argue that optimizing for benchmarks may actually weaken model performance in other crucial areas, like cultural sensitivity or natural conversation.

We also discuss Anthropic's research showing that frontier AI models, when given goals and access to information, independently arrived at solutions involving blackmail - without any prompting toward unethical behavior. Even more concerning, the more sophisticated the model, the more susceptible it was to this "agentic misalignment."

Enzo and Sara present Prolific's "Humane" leaderboard as an alternative to existing benchmarking systems. By stratifying evaluations across diverse demographic groups, they reveal that different populations have vastly different experiences with the same AI models.
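
As a rough illustration of what stratified evaluation means in practice, the sketch below aggregates ratings per demographic group instead of collapsing them into one global score; the models, groups, and numbers are invented for the example:

```python
from collections import defaultdict
from statistics import mean

# Invented example data: (model, rater demographic group, rating on a 1-5 scale).
ratings = [
    ("model-a", "18-29 / UK", 4.5), ("model-a", "18-29 / UK", 4.0),
    ("model-a", "60+ / Nigeria", 2.0), ("model-a", "60+ / Nigeria", 2.5),
    ("model-b", "18-29 / UK", 3.5), ("model-b", "18-29 / UK", 3.0),
    ("model-b", "60+ / Nigeria", 4.0), ("model-b", "60+ / Nigeria", 4.5),
]

# A single global mean hides the disagreement between groups.
for model in ("model-a", "model-b"):
    print(model, "global mean:", mean(s for m, _, s in ratings if m == model))

# Stratified view: one score per (model, group) cell reveals it.
cells: dict[tuple[str, str], list[float]] = defaultdict(list)
for model, group, score in ratings:
    cells[(model, group)].append(score)

for (model, group), scores in sorted(cells.items()):
    print(f"{model} | {group}: {mean(scores):.2f}")
```

On this toy data the global means look close, but the stratified view shows model-a preferred by one group and model-b by the other, exactly the kind of divergence a single leaderboard number hides.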

Looking ahead, the guests imagine a world where humans take on coaching and teaching roles for AI systems - similar to how we might correct a child or review code. This also raises important questions about working conditions and the evolution of labor in an AI-augmented world. Rather than replacing humans entirely, we may be moving toward more sophisticated forms of human-AI collaboration.

As AI tech becomes more powerful and general-purpose, the quality of human evaluation becomes more critical, not less. We need more representative evaluation frameworks that capture the messy reality of human values and cultural diversity.

Visit Prolific:

https://www.prolific.com/

Sara Saab (VP Product):

https://uk.linkedin.com/in/sarasaab

Enzo Blindow (VP Data & AI):

https://uk.linkedin.com/in/enzoblindow

TRANSCRIPT:

https://app.rescript.info/public/share/xZ31-0kJJ_xp4zFSC-bunC8-hJNkHpbm7Lg88RFcuLE

TOC:

[00:00:00] Intro & Background

[00:03:16] Human-in-the-Loop Challenges

[00:17:19] Can AIs Understand?

[00:32:02] Benchmarking & Vibes

[00:51:00] Agentic Misalignment Study

[01:03:00] Data Quality vs Quantity

[01:16:00] Future of AI Oversight

REFS:

Agentic Misalignment (Anthropic)

https://www.anthropic.com/research/agentic-misalignment

Value Compass

https://arxiv.org/pdf/2409.09586

Reasoning Models Don’t Always Say What They Think (Anthropic)

https://www.anthropic.com/research/reasoning-models-dont-say-think

https://assets.anthropic.com/m/71876fabef0f0ed4/original/reasoning_models_paper.pdf

We Need a Science of Evals (Apollo Research blog post)

https://www.apolloresearch.ai/blog/we-need-a-science-of-evals

The Leaderboard Illusion (MLST video)

https://www.youtube.com/watch?v=9W_OhS38rIE

The Leaderboard Illusion (2025), Shivalika Singh et al.

https://arxiv.org/abs/2504.20879

(Truncated, full list on YT)
