Artwork

Inhalt bereitgestellt von Kashif Manzoor. Alle Podcast-Inhalte, einschließlich Episoden, Grafiken und Podcast-Beschreibungen, werden direkt von Kashif Manzoor oder seinem Podcast-Plattformpartner hochgeladen und bereitgestellt. Wenn Sie glauben, dass jemand Ihr urheberrechtlich geschütztes Werk ohne Ihre Erlaubnis nutzt, können Sie dem hier beschriebenen Verfahren folgen https://de.player.fm/legal.
Player FM - Podcast-App
Gehen Sie mit der App Player FM offline!

AI Security and Agentic Risks Every Business Needs to Understand with Alexander Schlager

27:42
 
Teilen
 

Manage episode 506108095 series 2922369
Inhalt bereitgestellt von Kashif Manzoor. Alle Podcast-Inhalte, einschließlich Episoden, Grafiken und Podcast-Beschreibungen, werden direkt von Kashif Manzoor oder seinem Podcast-Plattformpartner hochgeladen und bereitgestellt. Wenn Sie glauben, dass jemand Ihr urheberrechtlich geschütztes Werk ohne Ihre Erlaubnis nutzt, können Sie dem hier beschriebenen Verfahren folgen https://de.player.fm/legal.

In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation.

Chapters

00:00 Introduction to AI Security and eIceberg 02:45 The Evolution of AI Explainability 05:58 Runtime Protection and AI Safety 07:46 Adoption Patterns in AI Security 10:51 Agentic AI: Risks and Management 13:47 Building Effective Agentic AI Workflows 16:42 Governance and Compliance in AI 19:37 The Role of Cybersecurity in AI Innovation 22:36 Lessons Learned and Future Directions

Episode # 166

Today’s Guest: Alexander Schlager, Founder and CEO of AIceberg.ai

He's founded a next-generation AI cybersecurity company that’s revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools — he’s restoring trust in an era of automation.

What Listeners Will Learn:

  • Why real-time AI security and runtime protection are essential for safe deployments
  • How explainable AI builds trust with users and regulators
  • The unique risks of agentic AI and how to manage them responsibly
  • Why AI safety and governance are becoming strategic priorities for companies
  • How education, awareness, and upskilling help close the AI skills gap
  • Why natural language processing (NLP) is becoming the default interface for enterprise technology

Keywords:

AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning

Resources:

  continue reading

167 Episoden

Artwork
iconTeilen
 
Manage episode 506108095 series 2922369
Inhalt bereitgestellt von Kashif Manzoor. Alle Podcast-Inhalte, einschließlich Episoden, Grafiken und Podcast-Beschreibungen, werden direkt von Kashif Manzoor oder seinem Podcast-Plattformpartner hochgeladen und bereitgestellt. Wenn Sie glauben, dass jemand Ihr urheberrechtlich geschütztes Werk ohne Ihre Erlaubnis nutzt, können Sie dem hier beschriebenen Verfahren folgen https://de.player.fm/legal.

In this episode of Open Tech Talks, we delve into the critical topics of AI security, explainability, and the risks associated with agentic AI. As organizations adopt Generative AI and Large Language Models (LLMs), ensuring safety, trust, and responsible usage becomes essential. This conversation covers how runtime protection works as a proxy between users and AI models, why explainability is key to user trust, and how cybersecurity teams are becoming central to AI innovation.

Chapters

00:00 Introduction to AI Security and eIceberg 02:45 The Evolution of AI Explainability 05:58 Runtime Protection and AI Safety 07:46 Adoption Patterns in AI Security 10:51 Agentic AI: Risks and Management 13:47 Building Effective Agentic AI Workflows 16:42 Governance and Compliance in AI 19:37 The Role of Cybersecurity in AI Innovation 22:36 Lessons Learned and Future Directions

Episode # 166

Today’s Guest: Alexander Schlager, Founder and CEO of AIceberg.ai

He's founded a next-generation AI cybersecurity company that’s revolutionizing how we approach digital defense. With a strong background in enterprise tech and a visionary outlook on the future of AI, Alexander is doing more than just developing tools — he’s restoring trust in an era of automation.

What Listeners Will Learn:

  • Why real-time AI security and runtime protection are essential for safe deployments
  • How explainable AI builds trust with users and regulators
  • The unique risks of agentic AI and how to manage them responsibly
  • Why AI safety and governance are becoming strategic priorities for companies
  • How education, awareness, and upskilling help close the AI skills gap
  • Why natural language processing (NLP) is becoming the default interface for enterprise technology

Keywords:

AI security, generative AI, agentic AI, explainability, runtime protection, cybersecurity, compliance, AI governance, machine learning

Resources:

  continue reading

167 Episoden

כל הפרקים

×
 
Loading …

Willkommen auf Player FM!

Player FM scannt gerade das Web nach Podcasts mit hoher Qualität, die du genießen kannst. Es ist die beste Podcast-App und funktioniert auf Android, iPhone und im Web. Melde dich an, um Abos geräteübergreifend zu synchronisieren.

 

Kurzanleitung

Hören Sie sich diese Show an, während Sie die Gegend erkunden
Abspielen