Player FM - Internet Radio Done Right
Content provided by SAS Podcast Admins, Kimberly Nevala, and Strategic Advisor - SAS. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by SAS Podcast Admins, Kimberly Nevala, and Strategic Advisor - SAS or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://de.player.fm/legal.
Pondering AI
How is the use of artificial intelligence (AI) shaping our human experience? Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse. All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
71 episodes
All episodes
Dr. Ash Watson studies how stories ranging from classic Sci-Fi to modern tales invoking moral imperatives, dystopian futures and economic logic shape our views of AI. Ash and Kimberly discuss the influence of old Sci-Fi on modern tech; why we can’t escape the stories we’re told; how technology shapes society; acting in ways a machine will understand; why the language we use matters; value transference from humans to AI systems; the promise of AI’s promise; grounding AI discourse in material realities; moral imperatives and capitalizing on crises; economic investment as social logic; AI’s claims to innovation; who innovation is really for; and positive developments in co-design and participatory research. Dr. Ash Watson is a Scientia Fellow and Senior Lecturer at the Centre for Social Research in Health at UNSW Sydney. She is also an Affiliate of the Australian Research Council (ARC) Centre of Excellence for Automated Decision-Making and Society (CADMS). Related Resources: Ash Watson (Website): https://awtsn.com/ The promise of artificial intelligence in health: Portrayals of emerging healthcare technologies (Article): https://doi.org/10.1111/1467-9566.13840 An imperative to innovate? Crisis in the sociotechnical imaginary (Article): https://doi.org/10.1016/j.tele.2024.102229 A transcript of this episode is here.
Robert Mahari examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration. Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn’t negate accountability; AI’s negative impact on the data commons; economic disincentives; interdisciplinary collaboration and future research. Robert Mahari is a JD-PhD researcher at MIT Media Lab and the Harvard Law School where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and embedding regulatory objectives and guardrails into AI designs. A transcript of this episode is here. Additional Resources: The Allure of Addictive Intelligence (article): https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/ Robert Mahari (website): https://robertmahari.com/
Phaedra Boinodiris minds the gap between AI access and literacy by integrating educational silos, practicing human-centric design, and cultivating critical consumers. Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn’t enough; and the hard work required to develop good AI. Phaedra Boinodiris is IBM’s Global Consulting Leader for Trustworthy AI and co-author of the book AI for the Rest of Us. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all. A transcript of this episode is here. Additional Resources: Phaedra’s Website - https://phaedra.ai/ The Future World Alliance - https://futureworldalliance.org/
Ryan Carrier trues up the benefits and costs of responsible AI while debunking misleading narratives and underscoring the positive power of the consumer collective. Ryan and Kimberly discuss the growth of AI governance; predictable resistance; the (mis)belief that safety impedes innovation; the “cost of doing business”; downside and residual risk; unacceptable business practices; regulatory trends and the law; effective disclosures and deceptive design; the value of independence; auditing as a business asset; the AI lifecycle; ethical expertise and choice; ethics boards as advisors not activists; and voting for beneficial AI with our wallets. A transcript of this episode is here. Ryan Carrier is the Executive Director of ForHumanity, a non-profit organization improving AI outcomes through increased accountability and oversight.
Olivia Gambelin values ethical innovation, revels in human creativity and curiosity, and advocates for AI systems that reflect and enable human values and objectives. Olivia and Kimberly discuss philogagging; us vs. “them” (i.e. AI systems) comparisons; enabling curiosity and human values; being accountable for the bombs we build - figuratively speaking; AI models as the tip of the iceberg; literacy, values-based judgement and trust; replacing proclamations with strong living values; The Values Canvas; inspired innovations; falling back in love with technology; foundational risk practices; optimism and valuing what matters. A transcript of this episode is here. Olivia Gambelin is a renowned AI Ethicist and the Founder of Ethical Intelligence, the world’s largest network of Responsible AI practitioners. An active researcher, policy advisor and entrepreneur, Olivia helps executives and product teams innovate confidently with AI. Additional Resources: Responsible AI: Implement an Ethical Approach in Your Organization - Book Plato & a Platypus Walk Into a Bar: Understanding Philosophy Through Jokes - Book The Values Canvas - RAI Design Tool Women Shaping the Future of Responsible AI - Organization In Pursuit of Good Tech | Subscribe - Newsletter
Helen Beetham isn’t waiting for an AI upgrade as she considers what higher education is for, why learning is ostensibly ripe for AI, and how to diversify our course. Helen and Kimberly discuss the purpose of higher education; the current two-tribe moment; systemic effects of AI; rethinking learning; GenAI affordances; the expertise paradox; productive developmental challenges; converging on an educational norm; teachers as data laborers; the data-driven personalization myth; US edtech and instrumental pedagogy; the fantasy of AI’s teacherly behavior; students as actors in their learning; critical digital literacy; a story of future education; AI-ready graduates; pre-automation and AI adoption; diversity of expression and knowledge; two-tiered educational systems; and the rich heritage of universities. Helen Beetham is an educator, researcher and consultant who advises universities and international bodies worldwide on their digital education strategies. Helen is also a prolific author whose publications include “Rethinking Pedagogy for a Digital Age”. Her Substack, Imperfect Offerings, is recommended by the Guardian/Observer for its wise and thoughtful critique of generative AI. Additional Resources: Imperfect Offerings - https://helenbeetham.substack.com/ Audrey Watters - https://audreywatters.com/ Kathryn (Katie) Conrad - https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/ Anna Mills - https://www.linkedin.com/in/anna-mills-oer/ Dr. Maya Indira Ganesh - https://www.linkedin.com/in/dr-des-maya-indira-ganesh/ Tech(nically) Politics - https://www.technicallypolitics.org/ LOG OFF - logoffmovement.org/ Rest of World - www.restofworld.org/ Derechos Digitales - www.derechosdigitales.org A transcript of this episode is here.
Steven Kelts engages engineers in ethical choice, enlivens training with role-playing, exposes organizational hazards and separates moral qualms from a duty to care. Steven and Kimberly discuss Ashley Casovan’s inspiring query; the affirmation allusion; students as stochastic parrots; when ethical sophistication backfires; limits of ethics review boards; engineers and developers as core to ethical design; assuming people are good; 4 steps of ethical decision making; inadvertent hotdog theft; organizational disincentives; simulation and role-playing in ethical training; avoiding cognitive overload; reorienting ethical responsibility; guns, ethical qualms and care; and empowering engineers to make ethical choices. Steven Kelts is a lecturer in Princeton’s University Center for Human Values (UCHV) and affiliated faculty in the Center for Information Technology Policy (CITP). Steve is also an ethics advisor to the Responsible AI Institute and Director of All Tech is Human’s Responsible University Network. Additional Resources: Princeton Agile Ethics Program: https://agile-ethics.princeton.edu CITP Talk 11/19/24: Agile Ethics Theory and Evidence Oktar, Lomborozo et al: Changing Moral Judgements 4-Stage Theory of Ethical Decision Making: An Introduction Enabling Engineers through “Moral Imagination” (Google) A transcript of this episode is here.
Susie Alegre makes the case for prioritizing human rights and connection, taking AI systems to account, minding the right gaps, and resisting unwitting AI dependency. Susie and Kimberly discuss the Universal Declaration of Human Rights (UDHR); legal protections and access to justice; human rights laws; how court cases impact legislative will; the wicked problem of companion AI; abdicating accountability for AI systems; Stepford Wives and gynoid robots; human connection and agency; minding the wrong gaps with AI systems; AI dogs vs. AI pooper scoopers; the reality of care and legal work; writing to think; cultural heritage and creativity; pausing for thought; unwittingly becoming dependent on AI; and prioritizing people over technology. Susie Alegre is an acclaimed international human rights lawyer and the author of Freedom to Think: The Long Struggle to Liberate Our Minds and Human Rights, Robot Wrongs: Being Human in the Age of AI. She is also a Senior Fellow at the Centre for International Governance and Innovation (CIGI) and Founder of the Island Rights Initiative. Learn more at her website: Susie Alegre A transcript of this episode is here.
Eryk Salvaggio articulates myths animating AI design, illustrates the nature of creativity and generated media, and artfully reframes the discourse on GenAI and art. Eryk joined Kimberly to discuss myths and metaphors in GenAI design; the illusion of control; if AI saves time and what for; not relying on futuristic AI to solve problems; the fallacy of scale; the dehumanizing narrative of human equivalence; positive biases toward AI; why asking ‘is the machine creative’ misses the mark; creative expression and meaning making; what AI generated art represents; distinguishing archives from datasets; curation as an act of care; representation and context in generated media; the Orwellian view of mass surveillance as anonymity; complicity and critique of GenAI tools; abstraction and noise; and what we aren’t doing when we use GenAI. Eryk Salvaggio is a new media artist, Visiting Professor in Humanities, Computing and Design at the Rochester Institute of Technology, and an Emerging Technology Research Advisor at the Siegel Family Endowment. Eryk is also a researcher on the AI Pedagogies Project at Harvard University’s metaLab and lecturer on Responsible AI at Elisava Barcelona School of Design and Engineering. Additional Resources: Cybernetic Forests: mail.cyberneticforests.com The Age of Noise: https://mail.cyberneticforests.com/the-age-of-noise/ Challenging the Myths of Generative AI: https://www.techpolicy.press/challenging-the-myths-of-generative-ai/ A transcript of this episode is here.
Geertrui Mieke de Ketelaere reflects on the uncertain trajectory of AI, whether AI is socially or environmentally sustainable, and using AI to become good ancestors. Mieke joined Kimberly to discuss the current trajectory of AI; uncertainties created by current AI applications; the potent intersection of humanlike AI and heightened social/personal anxiety; Russian nesting dolls (matryoshka) as an analogy for AI systems; challenges with open source AI; the current state of public literacy and regulation; the Safe AI Companion Collective; social and environmental sustainability; expanding our POV beyond human intelligence; and striving to become good ancestors in our use of AI and beyond. A transcript of this episode is here. Geertrui Mieke de Ketelaere is an engineer, strategic advisor and Adjunct Professor of AI at Vlerick Business School focused on sustainable, ethical, and trustworthy AI. A prolific author, speaker and researcher, Mieke is passionate about building bridges between business, research and government in the domain of AI. Learn more about Mieke’s work here: www.gmdeketelaere.com
Vaishnavi J respects youth, advises considering the youth experience in all digital products, and asserts age-appropriate design is an underappreciated business asset. Vaishnavi joined Kimberly to discuss: the spaces youth inhabit online; the four pillars of safety by design; age-appropriate design choices; kids’ unique needs and vulnerabilities; what both digital libertarians and abstentionists get wrong; why great experiences and safety aren’t mutually exclusive; how younger cohorts perceive harm; centering youth experiences; business benefits of age-appropriate design; KOSPA and the duty of care; implications for content policy and product roadmaps; the youth experience as digital table stakes and an engine of growth. A transcript of this episode is here. Vaishnavi J is the founder and principal of Vyanams Strategies (VYS), helping companies, civil society, and governments build healthier online communities for young people. VYS leverages extensive experience at leading technology companies to develop tactical product and policy solutions for child safety and privacy. These include product guidance, content policies, operations workflows, trust & safety strategies, and organizational design. Additional Resources: Monthly Youth Tech Policy Brief: https://quire.substack.com
Kathleen Walch and Ron Schmelzer analyze AI patterns and factors hindering adoption, why AI is never ‘set it and forget it’, and the criticality of critical thinking. The dynamic duo behind Cognilytica (now PMI) join Kimberly to discuss: the seven (7) patterns of AI; fears and concerns stymying AI adoption; the tension between top-down and bottom-up AI adoption; the AI value proposition; what differentiates CPMAI from good old-fashioned project management; AI’s Red Queen moment; critical thinking as a uniquely human skill; the DKIUW pyramid and limits of machine understanding; why you can’t sit AI out. A transcript of this episode is here. Kathleen Walch and Ron Schmelzer are the co-founders of Cognilytica, an AI research and analyst firm which was acquired by PMI (Project Management Institute) in September 2024. Their work, which includes the CPMAI project management methodology and the top-rated AI Today podcast, focuses on enabling AI adoption and skill development. Additional Resources: CPMAI certification: https://courses.cognilytica.com/ AI Today podcast: https://www.cognilytica.com/aitoday/
Dr. Marisa Tschopp explores our evolving, often odd, expectations for AI companions while embracing radical empathy, resisting relentless PR and trusting in humanity. Marisa and Kimberly discuss recent research into AI-based conversational agents, the limits of artificial companionship, implications for mental health therapy, the importance of radical empathy and differentiation, why users defy simplistic categorization, corporate incentives and rampant marketing gags, reasons for optimism, and retaining trust in human connections. A transcript of this episode is here. Dr. Marisa Tschopp is a Psychologist, a Human-AI Interaction Researcher at scip AG and an ardent supporter of Women in AI. Marisa’s research focuses on human-AI relationships, trust in AI, agency, behavioral performance assessment of conversational systems (A-IQ), and gender issues in AI. Additional Resources: The Impact of Human-AI Relationship Perception on Voice Shopping Intentions in Human Machine Collaboration (Publication) How do users perceive their relationship with conversational AI? (Publication) KI als Freundin: Funktioniert eine Chatbot-Beziehung? (TV Show, German, SRF) Friends with AI? It’s complicated! (TEDxBoston Talk)
John Danaher assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself. John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six (6) mechanisms by which technology may reshape ethical principles and social norms. John further illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects. Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating the risks posed after a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself. John Danaher is a Senior Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press, 2019). Papers referenced in this episode include The Ethics of Personalized Digital Duplicates: A Minimal Viability Principle and How Technology Alters Morality and Why It Matters. A transcript of this episode is here.
Ben Bland expressively explores emotive AI’s shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions. Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult. He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs explicit design and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine’s emotional capability. Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI’s potential, he counsels proceeding with caution. Ben Bland is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the IEEE P7014 Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of IEEE P7014.1 Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems. A transcript of this episode is here.