Player FM - Internet Radio Done Right
Checked 2M ago
Added twenty-three weeks ago
Content provided by Ilona Vinogradova. All podcast content, including episodes, graphics and podcast descriptions, is uploaded and provided directly by Ilona Vinogradova or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://de.player.fm/legal.
AI-FLUENT Podcast
AI-Fluent is my new podcast where I talk with storytellers from around the world about journalism and storytelling in all its shapes and forms, its marriage with AI and other technology, and innovative thinking. Most of my guests are from the Global South - Latin America, Asia, the Middle East and Africa - so it's a rare opportunity for those of you who are interested in the subject to listen to people with different perspectives, different challenges, and the solutions they have to offer.
13 Episodes
All Episodes
AI-FLUENT Podcast

Brazil's Aos Fatos' Director of Innovation on Creating Bots that Tackle Misinformation and More Imaginative Ways of Using AI in Journalism 48:33
In a new episode of AI-FLUENT, I am talking to Bruno Fávero, a journalist who became Director of Innovation at Aos Fatos, one of Brazil's leading fact-checking websites. They have developed their own bots that tackle misinformation, as well as tools that document not only digital lies and hate but also how the "distorted algorithms" of apps and platforms contribute to their spread. So, what can we all learn from Aos Fatos' business model, with its focus on tech and fact-checking?
Main Things We've Discussed:
- How to become a Director of Innovation without a tech background
- Apart from investment in tech, what else contributes to Aos Fatos' success
- Aos Fatos' business model and how they create their own tech products
- "We create tools to solve a specific problem, not for the sake of creating a new tool"
- What the Fatima bot is and how it helps to fight misinformation
- What kind of relationship Aos Fatos has with its audience
- How they try to reach out to Gen Z
- The word "innovation" and how it has become empty
- A new tool to fact-check live events/debates, etc.
- Distorted algorithms and Aos Fatos' project called Golpeflix
- Social media platforms, how they became unhealthy, and how journalists can navigate them to distribute quality journalism
- How the perception of facts and truth has changed in Brazil in recent years
- How the media industry took people's trust for granted and now needs to earn it by being more transparent and diligent
- Relationships with Meta and other Big Tech companies: liability, yet necessity? Can these relationships be re-negotiated?
- How social media has contributed to the loss of trust in professional journalists
- The biggest challenge Aos Fatos faces as a newsroom and Bruno as a Director of Innovation, and what the solutions to those challenges are
- The biggest misconception of generative AI in the context of journalism/storytelling
- Ways to use generative AI more creatively - creating new user interfaces might be one of them
- A lifehack from Bruno on how to use smaller generative AI models
- The future of journalism…
AI-FLUENT Podcast

India's Tattle Co-Founder Tarunima Prabhakar on the Future of AI in Addressing Harmful Online Content and Fighting Misogyny with Equitable AI 47:19
Tattle is one of India's pioneering civic tech organisations using AI to combat online gender-based violence. In this new episode of AI-FLUENT, Tattle's co-founder Tarunima Prabhakar shares insights into Tattle's innovative projects, including Uli, which helps women navigate harmful online content, and their Deepfake Analysis Unit, deployed during India's recent elections. We explore the complex challenges of making technology accessible to less tech-savvy women while balancing relationships with Big Tech platforms and governments.
Main Topics We Discussed in This Episode:
- What is the role of civic tech organisations in India?
- One of Tattle's first projects, Uli, was aimed at helping women deal with harmful and offensive online content. How does Uli work?
- What about rural, less tech-savvy women - how can they be helped?
- How does Tattle leverage AI and machine learning to identify and combat abusive or harmful online content? What are the key technical challenges in building such systems?
- Another of Tattle's projects is the Deepfake Analysis Unit. It was introduced during India's 2024 elections but continues to operate today, in collaboration with fact-checkers and forensic experts. How does it work?
- How does Tattle work with social media platforms or other online spaces to implement its tools, and what are the biggest challenges in getting these platforms to adopt Tattle's solutions?
- On relationships with Big Tech and how they can be re-imagined/re-negotiated
- On collusion between Big Tech and governments
- There's a risk that tech solutions like Uli may only reach a small, tech-savvy subset of women. How does Tattle ensure it doesn't create a bubble that excludes those who need these tools the most?
- Where do they see the future of AI in addressing harmful online content? Are there emerging technologies or approaches that could revolutionise this space?
- On the most painful lessons learnt as co-founder of a civic tech organisation
- A lifehack from Tarunima for those who want to start their own civic tech startup
- What are Tarunima's personal criteria for impact and success?…
AI-FLUENT Podcast

Rana Arafat on AI Manipulation, Disinformation and Algorithmic Bias during India's 2024 Elections and the Israel-Gaza War 40:33
We usually talk about biased data and inaccurate interpretations regarding technology that comes from the East, China for example. Yet we rarely discuss the lack of transparency and biased data produced by Western tech companies. So it was refreshing to have this conversation with Rana Arafat, Assistant Professor in Digital Journalism at City, University of London. How are Arab newsrooms, especially in Egypt, Lebanon and the UAE, adapting to generative AI technologies? How do the governments of these countries regulate and control AI technology, and how do Big Tech companies operate in authoritarian countries in the Middle East? In this new episode of AI-FLUENT, we also discussed the Indian elections and the Israel-Gaza war through the lens of AI manipulation, disinformation and algorithmic bias.
Main Topics We Discussed in This Episode:
- The most surprising discovery for Rana whilst researching AI manipulation, generative disinformation, and algorithmic bias in the Global South
- How technology in authoritarian countries oppresses people more than empowers them
- The governments' involvement in regulating and controlling generative AI technology. The Saudi government, for example, has its own chatbot, as does the UAE government. How unbiased is their data?
- The importance of cross-pollination between journalists, developers, product designers, etc.
- Rana specifically examined Egyptian, Lebanese and United Arab Emirates newsrooms on three levels: narrative, practice and technological infrastructure. What did she discover about all three levels?
- How Big Tech companies operate in authoritarian countries in the Middle East
- Regarding algorithmic bias - how Rana researched it and her most important findings
- Algorithms as a form of censorship
- What shadowbanning is and how it was used during the Israel-Palestine war
- How pro-Palestinian content creators tried to evade social media algorithms
- The Indian elections and how generative AI technology was used and misused during the 2024 elections
- How we as a media industry and society can protect ourselves from technology which, as we see, can be used as a weapon of propaganda and misinformation. What are possible solutions?
- Regarding the teaching of future journalists - what's missing in the current education system? Is it keeping pace with all the rapid changes in the tech and media industry?…
AI-FLUENT Podcast

Marius Dragomir on How Big Tech Turned from Solution to Problem, and the Future of Public Service Media in a Hyper-personalised Storytelling World 59:20
Marius Dragomir, the Founding Director of the Media and Journalism Research Centre, shares with me the most surprising findings from his recent research on propaganda narratives, as well as revealing discoveries about the ownership and financial aspects of the 100 AI tools most used by journalists. Remember that 67% of these 100 tools lack critical data on ownership and finances. Check the tools you are using. How can the media industry become less dependent on Big Tech? One of the possible solutions lies with audiences and the private sector. We are discussing the case of the Czech Republic - what makes it so special?
Main things we've discussed in this episode:
- One of the Media and Journalism Research Centre's recent research articles was titled "Ownership and Financial Insights of AI Used in Journalism", and we discuss its findings. They looked into the 100 AI tools most used by journalists and found that only 33% of the AI tool companies demonstrate sufficient transparency, with 67% lacking critical data on ownership, finances and other basic information.
- One AI-powered fact-checking tool mentioned in the research article, Clarity, is owned by former Israeli military personnel. The owners promoted it as the best fact-checking tool for the Israeli-Palestinian war. How might this opacity affect journalistic independence and credibility in the long term?
- The impact technology is having on fact-checking in general
- The BBC's research on AI assistants' news accuracy. Their tests of ChatGPT, Copilot, Gemini and Perplexity exposed major flaws: 51% of AI responses had significant issues, 19% introduced errors when citing the BBC, and 13% misquoted or made up BBC content entirely.
- The Centre's research on disinformation and propaganda narratives in different parts of the world. Marius shares trends and examples that surprised him in terms of how these narratives are being shaped and distributed.
- Big Tech and media - a healthy or toxic relationship? How Big Tech turned from a liberating tool into an oppressive one, from a solution into a problem for many journalists
- Marius shares his observations on how Big Tech companies work closely with governments, prioritising government information on search engines and social media. We are talking about Europe here, not only authoritarian regimes.
- What are the solutions - how can the media industry become less dependent on Big Tech? One of the possible solutions lies with audiences and the private sector. The case of the Czech Republic.
- On regulations and ethics: what would be Marius's top three AI-related regulations that he would issue immediately? When regulations work and when they don't
- What media organisations need to change to survive and stay relevant
- On public service journalism: what is the biggest challenge that public service journalism in Europe faces today, and what are the possible solutions?
- Talking about the evolving media ecosystems in Europe, Marius came up with four distinct models: the Corporate model, the Public Interest model, the Captured model (high government control) and the Atomised model (journalism for sale, driven by private interests). How relevant are all of these models in a world where generative AI is becoming a new storytelling medium?
- In a world where every viewer is an audience of one, what will the perception of facts as such be? And how might that type of storytelling medium change the perception of non-fiction reporting?
- Given the trend towards hyper-personalised storytelling through AI, how might this affect the traditional role of public service media in creating shared national conversations and cultural touchpoints? Are we risking further social fragmentation? What will replace the increasingly commercialised and disengaging social media?…
AI-FLUENT Podcast

India's Factly Founder Rakesh Dubbudu on Misinformation Trends, DeepSeek's Signal to the Media Industry, and Tech Giants' Political Allegiances 58:25
Do you believe you care about facts more than people in India? For a split second, did you notice any bias in your quick mental response? Whom do you trust more when it comes to news: a well-established media brand or your closest friend who runs a popular current affairs YouTube channel? Who do you think is more likely to spread misinformation? As Meta and other tech giants show their allegiance to Trump's administration and become increasingly partisan, will financial connections with these platforms become a reputational liability for the media industry? We discuss these and many other technology, storytelling, and misinformation-related issues in this episode of AI-FLUENT with Rakesh Dubbudu, founder and CEO of India's fact-checking website, Factly.
Main Topics We Discussed in This Episode:
- How much do people in India care about facts?
- The significant silent population - they need fact-checkers, not people on the extremes
- The relationship between fact-checking websites in India: friends, competitors or enemies?
- DeepSeek: what signals does this AI tool send to the media industry?
- Censorship in Indian media
- The most frequent questions Factly's journalists ask themselves about generative AI
- Why people with generalist skills who can connect the dots will shine in the world of AI
- The biggest misconception about AI in the storytelling industry
- The biggest challenge for Factly in general and for Rakesh as CEO in particular
- Factly's relationship with Meta as a fact-checking partner after Mark Zuckerberg announced the end of fact-checking on Facebook in the US, with likely further expansion
- As Meta and other tech giants show their allegiance to Trump's administration and become increasingly partisan, will financial connections with them become a reputational liability for the media industry?
- How Factly is working to decrease its reliance on external tech platforms
- How Factly ensures unbiased data input when building their own AI tools
- How they maintain and measure public trust in their fact-checking operations amid declining trust in traditional media globally, particularly when handling politically sensitive topics in India
- The specific patterns and trends of misinformation in India
- How misinformation can be addressed holistically
- Factly's revenue model and their use of AI tools to reimagine their business model
- The book Rakesh is currently reading to better understand generative AI…
AI-FLUENT Podcast

Mariano Blejman, CEO of SmartStory.ai and Founder of Media Party, on Neuroscience and How It Helps to Understand Audience Behaviour and News Consumption 37:59
Why do we choose to read or watch something - are we manipulated by algorithms? Do we have any cognitive independence? How and where will we receive news in the near future? In a world where the majority of content will be created by AI, how will we know whom to trust? And what does neuroscience tell us about all of that? In a nutshell, these are the questions we tried to answer in this episode with Mariano Blejman, an Argentinian media entrepreneur and the founder of the Media Party festival in Buenos Aires.
Main Topics We Discussed in This Episode:
- Synthetic democracies and the role of AI in creating them
- What neuroscience tells us about why we choose certain content
- When truth becomes as scarce as water, it will start creating value
- The broken business model of the media: opportunities to fix it or rebuild from scratch
- In a world where the majority of content will be created by AI, how will we know whom to trust
- Haven't heard about a BBC car? Well, it doesn't exist yet, but it might be one of the ways people will receive their news in the future
- Narcissism as one of the media industry's problems
- How to use neuroscience to understand news consumption and audience behaviour in general. Mariano refers to Annemarie Dooling's research
- Mariano explains what his startup is doing to help newsrooms understand their audiences' behaviour. He is fascinated with neuroscience and keeps returning to it. I am becoming increasingly fascinated with it too :-)
- Mariano's predictions on the future of news consumption: content will flow to you without being asked. Your behaviour will activate a prompt, rather than you asking for a prompt - and other predictions
- How will the relationship between media and tech industries develop: marriage, divorce, or a never-ending affair?
- Examples from Latin America of how journalists use AI tools to avoid censorship
- How journalists can have more agency to influence the way generative AI technology is developing
- Mad cow world
- And about neuroscience again…
AI-FLUENT Podcast

Madhav Chinnappa on How to Get from an Electric Bike to a Jet Plane Phase of Using Generative AI in Journalism and Beyond 48:58
Technology has value but doesn't have values, Madhav Chinnappa reminds us. It's up to us humans, who have values, to define how the technology will be used and whether it will bring more good than evil. The majority of us are now in a so-called efficiency phase of using AI tools, thinking mainly about how to optimise things. However, those who jump to the creativity phase more quickly will have an advantage. Ask yourself which phase you are in now. And if you are still only in the first one, you really should be thinking about how to move on to the second, and quickly.
Main Topics We Discussed in This Episode:
- How we use AI without realising we are doing it
- From the electric bike phase of AI to the jet plane phase - and how the jet plane phase will transform storytelling
- Free tools versus paid tools
- Content creation versus business model
- Licensing and how platforms can compensate content creators: what is fair?
- Is Human Native a sort of eBay where rights holders and AI developers meet?
- Extractive versus sustaining technology
- Interest in non-English languages and how you, as a non-English speaker/data owner, can use this as an advantage
- Common patterns of how newsrooms use or misuse generative AI
- AI won't replace you - it will augment you
- The misconception of traffic and how some newsrooms will pay a price for that
- What aspects of using generative AI we are not thinking about
- Ethical considerations and risks
- The efficiency play versus the creativity, audience-focused play
- How AI technology changes your audience's behaviour and what it means for the journalism industry
- The reality that the audience's view on AI will not be defined by news; it will be defined by other parts of their life
- Should we label AI-generated content, human-created content, or both?
- Trust and the interdependency of different industries and how they use generative AI
- Do generative AI tools reduce the amount of intention and meaning in the world?
- Do we as journalists have agency to influence how AI technology is developing, and if you feel you don't - what can you do to have more agency…
AI-FLUENT Podcast

Rappler's Gemma Mendoza on Innovation, the Role of Gen Z in the Newsroom, the Proliferation of Deepfakes and Journalism's Evolution 50:14
As a media organisation, they stand on three pillars: journalism, technology and the wisdom of crowds. From the very beginning, they have worked at the intersection of journalism, technology and community engagement. It was co-founded by a woman who was awarded the Nobel Peace Prize for safeguarding freedom of expression and for her efforts to address corruption in her country. They were the first media organisation in their country to publish guidelines on AI use in 2023. They were among ten digital media organisations selected by OpenAI to participate in "innovative experiments around deliberative technologies". By now, I am sure you have guessed that in a new episode of the AI-FLUENT podcast, I am talking to Gemma Mendoza. Along with Maria Ressa and other journalists, she was one of the co-founders of Rappler, the most prominent Filipino news website. Now she leads digital innovation and disinformation research at Rappler.
Main Topics We Discussed In This Episode:
- How to balance the speed and efficiency that AI offers with Rappler's commitment to slow journalism and deep investigative reporting
- AI's impact on the relationship between journalists and their sources
- The biggest misconception about generative AI in journalism and the most surprising aspects of its development
- Rappler, one of the most famous Filipino news websites in the world, stands on three pillars: journalism, technology and the wisdom of crowds. How does their newsroom rely on this wisdom?
- How they utilise AI at the intersection of journalism, technology and community engagement
- At Rappler, they create their own AI tools in-house. What determines whether they create their own tool rather than use existing market solutions? How do they address ethical implications when using their own data, including ensuring it isn't biased?
- The role and input of Gen Z journalists in Rappler's newsroom
- Required changes in journalism schools' educational systems to better prepare future journalists for the new reality
- Gemma, who leads Rappler's efforts to address disinformation in digital media, shares recent examples of how AI tools have made their work more efficient
- Patterns and peculiarities in how people use deepfake technologies
- In 2023, Rappler became the first Filipino newsroom to publish guidelines on AI use. Gemma's recommendations for other newsrooms worldwide on approaching AI: what are the crucial aspects not to overlook?
- The future of journalism in three words
- Gemma's case for journalism: why should a 15-year-old become a journalist in this AI-driven world?…
AI-FLUENT Podcast

Are you a small newsroom or a one-person content creator, and are you, like this episode's guest, happy to fail fast and fail often? You might not have a designated tech team, yet you want to use AI-powered tools to speed up your work and solve certain problems. Are you tempted to use AI to produce even more content? Pause here and reconsider: might there be better ways to use generative AI to advance your work and develop a deeper understanding of those whom you serve? We are discussing all of this and more with Tshepo Tshabalala from LSE's JournalismAI global initiative, which helps small newsrooms devise ideas and solutions for using AI wisely and responsibly. I started this conversation by asking Tshepo what kinds of projects they work on with different newsrooms around the world.
Main Topics We Discussed In This Episode:
- Why is this network beneficial for small newsrooms applying for their fellowships?
- Quick solutions versus long-term AI strategy
- Examples of AI-powered solutions from different newsrooms
- Common misperceptions of AI-powered tools in the newsrooms JournalismAI works with
- Why does your audience come (or not come) to you? Do you really know why?
- Ethical implications of using AI in a newsroom: guidelines or no guidelines?
- Creativity versus meaning: does generative AI reduce us to recyclers of meaning rather than creators of it?
- An example of a "wow project" that Tshepo has come across recently
- Blitz questions…
AI-FLUENT Podcast

Alan Soon, Co-founder of Singapore-based Splice Media: What Do We Get Out of Good Journalism and Why Should We Pay for It? 43:42
If journalism is valuable, as many of us think, why don't people pay for it? That's the question Alan returned to several times during our conversation. And he actually gave the answer, an existential answer, to this question. What's the purpose of your newsroom? Why do you exist as a media organisation? What audience needs do you serve? If you are able to answer these questions honestly, without any corporate fluff, you come closer to answering the money question: why should people pay for your content? 'We have gone past peak content', Alan says. Nobody wakes up in the morning wanting more content - people want to get stuff done, they want a sense of community, they want to learn something new, they want to have fun, etc. The problem is, as Alan points out, that many newsrooms still operate as if people wake up wanting more content for the sake of content. Wake up!
Main Topics We Discussed in This Episode:
- How AI Enables Media Solopreneurs
- Supply versus Demand Side Thinking: How Does It Work in Journalism?
- How to Use AI to Get a Deeper Understanding of What Audiences Want
- No One Wakes Up in the Morning Wanting More Content - Why Do We Create So Much of It Then?
- What Is a Better Use of AI-powered Tools, Apart from Creating Content?
- How to Stand Out in a Crowded Content Market Wherever You Are
- We All Pay for Stuff and Services, Why Not for News?
- "What People Buy Is Very Different from What People Get, or What People Want"
- Creativity versus Recycling Old Narratives
- AI and the Nature of Originality
- How to Measure AI Impact on Revenue Generation, Audience Engagement and Credibility
- Where Does Obsession with Optimising Everything Lead Us?
- AI as a Co-thinker and Co-founder of Your Potential Start-up
- Lifehack from Alan on Using AI in the Context of Storytelling
- On the Future of Journalism
Come back and listen to us on January 9, 2025 for an all-new episode.…
AI-FLUENT Podcast

Claudia Báez from Colombia's Cuestión Pública on how to stay ahead of the game in investigative journalism 33:04
You can investigate serious corruption cases, but it doesn't mean that the way you talk about it to your audience should be super serious and boring. What if you come up with an AI-powered avatar that uses facts checked by human journalists, yet speaks to the audience in a sarcastic voice as an assistant rather than a know-it-all expert? Claudia Báez, a digital innovator and co-founder of the Colombian investigative website Cuestión Pública, thinks that investigative journalism should by no means be delivered in a way that is appealing only to men in suits over 40. She created an AI-powered tool - Odín - to help her team of journalists stay relevant to the current news agenda, reach different audiences, and, of course, save time and money. I talked to Claudia about this and much more, so maybe you, as a storyteller anywhere in the world, can be inspired and apply some of these insights in your everyday professional life.
Main topics we discussed in this episode:
- Why Odín, and how does this AI-powered tool help the team of Cuestión Pública stay relevant to audiences in Colombia?
- What's the most important question regarding AI we should ask ourselves first?
- What are the challenges investigative journalists face in Colombia, and how does technology help them solve some of those problems?
- Investigative journalism doesn't have to be delivered in a serious and often boring way. Generative AI gives us many opportunities to experiment with formats aimed at different audiences.
- On trust and transparency, and why journalists need to collaborate
- Claudia's favourite AI tools, with examples from Spain, Argentina and Venezuela
- On the future of journalism…
AI-FLUENT Podcast

Code for Africa's Chris Roper and Amanda Strydom on AI, disinformation and the future of journalism 47:41
In this episode, I am talking to Chris Roper, Deputy CEO of Code for Africa, and Amanda Strydom, Senior Programme Manager for CivicSignal, a programme within Code for Africa which maps and offers insights into media ecosystems in Africa using research and machine learning tools. We talked about how the newsrooms they work with apply AI in their professional lives. What are the AI-related issues that African newsrooms are struggling with, and what kind of solutions are they coming up with? Code for Africa is also well known for its work in tackling mis- and disinformation. Chris and Amanda talk about the different ways their organisation is helping African journalists fight disinformation in a foundational way and what the role of AI is in that effort.
Main topics we've discussed in this episode:
- The role of AI in helping journalists and citizens tackle mis/disinformation
- Generational differences in perceiving misinformation
- Ethical policies of using AI in a newsroom: how to approach them
- Who owns the data you share with AI tools
- The environmental effect of AI
- Life hacks from Chris and Amanda: how to use AI in a storytelling context
- The future of AI and how it will shape the future of journalism
- Does AI create more inequality in the Global South?
- What help journalists can get from Code for Africa and how they can collaborate with the organisation
- AI-related regulations and laws: the ideal and real scenarios
- The importance of AI tools in investigative journalism
- Exciting AI projects Chris and Amanda are working on now…
AI-Fluent is my new podcast where I talk with storytellers from around the world about journalism and storytelling in all its shapes and forms, its marriage with AI and other technology, and innovative thinking. Most of my guests are from the Global South, so it's a rare opportunity to listen to people with different perspectives, different challenges, and solutions they have to offer. New episode every Friday…
Welcome to Player FM!
Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Sign up to sync subscriptions across devices.