NPP 239 on the EU Parliament's AI Act: An Ambitious Opening Move for Regulating Artificial Intelligence

 
A European flag against a blue sky. In the middle of the flag, an artificial brain is faintly visible.

https://netzpolitik.org/wp-upload/2021/10/npp-239-ai-act.mp3

The European Parliament wants to regulate artificial intelligence more strictly. That is the thrust of a resolution voted on in early October. Above all in law enforcement, the unrestricted use of AI is to be reined in: facial recognition databases and predictive policing, for example, are to be banned. Algorithms must be made more transparent.

The resolution is, admittedly, only a non-binding proposal. Still, it indicates a clear direction. We spoke with two experts from European Digital Rights: Sarah Chander and Ella Jakubowska. Both are positively surprised by the content of the resolution, albeit with reservations. In the podcast they explain to Alexander Fanta and Chris Köver why regulating artificial intelligence is so important right now and why this is the most ambitious push in that direction so far.


Here is the MP3 for download. As always, the podcast is also available in the open Ogg format.


Links and resources

Our guests


You can listen to our podcast in many ways. The simplest: press play in the embedded player here on the page. You can also find us on iTunes, Spotify, or with the podcatcher of your choice; the feed URL is netzpolitik.org/podcast/. As always, we welcome suggestions, criticism, praise and ideas, either here in the comments or by mail to chris@netzpolitik.org or serafin.dinges@netzpolitik.org.

Sources

Interview by Alexander Fanta and Chris Köver. Hosting and production by Serafin Dinges. Also on the team: Övünç Güvenisik.


Full transcript

Serafin: Hey, Serafin here. You're listening to the netzpolitik.org podcast.

The European Parliament fears that the use of artificial intelligence endangers citizens' civil liberties. At the beginning of the month, the Parliament adopted a resolution with details on regulating AI and found remarkably clear words. It is all about the so-called AI Act.

Facial recognition, for example, is to be banned under it. Predictive policing, that is, police work based on artificial intelligence, also gets a rejection. Algorithms are to be made transparent.

The vote in the European Parliament did not get much media attention. But if the Parliament's proposals are adopted, we could be looking at the world's first and most far-reaching regulation of artificial intelligence.

Today: Alexander Fanta and Chris Köver spoke with Sarah Chander and Ella Jakubowska. Both work for the organization European Digital Rights, EDRi for short. The two experts on artificial intelligence and biometric surveillance told us why the vote was so important and what exactly is in it.

Unfortunately, the two don't speak German, so most of today's podcast is in English. You can find more German-language information at netzpolitik.org.

Serafin: Alex, what was the vote about?

Alex: The EU Parliament has been grappling for years with biometrics and the delicate question of how Europe can deal with it ethically, especially when it enters a sensitive area such as law enforcement. What the European Parliament did recently was vote on a report that, while it has no legal effect in itself, is seen as setting the tone for later decisions. In this report, the European Parliament said that there must be clear limits on the use of real-time biometric surveillance in public spaces. Sarah from EDRi has also described the vote as a kind of litmus test.

Sarah: We called this a litmus test for the general sense of the European Parliament on some of the main issues related to data protection, privacy, anti-discrimination and freedom of expression, all of these issues vis-à-vis AI. There have been a number of parliament reports on this before, but they were very general. This report specifically relates to the Parliament's position on law enforcement uses of artificial intelligence. And the reason I think the law enforcement focus of this report is a good litmus test for our rights and freedoms is because we are basically deciding: OK, what is the Parliament's view of how far invasive technology should be able to be used by the state, particularly in cases where people, individuals, groups, communities don't have very much control over it? We've seen the European Parliament and other policymakers discussing AI in consumer contexts, in various contexts where you might actually have some say over, or at least some knowledge about, whether AI systems are being used on you.

Serafin: Why was this vote something special? What was different from previous debates about AI in the European Parliament?

Alex: The EU Parliament has wrestled with its position again and again. Just a few months ago there was a vote in which the Parliament said there should be a moratorium on facial recognition in public spaces, but it could not bring itself to reject the technology entirely. What we are seeing now is that the Parliament is moving ever more strongly towards rejecting facial recognition completely on ethical grounds, and that is the debate we will then have over the actual regulation, where the Parliament will adopt its position in a few months. A good example is predictive policing, that is, the use of AI to predict possible crimes.

Sarah: There have been many studies about predictive policing, and we've seen that they disproportionately target for over-policing poor and working-class communities, racialized communities, places where people with migrant backgrounds live. So it's fundamentally a question about whether the police should be able to make decisions on the basis of things that haven't happened yet, a pre-crime behaviour department, basically. And we thought the Parliament was very progressive in this very recent report: it said that it opposes the use of AI by law enforcement authorities to make behavioural predictions about people on the basis of a bunch of criteria, including historical data and information about our past behaviour, but also location and many other things like that. So this was, in essence, a really strong stance against predictive policing, which is one of the main reasons we thought this was a very progressive step and a good sign for the future legislative process.

Chris: But there's also another point you mentioned that is in there: the ban on biometric mass surveillance. There's been a lot of discussion about facial recognition used by the police, but actually we're not just talking about facial recognition, right? Because there are other ways to use biometric surveillance. So what is the stance the Parliament is taking on this issue?

Ella: Absolutely. As EDRi and the Reclaim Your Face campaign, we've been campaigning for a long time against any use of facial recognition, or of other systems that recognize the way people walk or other things about their behaviours and bodies, when it's used in ways that lead to mass surveillance. That could be the use of facial recognition drones on people protesting, or other kinds of biometric, quote-unquote, smart systems in parks or shopping centres or schools. What we've seen in this new report from the Parliament is really democracy in action, because we've had 62,000 people from across the EU and 65 civil society organizations, from a really broad range of justice and human rights issues, all calling for a ban on biometric mass surveillance. And that's exactly what we've now seen passed in this parliamentary report: they've explicitly called for a ban on biometric mass surveillance in law enforcement contexts.

Ella: The Parliament really clearly said that it refuses to water down this protection of fundamental rights against biometric mass surveillance, because there were attempts from certain minorities in the Parliament to use fear, or to use a lot of the mythology around amazing, shiny new technologies, to argue that there shouldn't be limitations on police use of facial recognition and other biometric mass surveillance systems. And the Parliament threw that out; they said no, we are very clearly going to put people and people's rights first. Secondly, they've also reversed the burden of proof back to where it should be. We've had a lot of calls from certain parts of the Parliament and from industry saying: let's just try this stuff out, let's not regulate it, let's let it run its course as if it were somehow something natural, and then work out what to do. And this report is saying: no, absolutely not, that's not how human rights work. You need to prove that this thing is not going to harm people, not going to unduly restrict their rights to privacy, free expression and non-discrimination. And if you can't prove that, then you need to ban it. And that's exactly what the EU fundamental rights framework says we need to do.

Ella: When you look at the projects that are getting millions and millions of euros of public money to do with biometrics, it's really horrifying stuff. It's attempts to create lie detectors, despite the fact that there's no scientific basis for that. It's attempts to criminalize people on the move, attempts to see if people are suspicious. The whole ideology underneath that research and innovation can be really, really sinister. So the Parliament is again saying you shouldn't be researching and innovating with people's rights and freedoms. Privacy and equality are not something to put into a lab and test with; people are not guinea pigs. And law enforcement should not have less scrutiny and be able to do whatever they like without checks and balances; they should have more scrutiny. We've seen around the world how, when they're unchecked, the uses of these technologies by police can be some of the most profoundly harmful uses of technology in society.

Chris: Yeah, I think it's really interesting that you mention the lie detector research. I think that is technology that has been researched for border control and for deciding who is entering the European Union, right?

Ella: Yes, that's right, that's been a really notorious project called iBorderCtrl. Everything about it, starting with the reason they're trying to do it, which is to detect whether people on the move are supposedly lying, whether they're untrustworthy, is really not something we should be outsourcing to technology. There are already much wider structural problems in the immigration and border space, in that it's a very discretionary process. It's already very hard to know whether people are being treated fairly, and for those of us in civil society, for example, to contest unfair decisions. So you have an already very murky and opaque context where people are put in a very vulnerable position and are very disempowered compared to the authorities they are facing. On top of that you add technology that doesn't have a scientific basis, that's been shown to be very prejudiced, that's been shown to interpret racialized people, for example, as angrier than white people, which is a piece of research we saw in the US on a lot of the most common algorithms. We're starting from a place of really, really unjust assumptions, and by using that technology we're just amplifying them. But we think it's also really important to note that even if there were a scientific basis for this sort of thing, which there isn't, we still shouldn't be doing it, because people in the migration context deserve to be treated fairly, to have proper processes, to have things happen in a manner that they can understand and contest. Trying to make an automated lie detector part of that just tramples on so many of the really important principles of due process and the right to good administration, which are vital for people in that position to have any sort of say in what's happening to them. And as if all that's not bad enough, the whole AI border control experiment has also been done in a completely untransparent way, to the point where an MEP, Patrick Breyer, has actually taken the Commission to court for failing to disclose almost any information about what they're trying to do here with public money.

Chris: Yeah, I think it's a great example because it shows how this technology is being deployed, or how it could be deployed in the future in the European Union if we leave it entirely unregulated. A lot of the really dystopian examples we hear come from the US context, but I think this is a great example of what is already happening in the European Union as well.

Ella: Totally. And I do want to make the point that we often think: oh, China, the US, that's where these really scary things are happening. But actually, there's a lot of it going on in the EU as well. We've released a lot of reports looking at the evidence. In the Netherlands, for example, they are already using automated, AI-based systems to predict whether people are being aggressive, and using that to deploy police vehicles primed and ready to intervene in what they think is a violent conflict. For the people being marked out as aggressive, that could have profound impacts. And if you are somebody, often a racialized or minoritized person, who already faces disproportionately high amounts of police intervention, you could really be harmed by this. These trends are also replicated in other uses of biometric mass surveillance: in Germany, for example, we're seeing people going in and out of gay bars being surveilled and having their faces used to identify and track them. And in the broader sense of other uses of AI, and Sarah might be better equipped to talk about this than I am, we've also seen some really serious threats to fairness in the delivery of social services and welfare provision. So maybe, Sarah, you want to jump in on that?

Sarah: All of these systems fundamentally have a first point of contact, which is the marginalized, the poor and the criminalized in our society.

Sarah: One particular case study we've been keeping an eye on for some time is a number of risk assessment tools used in the Netherlands in social services, which overlap very much with policing. These are systems designed to test whether young people potentially pose a risk of becoming criminals in the future, of being social nuisances, or of having child protection issues. All of these have basically involved a massive, growing infrastructure of collecting data about particular young people. And when we say particular young people, we mean kids from working-class backgrounds, kids from families with a migrant background, kids who are racialized. The system in the Netherlands basically used a really wide range of data, including whether the kids had school absences, whether the kids were from families with single mothers, and whether the kids lived in areas previously known to be high-crime. And when we say known to be high-crime, that means: do they have an existing police presence and therefore a high number of arrests? And also, which crimes are we talking about? Often these systems focus on crimes that we associate with poor people, so burglaries and fights in public space, as opposed to financial crimes, the crimes we associate with rich people, white-collar crimes. So essentially, what this system was doing was collecting all of this social information and transferring it into the criminal domain, making it a criminal matter to assess people's social situation, as if that were a good indicator of whether these children were future criminals and therefore the police needed to keep an eye on them. Often this is framed in a very paternalistic way, so we had arguments for these technologies such as: well, yes, this is a form of surveillance, but the idea is that the police can keep an eye on these kids to keep them out of trouble. Collectives on the ground there have been saying that these systems have been making children's lives a nightmare. Children as young as 14, and sometimes even younger, have basically been profiled many times a week, every day. The psychological effect of that is being made to feel like a criminal, even if these kids have never done anything, and it really instils the notion that, from a very young age, certain populations simply need to be controlled by the police.

Sarah: What we're doing when we think about these systems is not just expanding the reach of the criminal justice system; we're also increasingly involving private companies in functions that were previously public. Many of these systems have been designed by a private company. PredPol, for instance, is one of the most famous companies associated with predictive policing systems. From the point of design up to the point of delivery, the private company makes huge decisions about how the system works, what type of data it uses, and how the algorithm functions in order to generate these risk scores. In essence, then, what we have is a broader problem beyond discrimination: a problem of who controls the decisions made in the public sphere, in the public function. The more we see AI used in the criminal law space, in the social services space and in migration control, the more we see public functions drift into the realm of the private sphere. And that should be a much broader concern for us in terms of democracy and the accountability of our public actors. If the decisions being made in the public sphere about social welfare, policing and migration control are, in essence, largely determined by private technologies, how can we as individuals challenge them by democratic means? This is another question, beyond these individual questions of our rights, that we need to raise, because we cannot necessarily challenge it easily in a legal forum; these are fundamentally economic and political questions.

Alex: Can you tell us a little bit about what influenced the Commission in actually writing the draft? Do you think that certain industry groups, for instance, had a large influence on the way the draft was made?

Sarah: At the Commission stage, the AI Act followed the regular process, right? There was a public consultation by the European Commission before it released the draft, so you had citizens, academics, civil society and industry feeding into that. And I think there's generally a question about the role of Big Tech in particular and lobbying on digital legislation at the EU level. If you look at the report by the Corporate Europe Observatory, you see really astounding figures about just how much Big Tech spends on lobbying the European Union institutions. I don't think the Act is an exception to that, particularly because, from the civil society side and in the Corporate Europe Observatory's mapping, you'll see the number of meetings Big Tech gets, even at the European Commission level, to influence this type of legislation; the same with the Digital Services Act and the Digital Markets Act. So we're already at a power imbalance, because civil society just cannot spend the same amount of money that Big Tech can, which therefore has more resources, but I would say also more access to the policymaking institutions. For me, that really plays out in the language that is used, in the ideology put forward around the Act. On the one hand, we are supposed to accept that this Act has fundamental rights at its centre and is designed to promote a human-centric approach to AI regulation; on the other hand, it is very heavily dominated by industry language. Huge claims are consistently made about the need not to stunt innovation, the need to promote the uptake of AI, all of this coming from a very strong industry ideology that we cannot be too strict in limiting how technology is used because it will put Europe at a disadvantage. That connects to what I said previously about the geopolitical background, the idea that Europe wants to take a role as a leader in AI to be able to compete, particularly with China. A lot of industry discourse has played on that and has emphasized that EU legislation should not be too harsh, particularly using arguments that the US and Europe need to unite, otherwise they will have already lost the battle with China, which was very much the language of the former Google CEO, who argued earlier this year against the Act even in its current form. So really, we're seeing an intense influence of industry over this process, which means you're left with a dual reality: the European Union institutions are asking us to accept that we can have both, that we cannot in any way limit innovation but that we also put fundamental rights forward. We would all like, I think, to find a middle ground. But on some of the issues we've set as red lines, facial recognition in public space, predictive policing systems, AI systems designed to detect our emotions despite the false science they are based on, these are red lines you cannot innovate your way out of, because they fundamentally compromise basic principles of freedom, anti-discrimination, et cetera. So I think the industry influence on this is massive, and it has also pushed the discourse to be somewhat contradictory, particularly on the fundamental rights side.

Alex: If I may just continue in that vein, I was wondering how member states actually influence and shape that discourse, because a lot of the representatives of European industry are also pushing this fear and feel that they need to follow the trail that the American companies have broken. So do you feel that particular member states are pushing this line of having more innovation at the expense of the human rights focus?

Ella: If I can jump in here, I think absolutely. As Sarah said, there is this very strong industry influence, and it absolutely doesn't just happen at the European Commission level. In some ways it happens even more strongly, and often more unseen for those of us in the Brussels bubble, at the member state level. We've seen, both from some individual commissioners and from a number of member state governments, this desire to be the first to have positive AI strategies, really without thinking about what it means for society, but also what it means for the environment, if churning out more and more AI becomes the end goal in itself. On a personal level, I used to work in the tech sector, and I know that data scientists and developers often try things out because they're cool, because it's amazing to see what you're able to do, rather than because you need it or because it's the right thing to do. So I've seen from the other side the need to make sure that when things go to market, when things are made available in the EU, they are the right things and not just the things that one person with no understanding of human rights and anti-discrimination thought was cool. Because when we let that happen, we've seen studies come out around the world about AI facial recognition systems that claim to be able to predict somebody's political orientation, and even their sexual orientation, based on their face structure and their expressions. That's something that goes all the way back to discredited Victorian-era race science and phrenology, with this new veneer of scientific credibility behind the AI impulse. So we really do know the cliché, I think it's Batman maybe: just because you can doesn't mean you should. And I think it's really, really important that we remember that. I do want to give credit where credit is due to the European Commission, because what they've tried to do here is incredibly difficult. This is the first comprehensive proposed AI legislation in the world, and what they've done is approach it the way you would approach consumer product safety. And we have found that there are some key ways, in the Act itself, in which that hasn't really worked; it hasn't been able to fit the context of AI and the fact that using AI, as Sarah was saying earlier, can have a really transformative impact on our societies, sometimes in ways that are really negative for us as people. So this Act has clearly not been formulated as a piece of fundamental rights legislation, and that's really where it falls down. And on the facial recognition and biometrics point we've talked about quite a lot: although there are some cautiously positive bits on biometrics, there are elements where we could even see it as deregulation, where this Act undermines important principles that we already have in EU fundamental rights law. So really, at the highest level of this framework we're looking at for AI, there are some really significant problems; it just isn't prioritizing people.

Alex: Would you say that journalists could do a better job of helping to accompany the process and reporting consistently on the actual process of lawmaking?

Ella: I can maybe give some of my own personal insights on this, because I've been in the EU policy space for about two years now, and it is incredibly complex. Even when you try to break it down to its bare bones, there are still so many different rules and exceptions, and it is a difficult thing to get your head around. But we could take even a step back and think about how a lot of people in the EU, I believe, don't even really know the fundamentals. They might see the EU as this homogeneous bloc and not even know that there are different institutions, and how that, at least in theory, helps us achieve democratic consensus. Because actually, what I've found personally as I've learned more and more is that, for all its faults, the EU does a really good job of something that is exceptionally difficult and complicated. There are now 27 countries being represented, and I think something like 340 million people. So how do you find rules that are fair and representative of the majority of those people? I think, both from within the institutions and absolutely from journalists, you could help us communicate that, and civil society as well. We try to do that, but it can be easier to appeal to the people who already get it, and it's very complicated policy analysis. So at EDRi in the last few years we've been really ramping up our campaigning capacity to reach people more broadly, to show why these processes matter and, hopefully, to convince people that they're interesting. I think we should work together to dispel some of the presumptions that the EU is this faraway bureaucratic institution that's irrelevant to most of our lives, and actually see how everything each of us does every day has a connection to what happens in the halls of Brussels. Things like the European Citizens' Initiative we've been running for most of this year are one way we're trying to help people see how their voices can directly feed into the legislative process, because the European Citizens' Initiative is an official tool, put at the disposal of EU citizens by the European Commission, that allows us to actually make a demand for a piece of law. And that's quite a powerful and unique thing. But what we've found as we've been running our ECI is that almost nobody in all of the EU knows that they exist, let alone that we all have a right to sign them or to create our own. We've had some coverage from the press about this ECI, but I think so far, in the mainstream media, we have yet to properly demonstrate to people why it is exciting and why it is relevant. So yes, it's on all of us: we could absolutely do more to show how powerful and interesting this space actually is.

That was the netzpolitik.org podcast. Many thanks to Sarah Chander and Ella Jakubowska from European Digital Rights. The interview was conducted by Alexander Fanta and Chris Köver. The episode was produced by me, Serafin Dinges. Our theme song is by Trummerschlunk; additional music by me.

You can, of course, find more information on today's topic at netzpolitik.org and in the show notes.

As always, many, many thanks to our donors. Netzpolitik is funded entirely by you; for the freedom and independence that gives us, we say thank you! If you would also like to donate a few euros, you can find all the ways to do so at netzpolitik.org.

That's it for today. See you in a few weeks!


Help out! With your financial support, you back independent journalism.

  continue reading

82 Episoden

Artwork
iconTeilen
 

Archivierte Serien ("Inaktiver Feed" status)

When? This feed was archived on December 07, 2021 08:57 (2+ y ago). Last successful fetch was on November 06, 2021 06:40 (2+ y ago)

Why? Inaktiver Feed status. Unsere Server waren nicht in der Lage einen gültigen Podcast-Feed für einen längeren Zeitraum zu erhalten.

What now? You might be able to find a more up-to-date version using the search function. This series will no longer be checked for updates. If you believe this to be in error, please check if the publisher's feed link below is valid and contact support to request the feed be restored or if you have any other concerns about this.

Manage episode 305285214 series 146598
Inhalt bereitgestellt von Netzpolitik Podcast – netzpolitik.org. Alle Podcast-Inhalte, einschließlich Episoden, Grafiken und Podcast-Beschreibungen, werden direkt von Netzpolitik Podcast – netzpolitik.org oder seinem Podcast-Plattformpartner hochgeladen und bereitgestellt. Wenn Sie glauben, dass jemand Ihr urheberrechtlich geschütztes Werk ohne Ihre Erlaubnis nutzt, können Sie dem hier beschriebenen Verfahren folgen https://de.player.fm/legal.
Eine Europafahne vor blauem Himmel. In der Mitte der Fahne lässt sich leicht ein künstliches Gehirn erkennen.

https://netzpolitik.org/wp-upload/2021/10/npp-239-ai-act.mp3

Das Europaparlament will Künstliche Intelligenz stärker regulieren. Das geht aus einer Resolution hervor, über die Anfang Oktober abgestimmt wurde. Vor allem im Strafvollzug soll der freien Nutzung von KI ein Riegel vorgeschoben werden: Datenbanken zur Gesichtserkennung und Predictive Policing sollen beispielsweise verboten werden. Algorithmen müssen transparenter gemacht werden.

Zwar handelt es sich bei der Resolution nur um einen nicht bindenden Vorschlag. Eine Schlagrichtung lässt sich darin dennoch erkennen. Wir haben mit zwei Expertinnen von European Digital Rights gesprochen: Sarah Chander und Ella Jakubowska. Die beiden sind vom Inhalt der Resolution positiv überrascht – wenn auch mit Vorbehalt. Im Podcast erklären sie Alexander Fanta und Chris Köver, warum die Regulierung von Künstlicher Intelligenz gerade jetzt so wichtig ist und warum wir es hier mit dem bisher ambitioniertesten Vorstoß in diese Richtung zu tun haben.


Hier ist die MP3 zum Download. Es gibt den Podcast wie immer auch im offenen ogg-Format.


Links und Ressourcen

Unsere Gäste


Unseren Podcast könnt ihr auf vielen Wegen hören. Der einfachste: In dem eingebundenen Player hier auf der Seite auf Play drücken. Ihr findet uns aber ebenso auf iTunes, Spotify oder mit dem Podcatcher eures Vertrauens, die URL lautet dann netzpolitik.org/podcast/. Wie immer freuen wir uns über Anregungen, Kritik, Lob und Ideen, entweder hier in den Kommentaren oder per Mail an chris@netzpolitik.org oder serafin.dinges@netzpolitik.org.

Quellen

Interview von Alexander Fanta und Chris Köver. Moderation und Produktion von Serafin Dinges. Außerdem dabei: Övünç Güvenisik.


Gesamtes Transkript

Serafin: Hey, Serafin hier. Ihr hört den Podcast von netzpolitik.org.

Das europäische Parlament befürchtet, dass der Einsatz künstlicher Intelligenz die Freiheitsrechte der Bürger:innen gefährdet. Anfang des Monats hat das europäische Parlament eine Resolution mit Details zur Regulierung von KI verabschiedet und erstaunlich klare Worte gefunden. Es geht um den sogenannten AI Act.

Gesichtserkennung soll darin zum Beispiel verboten werden. Predictive Policing, also Polizeiarbeit auf der Grundlage von künstlicher Intelligenz, bekommt ebenfalls eine Absage. Algorithmen sollen transparent gemacht werden.

Die Abstimmung im Europaparlament hat medial nicht besonders viel Aufmerksamkeit bekommen. Aber wenn die Vorschläge des Parlaments übernommen werden, könnten wir es mit der weltweit ersten und weitreichendsten Regulierung von Künstlicher Intelligenz zu tun haben.

Heute: Alexander Fanta und Chris Köver haben mit Sarah Chander und Ella Jakubowska gesprochen. Die beiden arbeiten für die Organisation European Digital Rights – kurz EDRi. Die Expertinnen für Künstliche Intelligenz und Biometrische Überwachung
haben uns erzählt, warum die Abstimmung so wichtig war und was genau drin steckt.

Leider sprechen die beiden kein Deutsch, deshalb ist der Großteil unseres Podcasts heute auf Englisch. Ihr findet mehr deutschsprachige Infos auf netzpolitik.org

Serafin: Alex, worum ging es bei der Abstimmung?

Alex: Das EU-Parlament setzt sich seit Jahren auseinander mit Biometrie und der heiklen Frage, wie kann Europa ethisch mit dieser Frage umgehen. Zumal, wenn es in einen heiklen, sensiblen Bereich, wie Strafverfolgung hineingeht. Und was das europäische Parlament vor Kurzem gemacht hat, ist, es hat eine Abstimmung über einen Bericht gemacht, der zwar in sich noch keine rechtliche Wirkung hat, aber der als prägend für spätere Entscheidungen gilt. Und in diesem Bericht hat das europäische Parlament gesagt, dass es klare Schranken geben muss für den Einsatz von biometrischer Überwachung in Echtzeit im öffentlichen Raum. Sarah von EDRi hat die Wahl auch als eine Art Lackmustest bezeichnet.

Sarah: we called this a litmus test for the general sense of the European Parliament on some of the main issues related to data protection, privacy, anti-discrimination, freedom of expression. All of these issues vis a vis AI. And although there have been a number of sort of parliament reports on this, and previously they were very general. And now this report specifically relates to the positions parliament and law enforcement uses of artificial intelligence. And the reason I think the law enforcement focus of this report is is a good litmus test for our rights and freedoms is because we then basically within basically deciding, OK, what is the parliament’s view of how far invasive technology should be able to be used by the state, particularly in cases where people, individuals, groups, communities don’t have very much control over it? We’ve seen other European Parliament and policymakers discussing A.I. in consumer contexts, A.I. in various different contexts where actually you might have some say basically over or at least some knowledge about whether AI systems are being used on you.

Serafin: Warum war denn diese Abstimmung sowas besonderes? Was war anders als bei bisherigen Debatten um KI im Europaparlament?

Alex: Das EU-Parlament hat immer wieder gerungen mit seiner Position. Erst vor einigen Monaten gab es eine Abstimmung wo das Parlament gesagt hat, es soll ein Moratorium geben auf Gesichtserkennung im öffentlichen Raum, aber sich nicht durchringen konnte, die Sache komplett abzusagen. Was wir jetzt sehen, ist, dass das Parlament immer stärker in die Richtung geht, aus ethischen Gründen Gesichtserkennung eine völlige Absage zu erteilen und das ist die Debatte, die wir dann auch haben werden, mit der eigentlichen Verordnung, wo das Parlament in einigen Monaten seine Position festlegen wird.. Ein gutes Beispiel dafür ist „Predictive Policing“, also der Einsatz von KI, um mögliche Verbrechen vorherzusagen.

Sarah: There have been many studies about predictive policing and we’ve seen that actually they disproportionately often target for over policing communities of poor working class communities, racialized communities, places where migrant people with migrant backgrounds live. And so it’s a fundamentally question about actually should the police be able to make decisions on the basis of things that haven’t happened yet? Precrime behavior department and we thought this was very progressive in this very recent report, said that they oppose the use of AI by law enforcement authorities to make behavioral predictions on people on the basis of a bunch of criteria so including historical data and information about our past behavior, but also location and many other things like that. So this was a really strong stance in essence against predictive policing, which is one of the main reasons we thought this was a very progressive step and a good sign for the future legislative process.

Chris: But. There’s also another point that you that you mentioned that is in there, it’s the ban of biometric mass surveillance. So I mean, there’s been a lot of discussion going on about facial recognition used by the police. But I mean, actually, we’re not just talking about facial recognition, right? Because there’s other other ways to to use biometric surveillance. So what is the stance that the parliament is taking on this issue?

Ella: So absolutely, as you say and as a EDRI and the Reclaim Your Face campaign, we’ve actually been campaigning for a long time against any use of facial recognition or other systems that recognize the way that people walk. Other things about their behaviors and bodies when it’s used in ways that lead to mass surveillance. So that could be the use of facial recognition drones on people protesting or other kind of biometric quote-unquote smart systems in parks or shopping centers or schools. And so what we’ve seen in this new report from the parliament is is really it’s democracy in action because we’ve had 62000 people from across the EU and 65 civil society organizations from across a really broad range of justice and human rights issues, all calling for a ban on biometric mass surveillance. And that’s exactly what we’ve seen now being passed in this parliamentary report. They’ve explicitly called to ban biometric mass surveillance in law enforcement contexts.

Ella: the parliament really clearly said that they are refusing to water down this protection of fundamental rights against biometric mass surveillance because there were attempts from certain minorities in the parliament to use fear or to use a lot of the mythology around amazing, shiny new technologies to try and say that there shouldn’t be limitations on police use of facial recognition and other biometric mass surveillance systems. And the parliament threw that out, they said no, we are very clearly going to put people and people’s rights first. Secondly, they’ve also reversed the burden of proof back to where it should be. So we’ve had a lot of calls from certain parts of the parliament from industry saying, Let’s just try this stuff out. Let’s let’s not regulate it. Let’s let it run its course as if it’s somehow something natural and then work out what to do. And this report again is saying, No, absolutely not. That’s not how human rights work, actually. You need to prove that this thing is not going to harm people, not going to unduly restrict their rights to privacy free expression nondiscrimination. And if you can’t prove that, then you need to ban it. And that’s exactly what the EU fundamental rights framework says. We need to do.

Ella: When you look at the projects that are getting millions and millions of euros of public money to do with biometrics, it’s really horrifying stuff. It’s attempts to create lie detectors. Despite the fact that there’s no scientific basis for that. It’s about attempts to criminalize people on the move, attempts to see if people are suspicious. And so the whole ideology underneath that research and innovation can be really, really sinister. So the parliament again is saying you shouldn’t be researching and innovating with people’s rights and freedoms. Privacy equality are not something to put into a lab and test with. People are not guinea pigs and law enforcement should not have less scrutiny and be able to do whatever they like without checks and balances. They should have more scrutiny. And we’ve seen around the world how when they’re unchecked, the use of these technologies by police can be some of the most profoundly harmful uses of technology in society.

Chris: Yeah, I think it’s really interesting that you mentioned the the lie detector research think that that is technology that has been researched to to border control and who’s entering the European Union, right?

Ella: Yes, that’s right, that’s been a really notorious project called I Border Control and everything from the the reason why they’re trying to do it, which is to detect if people on the move are supposedly lying, if they’re untrustworthy is really, you know, not something that we should be outsourcing to technology. It’s already there. Much wider structural problems in the immigration and border space in that it’s a very discretionary process. It’s very hard already to know if people are being treated fairly and for those of us in civil society, for example, to contest unfair decisions. So when you add on top of that, already very murky and opaque context where people are put in a very vulnerable position and very disempowered compared to the authorities are facing. You add technology on top of it that doesn’t have a scientific basis that’s been shown to be very prejudiced, has been shown to really interpret that racialized people, for example, are angrier than white people, which is a piece of research that we saw in the US with a lot of the most common algorithms. We’re starting from a place of a really, really unjust assumptions. And by using that technology, we’re just amplifying that. But we think it’s also really important to note that even if there were a scientific basis for this sort of thing, which that isn’t, but even if there were, we still shouldn’t be doing it because people in that in the migration context deserve to be treated fairly, to have proper processes, to have things happen in a manner that they can understand that they can contest and trying to make a lie, you know, an automated lie detector. A part of that just tramples on so many of the really important principles of due process and the right to good administration, which are vital for people in that position to be able to have any sort of a say in what’s happening to them. And as if all that’s not bad enough that the whole AI border control experiment has also been done in a completely untransparent way to the point where an MEP, Patrick Bryant, has actually taken the commission to court for failing to disclose almost any information about what they’re trying to do here with public money.

Chris: Yeah, I think it’s a great example because it shows what or how this this technology is being deployed or it could be deployed in the future in the European Union. If you know, if we leave it entirely unregulated because a lot of the examples that we hear that are really dystopian from the US context, but I think it’s a great example to show now what is already happening in the context of the European Union as well.

Ella: Totally. And you know, I do want to make the point that we often think, Oh, China, the US, that’s where these really scary things are happening. But actually, there’s a lot of it going on in the EU as well. We’ve released a lot of reports looking at the evidence and in the Netherlands, for example, that already using these automated AI based systems to predict if people are being aggressive and using that to deploy police vehicles, you know, who are primed ready to intervene in what they think is a violent conflict, which is for the people that are being marked out as aggressive. Could you know, really, you could have profound impacts on them? And if you are somebody you know, often a racialized or minorities person that already faces disproportionately high amounts of police intervention, you could actually really be at home with this. And these trends are also replicated in other uses of biometric mass surveillance we’re seeing in Germany, for example, people going in and out of gay bars being surveilled and having their faces used to identify and track them. And then in the broader sense of other uses of AI and Sarah might be better equipped to talk about this than I am. We’ve also seen some really serious threats to to to fairness to the delivery of social services and welfare provision. So maybe, Sarah, you want to jump in on that?

Sarah: All of these systems fundamentally have a first point of contact, which are the marginalized, the poor criminalized in our society.

Sarah: So one particular case study that we’ve been keeping an eye on for a time and the a number of risk assessment tools that are used in the Netherlands in terms of social services, which is very much overlapped with policing. So systems which have been designed to test whether young people are potentially either posing a risk of being criminals in the future, posing a risk of being social nuisances, or being posing a risk of having child protection issues. All of these have basically involved a massive growing infrastructure of collecting data about particular young people. And when we say particular young people, we mean kids from working class backgrounds, kids from families with a migrant background, kids that are racialized. So the broken system in the Netherlands basically used a really wide range of data, including whether the kids had high school absences, whether the kids were from families with single mothers and also whether their kids were in areas that were known to be previously high crime. And when we say known to be high crimes, it means what do they have an existing police presence and therefore a high number of arrests? And also, which crimes are we talking about? So often this these systems focus on crimes that we associate with poor people. So burglaries and fights in the public space as opposed to financial crimes crimes that we associate with rich people, white collar crimes, as you say. So essentially, what this system was doing was collecting information, all of this social information and transferring it into the criminal hands. So making it a criminal matter to assess people’s social situation as if this is a good indicator of whether these children were future criminals and therefore the police need to keep that eye on them. Often this is framed in very paternalistic and framing, and so we had arguments for these technologies such as that. Well, these are this is a point of surveillance, yes, but it’s of the idea that the police can keep an eye on these kids to keep them out of. Tribal collectives on the ground there have been saying that these systems have been making children’s lives a nightmare. Children as young as 14 and sometimes even younger been basically profiled many times in the week every day. The psychological effects of that are being felt to be a criminal, even if there’s nothing that these these kids have ever done and really instilling the notion that from a very young age, certain populations simply need to be controlled by the police.

Sarah: what we’re doing when we think about these systems is also not just expanding the reach of the criminal justice system, but we’re also increasingly involving private companies into functions that were previously public functions. So many of these systems were have been designed by a private company. So PredPol for Existence is one of the most famous companies associated with predictive policing systems. Basically, from the point of design, even up to the point of delivery, the private company makes huge decisions about how the system works, what type of data it uses and how the algorithm functions in order to generate these risk pools. In essence, then what we have is a broader problem beyond discrimination, a broader problem of who controls the decisions made in the public space, in the public function, the more and more we see air used in the criminal law space, the more and more you see air used in the social services space and in migration control spaces. We see the corruption of public functions into the realm of the public, into the realm of the private sphere. And that should be a much broader concern for us in terms of democracy and accountability of our public actors. If the decisions that are being made in the public sphere about social welfare, policing, migration, control, in essence, very determined by private technologies, innocent, how can we as individuals challenge them in democratic means? This is another question beyond these individual questions of our rights that we need to challenge because we cannot necessarily easily challenge them in a legal form that fundamentally economic and political questions.

Alex: can you tell us a little bit about what influenced the commission in actually writing the draft? Do you think that the industry that certain industry groups, for instance, had a had a large influence on the way that the draft has been made?

Sarah: So the at the comission stage, the the AI Act followed the regular process, right? So there was a public consultation on the European Commission’s before it before it released the the EC. So you had citizens, you had academics, you had civil society and you had industry inputting into that. And I think always generally, there’s a question about the role, particularly of big tax and lobbying on digital legislation at the EU level. And if you look at and the report by the Corporate Europe Observatory, you really see very astounding figures about just how much Big Tech spends on lobbying the European Union institutions. I don’t think the act is an exception to that, particularly because you’ll see from civil societies level in the Corporate Europe Observatory is mapping. You’ll see the number of meetings that Big Tech have access even at the European Commission level to take an influence to influence this type of legislation. The same with the Digital Services Act and the Digital Markets Act. So already like we’re at a power imbalance because civil society just cannot spend the same amount of money that Big Tech can and therefore has more resources. But I would say also more access to the policymaking institutions and that really, at least for me, it really plays out in the language that is used, the ideology that is put forward around the act. So at the same time, we are supposed to accept that this act has fundamental rights at the center and that is designed to promote a human centric approach to AI regulation. But also it’s a very heavily dominated by industry language. So it’s very huge claims consistently made about the need not to stunt innovation, the need to promote the uptake of AI and all of this coming from this very strong ideology from industry that we cannot be too strong on limiting how technology is used because this will put Europe at a disadvantage. So then coming into what I said previously about the geopolitical background to all of this such that the Europe wants to take a role as a leader in AI to be able to compete with, particularly with China. A lot of industry discourse has played on that and have emphasized the need for the Europeans. The EU legislation not to be too harsh, particularly using arguments that the U.S. and Europe need to unite. Otherwise, they will have already lost the battle with China, which was very much the language of the home of the former Google CEO, who talked earlier this year against the Act, even in its current form. So really, we’re seeing like an intense influence, I would say, an intense influence of industry over this process, which means then you sort of left with this to reality. The European Union institutions are asking us to accept that we can have both. We can have the complete. And we cannot in any way limit innovation. But we also we also put forward fundamental rights. We would like, I think, to to find this middle ground. But I think on the some of the issues that we’ve put as red lines, facial recognition in the public space, predictive policing systems, A.I. systems that are designed to detect our emotion emotions despite the false signs that they are based on. These are red lines that you cannot innovate out of because simply, they fundamentally compromise the basic principles of freedom of anti-discrimination, et cetera. And so I think that the industry influences is massive on it and is also pushed the discourse to be somewhat this contradictory, particularly on the fundamental right side.

Alex: If I just may continue in the vein of it, I was wondering how the how member states actually influence and shape that discourse because a lot of the kind of representatives of European industry also now pushing in today is fear and feel that they need to kind of follow the trail that the American companies have have broken. So do you do you feel that? This particular member states are pushing pushing this line of having more innovation at the expense of the human rights focus.

Ella: If if I jump in here, I think absolutely. As Sara said, there is this very strong industry influence that absolutely doesn’t just happen at a European Commission level. In some ways, that happens even more strongly and often more unseen for those of us in the Brussels bubble when it does happen at the member state level. We’ve seen both from some individual commissioners, but also from a number of member state governments. This desire to be the first to have positive air strategies really without thinking about what it means for society, but also what it means for the environment, for example, if we see churning out more and more A.I. as the end goal in itself. And on a personal level, I used to work in the tech sector and I know that data scientists, developers, they often try things out because they are cool, because it’s it’s amazing to see what you are able to do rather than because you need it or because it’s the right thing to do. And so I’ve seen from the other side, you know, the need to make sure that when things are going to market, when things are being made available in the EU, that they are the right things and not just the things that one person with no understanding of human rights and anti-discrimination thought was cool. Because when we let that happen, we’ve seen studies come out around the world about AI facial recognition systems that claim to be able to predict somebody’s political orientation and even their sexual orientation based on their face structure and their expressions. And that’s something that goes all the way back to Victorian era, discredited race, science and phonology. With this new veneer of scientific credibility. Behind this, this AI impulse. So we really do know the the the cliche. I think it’s it’s Batman. Maybe the just because you can doesn’t mean you should. And I think it’s really, really important that we we remember that and I do want to give credit where credit is due to the European Commission because what they’ve tried to do here is incredibly difficult. This is the first comprehensive AI proposed legislative proposal in the world, and what they’ve done is approached it in the way that you would approach consumer product safety. And we have found that there are some key ways than in the act itself that it’s not really worked. It’s not been able to fit to the context of AI and the fact that using AI, as Sarah was saying earlier, can have a really transformative impact on our societies and sometimes in ways that are really negative for us as people. So you know that this act has clearly not been formulated as a piece of fundamental rights legislation, and that’s really where it falls down and on the facial recognition and biometrics point that we’ve talked about quite a lot. Although there’s some cautiously positive bits on biometrics, there are elements where we could even see it as deregulation, that this act is undermining important principles that we already have in fundamental rights law in the EU. And so really, at that kind of highest level of this framework that we’re looking at for A.I., there are some real significant problems, but it just isn’t prioritizing people.

Alex: Would you say that journalists could do a better job of helping to accompany the process and reporting consistently on the actual process of lawmaking?

Ella: I can maybe give some of my own personal insights on this, because I’ve been in the EU policy space for about two years now, and it is incredibly complex. Even when you try to break it down to its simplest, its bare bones, there are still so many different rules and exceptions, and it’s a difficult thing to get your head around. But I guess we could take even a step back and think about how a lot of people in the EU, I believe, don’t even really know the fundamentals. They might see the EU as this homogeneous bloc and not even know that there are different institutions, and how that, at least in theory, helps us to achieve democratic consensus. Because actually, what I’ve found personally, as I’ve learned more and more, is that for all its faults the EU does a really good job of something that is exceptionally difficult and complicated. There are now 27 countries being represented, and I think it’s something like 450 million people. So how do you find rules that are fair and representative of the majority of those people? So I think people coming from within the institutions, but absolutely journalists too, could help us to communicate that, and civil society as well. We try to do that, but it can be easier to appeal to the people who already get it, and it’s very complicated policy analysis. So at EDRi in the last few years we’ve also been really ramping up our campaigning capacity, to be able to reach people more broadly, to show why these processes matter and to convince people, hopefully, that they are that interesting. So I think it’s about working together to dispel the presumption that the EU is this far-away bureaucratic institution that’s irrelevant to most of our lives, and actually seeing how everything each of us does every day has a connection to what happens in the halls of Brussels. And things like the European Citizens‘ Initiative that we’ve been running for most of this year are one way we are trying to help people see how their voices can directly feed into the legislative process. Because the European Citizens‘ Initiative is an official tool, made available to EU citizens by the European Commission, that allows us to actually make a demand for a piece of law. And that’s quite a powerful and unique thing. But what we’ve found as we’ve been running our ECI is that almost nobody in all of the EU knows that they exist, let alone that we all have a right to sign them or to create our own. And while we’ve had some coverage from the press about this ECI, I think so far in the mainstream media we are yet to properly demonstrate to people why it is exciting, why it is relevant. So I think it’s on all of us; yeah, we could absolutely do more to show how powerful and interesting this space actually is.

That was the podcast from netzpolitik.org. Many thanks to Sarah Chander and Ella Jakubowska of European Digital Rights. The interview was conducted by Alexander Fanta and Chris Köver. This episode was produced by me, Serafin Dinges. Our theme song is by Trummerschlunk; additional music by me.

You can find more information on today's topic, as always, on netzpolitik.org and in the show notes.

As always, many, many thanks to our donors. netzpolitik.org is funded entirely by you; for the freedom and independence that this gives us, thank you! If you would also like to donate a few euros, you can find all the ways to do so at netzpolitik.org.

That's it for today. See you in a few weeks!


