I am happy to have joined the project Feminist Approaches to Labour Collectives (FemLab.co), a seed-funded initiative by the International Development Research Center (IDRC), Canada. As part of the Erasmus University Rotterdam’s team, I conduct a stakeholder analysis and coordinate the project’s website and social media activities. Learn more about the project here:
“The user is always right” has been a dominant Silicon Valley mantra for over a decade. It is not merely a question of superficialities such as interface design; it deeply shapes the internet and the policies around it. The focus on users has been interpreted as democratizing, but at the latest with scandals like the Snowden revelations or Cambridge Analytica’s attempts to manipulate US elections through Facebook, this shiny image has faded. Still, the internet economy largely rests on user trust, and the burden of assessing the risks of platform usage falls mostly on the users themselves.
My colleague Patrick Sumpf and I analyze this problem with the help of systems theory in a recent German-language article for the renowned sociology journal Soziale Welt. The text appeared in a special issue on Digital Sociology edited by Sabine Maasen and Jan-Hendrik Passoth:
König, R. and Sumpf, P. (2020): Hat der Nutzer immer Recht? Zum inflationären Rückgriff auf Vertrauen im Kontext von Online-Plattformen. Soziale Welt, Sonderband 23, Soziologie des Digitalen – Digitale Soziologie?
Online platforms provide the infrastructure for quick and simple exchange between various interaction partners (e.g. users, developers, advertisers). On the one hand, the platform-based Web 2.0, with its easy-to-use interfaces, made the web accessible to the broader, less technically inclined population. On the other hand, this led to an increased black-boxing of the net’s underlying socio-technical complexity. At the same time, risks and uncertainties are largely transferred to users, who are expected to make informed decisions when agreeing to terms of service (and further sets of rules). The system emerging here thus rests fundamentally on trust in and by users. This reliance on trust is further reinforced in the age of Big Data and the Internet of Things, as the digital and the physical world increasingly merge and data flows become even less transparent. We analyze this development from the perspective of trust research and conclude that user trust has taken on inflationary proportions – with far-reaching implications for the governance of platforms and for digital sociology.
I’m happy to have contributed to an interesting special issue in First Monday edited by Payal Arora and Hallam Stevens on “Data-driven models of governance across borders”. My article focuses on lay perspectives on big data and is based on three citizen conferences we conducted in the ABIDA-project. You can read it open access here.
Together with colleagues from the University of Münster I’m editing a special issue on “The Tracked Society. Interdisciplinary Approaches on Online Tracking”, which includes a workshop on June 21st/22nd 2018 at the University of Amsterdam with a keynote by Anne Helmond, Fernando van der Vlist and Esther Weltevrede (Digital Methods Initiative, University of Amsterdam). Funding for travel costs is available. See the full call here (PDF).
05.03.2018 – Deadline for abstracts
26.03.2018 – Selection of accepted abstracts
04.06.2018 – Deadline for drafts
03.09.2018 – Submission of full papers to editors
01.10.2018 – Feedback by editors
02.11.2018 – Submission of revised full papers to journal
The recent Amsterdam Privacy Conference was accompanied by a delicate debate: How can a conference on privacy be taken seriously if it is sponsored by some of the most prominent violators of privacy? Facebook stands out here as a “Diamond sponsor” but there were more troublesome companies involved, including Google, Microsoft and Palantir (check the conference website for the full list). This resulted in a small shitstorm during the last few days. Indeed, such practices should be critically discussed. Unfortunately, most of the online grumbling I read so far does not contribute much in this regard. Here´s why.
An easy target
The problematic nature of the liaison seems obvious: How can a conference on privacy be neutral if it is financed by the very actors who threaten privacy? To some the problem was so clear, it didn´t even need to be pointed out:
— Aral Balkan (@aral) October 23, 2015
tweeted Aral Balkan, an activist and designer, in response to a picture with the sponsors’ logos. The tweet received quite a bit of attention, including a citation in a Motherboard article. Many applauded him, e.g. Sidney Vollmer, who found the list of sponsors “What the fuck, indeed”. The question was raised: “Is #PrivacyWeek [the conference’s official hashtag] legit? Or white washing by co’s that really just want to get rid of this privacy thing?” (@meneerharmsen). Yes, the alliance is an easy target and I understand the suspicion. I myself was, well, let’s say surprised, when I saw the list of sponsors for the first time. By the way, that was months ago and did not require any investigative skills or insider knowledge. The list screamed at you as soon as you visited the APC2015 website.
However, I also noticed the numerous distinguished speakers who did not fit the whitewashing hypothesis at all. Along with prominent academics who can hardly be accused of being lax on privacy issues (e.g. Helen Nissenbaum, Viktor Mayer-Schönberger, Julie E. Cohen), even Max Schrems gave a keynote at the conference. In case that doesn’t ring a bell: Schrems is probably Facebook’s current enemy number one, as he successfully filed a number of far-reaching court cases against the Internet giant with his initiative Europe-vs-Facebook. This was reason enough for me to take the conference seriously and to draw my own conclusions by visiting it. Unfortunately, many of the critics didn’t seem to find it worthwhile or necessary to even listen to the speakers. To them, the sponsoring itself was reason enough to dismiss the entire conference, including its organizers, speakers and content. The conflict was portrayed as a fundamental contradiction that cannot possibly be bridged:
No, sorry, you don’t get to be sponsored by corporations that erode privacy & human rights *and* say you care about privacy & human rights.
— Aral Balkan (@aral) October 26, 2015
Guilt by association – a cheap tactic
This simple “guilt by association” tactic, which seems intuitively right to many people in the debate, comes with a significant problem: It dismisses everybody and everything under the umbrella of the event. That’s not only a prejudiced move, it’s also unfair, as it illegitimately accuses even the most critical voices of whitewashing. Moreover, it’s a cheap tactic that can be used in many contexts. To illustrate that, let’s take the critic Aral Balkan as an example. Should we scream “WTF?!” because Aral acts as an outspoken Facebook opponent while he himself busily feeds the object of his criticism with a constant flow of data (see his FB profile here)? Oh, and he was also not bothered by the fact that Google, Microsoft and IBM partnered with re:publica, an influential annual conference in Berlin where he gave a talk. I could probably find more examples of this kind, but let’s not go down that dirty road any further. Obviously, I want to discredit neither Aral nor re:publica here (please forgive me for using you in this example). My point is that the tactic of simple and quick accusations doesn’t get us very far. It targets the wrong people and indiscriminately dismisses even the most valuable contributions.
Instead, I believe we need a more honest debate about the threats connected to Internet players. One that’s not confusing intuitive bashing of an easy target with critical thinking. One that asks the right questions and doesn’t fall for the most obvious (but not necessarily most accurate and helpful) answers.
In defense of reasoning
So what are the right questions and answers? I wouldn’t be so arrogant to claim I have them. But here are some ideas of how to get to them:
- Let’s try to base our judgements on observations, not prejudices.
I obviously haven’t attended all sessions of the APC2015 but the ones that I sat through surely can’t be accused of Facebook whitewashing. This includes a session organized by Facebook itself in which I learned the expression “data rape” for the omnipresent service-for-profile business model. Sure, the sponsoring still might have its questionable implications. But then let’s please talk about them instead of unfounded speculation and random accusations. Sidney Vollmer assumes that the sponsoring had no influence on the content, just to add: “But we don’t know that for sure, and trust is the thing at play in these matters.” Well no, not if you actually had the chance to come and witness the event for yourself. Sidney had this chance but decided not to come as he “couldn’t shake this queasy feeling, after seeing the sponsors.” In other words, he preferred to rely on his prejudices instead of getting his own picture of the event. Too bad, I´m sure his piece on the issue would have been even more worth reading.
- Let’s be truly skeptical.
The actual problems are more complicated than the cheap “Facebook is buying scholars” rhetoric suggests. For example, why do academic conferences and institutions need to rely on such sponsors in the first place? What does it mean when the public sphere is more and more moving into a commercial space? (A question that was actually debated during the conference, by the way.) If the sponsoring had a bad effect, what exactly is it that´s problematic? What could actually be observed in this regard?
- Let’s have an honest and open debate!
Yes, the big players have a fundamental impact on our lives and therefore we need to observe and also regulate them carefully. But I don’t believe in the simple and one-sided answers that many critics offer. As a matter of fact, many of them don’t seem to take their own arguments too seriously, otherwise they wouldn’t be heavy users of the very services they bash day and night. Let’s be honest with ourselves. We’re in a dilemma. We love and fear the new technologies. “Going dark”, i.e. total non-usage, is not an option for most of us. Thus, we need a way to deal with them constructively. I don’t find it wrong to include the companies in the conversation along the way. If they finance the debate without forcing an agenda on us, even better.
Let´s move on
Nope, Facebook didn´t just “buy” the Amsterdam Privacy Conference, at least not in the sense that many implicitly or explicitly suggest. The APC2015 was a successful conference with critical and enlightening contributions and debates. This is exactly what we need, not thoughtless shitstorms which don´t even help the critics. By drawing on cheap tactics like guilt by association, they weaken the case they might actually have. There are surely good arguments against conference sponsoring but mere association is not one of them. To be fair, some of the points worth discussing have been brought forward already. Let´s stick to them and let´s remove the fog that has been created by hasty accusations.
More importantly, let´s talk about the actual content of APC2015! This is probably where I agree the most with Sidney Vollmer: No matter whether there was a direct impact on the conference’s content, the sponsoring surely left its mark on the credibility, which did not help the cause of privacy. Now we maul each other about the (il)legitimacy of conference sponsoring instead of talking about the actual topic of the conference.
Maybe we should just treat the whole thing with a little bit more coolness and move on. I loved Zizi Papacharissi´s mocking comment on Facebook´s rather awkward giveaway:
— Zizi Papacharissi (@zizip) October 25, 2015
Facebook, you need a bit more than chocolate and a conference to convince us. Sponsoring critics, let’s get back to the actual issues that we urgently have to discuss. And yes, please come to APC2016, no matter who the sponsor will be.
Disclaimer: I´m not affiliated with the organizers or Facebook nor did I receive any payments from them. However, APC2015 provided a pretty cool collection of goodies, including the now famous Facebook chocolate but I haven´t opened it (yet). I´m not writing this to please or insult anybody. I simply hope for a more constructive and truly critical debate.
A few weeks ago, I visited the Privacy and Freedom conference at the Center for Interdisciplinary Research (ZiF), Bielefeld University, which was very informative and insightful. It was organized by the project Transformation of Privacy (funded by Volkswagen Foundation). Like our new ABIDA project, it combines very different disciplinary perspectives including law, political science, communication science and informatics to tackle the challenges resulting from ICT – here with a particular focus on privacy and freedom.
The ambivalence of privacy
The two-day conference brought together scholars with diverse backgrounds, giving a broad but also detailed and differentiated look into the relation between privacy and freedom, today and in the past. Although not always easy to follow, accounts from the political theory angle (delivered by Dorota Mokrosinska, Andrew Roberts, Sandra Seubert, the keynote speaker Annabelle Lever and, not to forget, the critical audience of around 50 people) made very valuable contributions to a deeper understanding of the interplay of the various important factors to consider when we think about these terms. Privacy is relative, historically but also socially. Towards the end of the conference, Rüdiger Grimm referred to a powerful example to illustrate this: Claude Monet’s painting “Le Déjeuner”.
Today, it is difficult to imagine that this scene could be controversial in any regard. But the jury of the Salon in Paris rejected it because such “intimate” moments were regarded as too private for a public audience. This seems hard to believe in an age in which people share every smallest moment of their lives with a (potential) mass audience via social media.
Privacy is also relative in a social sense, differing from context to context, as Dorota Mokrosinska pointed out: We have no problem revealing our naked bodies to our doctors but wouldn’t show them our bank account. At the same time, bankers may know all the details of our financial life, while most of us would be rather reluctant to strip naked in front of them. This is exactly the challenge we are facing in the context of Big Data, as the new methods and technologies collect, combine and correlate ever more types of data that used to be either nonexistent or not connected.
Privacy and power
However, it is not enough to conceptualize privacy as individual secrets that ought to be protected from others. While in some contexts privacy may be an enabler of freedom by providing personal autonomy it can be a tool for repression in others. For example, women have been systematically kept away from public life by tying them to the privacy of their households. Therefore, we can never talk about privacy without talking about equality as Annabelle Lever pointed out in her passionate keynote. Privacy comes with costs and benefits but these are very unequally distributed, Lever explained. Quite clearly, this means we need to consider a classic sociological topic if we want to understand privacy and its implications: power structures. Sandra Seubert also referred to this in her talk: Drawing on Adorno and Horkheimer´s reflections on the cultural industry she described how users of popular web platforms contribute to stabilizing power structures. Since these platforms have penetrated more or less all areas of life, resistance is almost impossible and the individual is practically forced to co-produce the power of external forces. Nevertheless, I agree with a comment from the audience, reminding us that the power structures in the online context do not just reproduce “old” power structures as they also give new power to the individual (e.g. when individuals threaten others´ privacy by publishing confidential material about them).
Empirical perspectives on privacy
A strength of the conference was that the audience wasn’t left alone with these important but also partly highly abstract theoretical reflections on privacy. Sessions on the communication and information science perspectives (and beyond) were helpful here to contextualize these thoughts with empirical, experimental and technical insights. The various examples showcased how our data-driven world challenges privacy, leading repeatedly to the normative question of what we should do about it. Fortunately, this question was asked in light of the knowledge resulting from the presented research projects. This prevented the discussion from remaining in a purely imaginative sphere of wishful thinking. As Laura Brandimarte pointed out with regard to her research perspective of behavioral economics: We need to understand how people actually make decisions – not just how they should decide. She and her colleagues conducted a number of experiments on the perception of privacy threats, e.g. by giving people varying options for controlling their privacy in a web survey. Finding that more options do not necessarily lead to more privacy-aware actions, the researchers conclude:
“The paradoxical policy implication is that Web 2.0 applications, by giving greater freedom and power to reveal and publish personal information, may lower the concerns that people have regarding control over access and usage of that information.” (Brandimarte et al. 2013, quoted from pre-print)
Anyway, most people don’t seem to be very concerned about their data. Sven Jöckel studied smartphone users’ heuristics for selecting apps. As he observed in his small case study, the majority of participants (62%) spent only around two seconds reading the app permissions. Branding effects or recommendations by friends seem to be bigger factors. This rationale became clear when Jöckel referred to a user who appeared very privacy-concerned at first, since he refused to install an app due to its extensive demand for permission rights. Yet when he selected another app, he paid no attention to the requested rights because he recognized the brand, which he obviously trusted.
Given such observations, it becomes clear that raising users’ awareness of the terms they agree to when signing up for a service is an important goal. Stefan Katzenbeisser also stressed missing awareness as one of the key obstacles for privacy protection tools in his talk. From his informatics point of view he strongly and convincingly criticized suggestions made by some policy-makers to deliberately weaken data protection to enable surveillance, citing PGP developer Phil Zimmermann’s (1999) famous sentence: “If privacy is outlawed, only outlaws will have privacy”.
So what can we do?
There are several initiatives in this direction (see for example this helpful blog post by Ann Wuyts). However, developing icons that are truly telling is rather challenging. As Fischer-Hübner and her colleagues pointed out in an ENISA report, many icons “(…) do not seem to be very intuitive and not easily and unmistakably recognizable by their symbolic depictions” (Tschofenig et al. 2013: 26). Just take a look at the icons suggested by the former vice president of the European Parliament, Alexander Alvaro, and the problem becomes evident. Thus, tools for enhancing “ex post privacy” (e.g. by giving insights into how our data is processed) are equally important, while in both cases user friendliness is crucial, as Fischer-Hübner argued.
Privacy veteran Roger Clarke also referred to usability as one of the reasons why the various privacy-enhancing technologies (PETs) he introduced in his talk haven’t been adopted more widely. Yet he sees such technical solutions as an appropriate answer to what he has coined “privacy-invasive technologies”. After many years of experience as a privacy activist, Clarke has apparently lost faith in institutional solutions in favor of individualistic approaches:
“Unfortunately, the winners are generally the powerful, and the powerful are almost never the public and almost always large corporations, large government agencies, and organised crime. In the information era, the maintenance of freedom and privacy is utterly dependent on individuals understanding, implementing and applying technology in order to protect free society against powerful institutions.” (Clarke 2015)
By the way, I recommend to check out Clarke´s incredibly comprehensive website which has charmingly withstood all web design trends since its establishment in 1994. You can also find his related paper and his slides (PDF) there.
In the last regular session the various challenges connected to privacy and freedom were addressed from the legal perspective. This was a much needed point of view, as legislators struggle to keep up with the rapidly growing privacy threats in the context of the Internet. At the same time, the state plays an ambivalent role, as both a protector and a violator of citizen rights. From Snowden we learned that many conspiracy theories on surveillance are indeed not conspiracy theories, as Philipp Richter reminded the audience. Our privacy and freedom are threatened through the digital, so the laws have to become digital themselves. “If code is law, law must be code”, Richter argued in reference to Lawrence Lessig’s famous quote. Right now, laws are usually not directed at specific technologies. On the one hand, this allows them to stay valid even for future technologies. On the other hand, this means decisions increasingly have to be made by the judiciary, leading to legal uncertainty and reduced governmental power, as Richter convincingly stated. Johannes Eichenhofer as well as Gerrit Hornung gave an impression of the various existing laws and institutions relevant for “e-privacy” – reaching from Germany’s constitutional court, the EU and international law to the service providers, who were portrayed as (potential) violators but also guards of e-privacy. Unlike the other sessions, this one was held in German. Given the rather special language of the German legal system, this is understandable. The organizers’ solution was to have translators deliver the talks and discussions simultaneously in English. I have the utmost respect for the people facing this incredibly difficult task. However, I doubt that it was a good solution to transfer this challenge from the speakers to the translators, who then had to deal with it in real time.
Altogether, the “Privacy and Freedom” conference gave an encompassing yet profound and thought-provoking overview of the complex and diverse issues around these terms. The interdisciplinary approach was not only necessary, it even worked in a productive way. For me personally, the conference served almost as an introduction to some of the important topics we will have to face in our project ABIDA – Assessing Big Data. As a policy-advising project, we will also be forced to tackle a question raised by Stefan Dreier at the closing session of the conference which sums up the almost dilemma-like situation policy-makers have to face: Where do we draw the thin line between autonomy-enabling, paternalistic and freedom-limiting governance?
Brandimarte, L., Acquisti A. and Loewenstein, G. (2013): Misplaced Confidences: Privacy and the Control Paradox, Social Psychological and Personality Science 4 (3), pp. 340-347.
Clarke, R. (2015): Freedom and Privacy: Positive and Negative Effects of Mobile and Internet Applications. Notes for the Interdisciplinary Conference on ‘Privacy and Freedom’, 4-5 May 2015 Bielefeld University.
Tschofenig, H., et al. (2013): On the security, privacy and usability of online seals. An overview, European Union Agency for Network and Information Security (ENISA). https://www.enisa.europa.eu/activities/identity-and-trust/library/deliverables/on-the-security-privacy-and-usability-of-online-seals/
Zimmermann, P. (1999): Why I Wrote PGP. https://www.philzimmermann.com/EN/essays/WhyIWrotePGP.html
I´m part of a small group of researchers who started a public debate series at ITAS in Karlsruhe called technik.kontrovers (“controversial technology”). Our idea is that Technology Assessment should also interact with the general public as our topics have significant societal implications and there is a lot to be learned from each other. After our successful start with an evening on robotics in December 2014, I was happy to contribute as a speaker together with my colleague Reinhard Heil.
Once more our institute’s foyer reached its limits when it was filled with a diverse and very engaged audience on March 18th. There were more than enough controversial topics to discuss under the umbrella of the evening’s title “Smart. Networked. Transparent? Life in the data cloud” (German: “Smart. Vernetzt. Gläsern? Leben in der Datenwolke”): data collection through smartphones, (ab)using web surfing habits for credit scoring, personalizing insurance by analyzing individual driving behavior or health information, to name just a few. Reinhard and I gave an introduction to the wide field of Big Data and the Internet of Things in the form of a dialogue with pre-defined roles: I was supposed to play an enthusiastic tech-optimist who can’t wait to try pretty much every app and gadget out there, while Reinhard acted as a slightly paranoid guy trying to keep his data profile as low as possible.
Surprisingly, I did not find it that hard to play my role as a tech-enthusiast. The overwhelming majority of the crowd had a negative outlook on the topics discussed. When the moderators asked them whether Big Data generally might improve their life, around 45 voted “no” whereas only 15 chose “yes”. Granted, our general perspective was rather critical and some of my “pro” arguments could easily be perceived negatively. For example, Minority Report´s vision of personalized advertising probably appears rather nightmarish to some and Larry Page´s claim that the analysis of health data could save 100,000 lives a year indeed could be called “ethical blackmailing” as Reinhard pointed out.
This critical bias was intended. I believe that the optimistic point of view on the developments connected to Big Data does not need much support at the moment. Silicon Valley and its popular products have more than enough power and influence, many politicians would love to gather ever more data to enable an encompassing surveillance regime, scientists love the new possibilities coming with new data treasures, and even the smallest local businesses tend to believe the promises of dramatically increased efficiency through automated analysis of production processes, etc.
However, when I was confronted with this strong skepticism towards Big Data, I felt pushed to defend the new opportunities connected to this technology – not because it was my pre-defined role but because I actually believe it is important to keep both sides in mind. No doubt, the risks connected to Big Data have to be taken seriously. However, the reactions towards privacy and security threats are too often within diametric extremes: helpless fatalism or paranoid alarmism. Instead, we need a well-informed and balanced debate, wise decision-making with careful and considerate regulation where necessary. I hope our new project Assessing Big Data (ABIDA) will help to build a foundation for this.
Nevertheless, I very much enjoyed our heated debates and I´m looking forward to the next public evening on a completely different topic: The future of eating. By the way, once more the evening was documented artistically with a visual recording by Jens Hahn which looks pretty cool, in my opinion:
I´m excited to be on board of a brand new project: Assessing Big Data (ABIDA). Funded by the German Federal Ministry of Education and Research (BMBF) with more than EUR 6 million, we are going to study the societal opportunities and risks of Big Data for a period of four years, together with our partners at the University of Münster and working groups from multiple German universities. Check out KIT´s press release for more information and stay tuned for our upcoming project website.
I just came back from a quite interesting evening in Berlin. Wikimedia Germany invited Saskia Sell and me to discuss Eli Pariser’s Filter Bubble as part of their series Digitale Kompetenzen (in German). Saskia started by reminding us that information filtering is nothing new, as we have always depended on gatekeepers and our own filter mechanisms and individual biases. We both agreed that Pariser’s book is highly techno-deterministic, and I also pointed out that first studies on personalization in search engine results do not support his fear of becoming trapped in personalized information bubbles. However, I do believe that a naive use of search engines might get users into bubbles of one-sided information. As an example, I pointed to the research I did with Erik Borra on how 9/11 has been represented on Google over time. We found that the query “9/11” led mostly to sites representing alternative (“conspiracy”) accounts of the attacks until Google rolled out its Panda update. This change to the algorithm apparently worked in favor of sites representing the “mainstream” version of the event.
I was very happy about the pretty engaged audience which helped to create lively discussions (partly also on Twitter under #digikompz). One of the key points was that the opaque algorithms which filter our information should become more transparent and that their users need a specific form of literacy to deal with them in a constructive way. Wikimedia Germany´s Digikompz-series is a good step in that direction as it exactly aims at educating the public about the pitfalls of digital communication. There is one more event coming up which I can only recommend. It will be streamed live and can also be watched afterwards. Also our evening on the Filter Bubble can be re-watched here:
9/11 online and the participatory dilemma
Tobias Audersch and I just published an article in the German Internet magazine Telepolis. It connects to my previous work on how 9/11 is represented on Google (I presented first results together with Erik Borra at the Society of the Query #2 conference) and how the case is negotiated on Wikipedia’s discussion pages (see my article in Information, Communication & Society). In both cases I was interested in the politics of exclusion, since the September 11 attacks are the subject of a heated debate with fundamentally diverging interpretations. This challenges any kind of gatekeeper who has to decide what information is relevant – whether human or algorithmic. We observed that in the light of such controversy, the prevailing mechanism of selecting information takes a rather conservative approach by favoring well-established sources. This can be interpreted as a participatory dilemma: The Internet potentially allows for participation beyond the established knowledge hierarchies, but precisely because it is used heavily in that regard, these hierarchies are reproduced even more harshly online.
How can we overcome the participatory dilemma?
These observations were criticized especially by the digitally very active so-called 9/11 Truth Movement who believes that the “official” account of the attacks is wrong. Instead, their alternative accounts usually suggest that certain details indicate a complicity of the US government. Naturally, supporters of this perspective are not content with Wikipedia´s and Google´s politics of exclusion. Some of them have also used my work as a proof for an unjust form of censorship within the Wikipedia community. This provoked me to think further and to take a personal stand.
Indeed, I'm not a supporter of the Truth Movement and its accounts. I used to moderate a forum with Tobias Audersch in which we discussed their myriad arguments in detail. None of them could convince me. However, this became an interesting case for my sociological perspective, as it touches many questions discussed in my discipline: How does knowledge get socially constructed? What is the role of experts in a democratic society? How do the new online channels shape society, and how does society shape them? One of the most crucial questions resulting from my research and experience with the Truth Movement is how we can overcome the participatory dilemma.
Anomaly hunting – on the methodological issues of the Truth Movement
For more conservative observers, the case is easy: they usually regard supporters of the 9/11 Truth Movement as conspiracy theorists who don't deserve to be taken seriously. But whether we like it or not, these perspectives have become a "mainstream political reality," as Time author Lev Grossman once put it and as opinion polls show. Therefore, radical exclusion might not be an adequate answer for democratic societies. More importantly, we may ask how we can structure online debates in a way that allows for diverging viewpoints. My worry is not the exclusion of absurd theories; my worry is the across-the-board elimination of anything that contradicts conservative mainstream views.
While it is easy to criticize Wikipedia and Google for their gatekeeping policies, we also need to understand their difficult position: they have to provide quick and easily accessible answers to complex and controversial questions. The 9/11 Truth Movement has an easier job, as its main motto is "ask questions, demand answers." It has done so by accumulating countless apparent inconsistencies in the "official" version of the event. As Tobias and I argue in the Telepolis article, this is neither sufficient nor convincing, because one can easily turn it around: in the rare cases in which alternative accounts have actually been spelled out, one can find just as many inconsistencies (if not more) as in what they call the "official story" of 9/11. Usually, however, the movement does not even develop any kind of narrative. Instead, its members merely collect lists of apparent inconsistencies, operating as anomaly hunters, as we argue. Yet whether you are a journalist, historian, Wikipedian, or search engine provider, listing doubts and open questions is not enough; your task is to provide answers. If the 9/11 Truth Movement wants to be taken seriously, it has to do exactly that. Otherwise, it should not be surprised if it keeps getting excluded.
My first mini shitstorm and the crisis of the comment section
Publishing with Telepolis was an interesting experience for me. Its audience is not only far bigger than that of an average academic article; it is also very different. Numerous articles by some of the German protagonists of the Truth Movement have appeared there, and the forum is full of their supporters. Thus, it wasn't surprising that our article received over 470 comments within a week, and the feedback was overwhelmingly negative. However, the majority of the comments didn't even address our main arguments. Instead, the article apparently served for many merely as a trigger to continue old debates. This is disappointing, as we had hoped to move beyond that point by tackling the roots of the conflict instead of its symptoms. What I observed instead were extremely unfruitful discussions that led nowhere. Of course, this is nothing new, but rather just another example of the crisis of the comment section. The persistently unproductive discourse in this format has already motivated major news sites like the German Süddeutsche Zeitung to close down their comment sections. This is also further evidence of the participatory dilemma: participation is limited by participation. Needless to say, this is a very unsatisfying development, and we need to find new ways to enable constructive online discourse, as Sascha Dickel has also argued.