Special issue on online tracking at New Media & Society

I’m very happy that our special issue “The Tracked Society: Interdisciplinary Approaches on Online Tracking” is out at New Media & Society. I edited it together with colleagues from the ABIDA project: Steffen Uphues, Verena Vogt and Barbara Kolany-Raiser. Find out more:

Table of contents
Related Twitter thread

Abstract from our introduction:

Online tracking in its various forms is a backbone of digitalization that has sparked hopes and fears alike: It opens up new opportunities for users and businesses as it enables individually targeted content. At the same time, the encompassing tracking of often unaware and ill-informed users and the opaque practices of data processing have alarmed critics from multiple sides. How can we better understand but also proactively and constructively shape the emerging Tracked Society? Our special issue seeks to shed light on these questions from various perspectives and disciplines. In this introduction, we give a brief overview of the topic in general and our special issue in particular.

Full text here

Digital pessimism: Can we break out of the negativity loop?

Watching the Netflix documentary “The Social Dilemma” was my latest encounter with a long list of dystopian takes on the internet and society. Just a glance at the recent books below gives an impression of how pessimistic the outlook on this once-welcomed technology has become. This motivated me to think more about it in a new post for the FemLab.Co blog titled:

Digital pessimism: Can we break out of the negativity loop?

I’ll build on these thoughts in a presentation for a webinar at Pondicherry University’s Department of Electronic Media and Mass Communication on September 23rd.


Digital exclusion: It’s not a bug, it’s a feature

On September 7, 2020, I’m giving a talk at the Data Wise program at the University of Groningen. It’s based on my PhD thesis and related work. Check out the abstract below.


When the internet grew into a mainstream technology, it came with the widespread expectation that it would have inclusive and democratizing effects on society. Never before did so many people have access to so much information. Maybe even more significant were the new possibilities to produce and disseminate it. Suddenly, there was an unprecedented independence from the gatekeepers who used to control information flows. In 2020, there is not much left of this initial euphoria. Stories of fake news, hate speech, privacy violations and algorithmic discrimination frame our perception of the internet. Many blame digital platforms. They dominate the internet and have emerged as its gatekeepers. But how do they actually perform this task? Would we be better off without them, or can their decision-making at least be improved? The presentation gives an introduction to platforms’ gatekeeping processes and discusses a value that has regained ground as a way to make them more inclusive: diversity.


Position as stakeholder analyst and social media coordinator for FemLab.co project

I am happy to have joined the project Feminist Approaches to Labour Collectives (FemLab.co), a seed-funded initiative of the International Development Research Centre (IDRC), Canada. As part of Erasmus University Rotterdam’s team, I conduct a stakeholder analysis and coordinate the project’s website and social media activities. Learn more about the project here:


Article on trust and platforms in German sociology journal Soziale Welt

“The user is always right” has been a dominant Silicon Valley mantra for over a decade. It is not just a question of superficialities such as interface design but deeply shapes the internet and the policies around it. The focus on users has been interpreted as democratizing, but at the latest with scandals like the Snowden revelations or Cambridge Analytica’s attempts to manipulate US elections through Facebook, this shiny image has faded. Still, the internet economy largely rests on user trust, and the burden of assessing the risks of platform usage falls mostly on users themselves.

My colleague Patrick Sumpf and I analyze this problem with the help of systems theory in a recent German-language article for the renowned sociology journal Soziale Welt. The text appeared in a special edition on digital sociology edited by Sabine Maasen and Jan-Hendrik Passoth:

König, R. and Sumpf, P. (2020): Hat der Nutzer immer Recht? Zum inflationären Rückgriff auf Vertrauen im Kontext von Online-Plattformen. Soziale Welt, Sonderband 23, Soziologie des Digitalen – Digitale Soziologie?

Abstract (translated from German):
Online platforms form the infrastructure for fast and simple exchange between various interaction partners (e.g. users, developers, advertisers). On the one hand, the platform-based Web 2.0, with its easy-to-use interfaces, opened the web up to a broad and less technically inclined population. On the other hand, this led to an increased black-boxing of the web’s underlying socio-technical complexity. At the same time, risks and uncertainties are largely transferred to users, who are expected to make informed decisions when agreeing to terms of service (and further sets of rules). The system emerging here thus rests fundamentally on trust in and by users. This reliance on trust is further reinforced in the age of Big Data and the Internet of Things, as the digital and physical worlds increasingly merge and data flows become even less transparent. We analyze this development from the perspective of trust research and conclude that user trust has taken on inflationary proportions – with far-reaching implications for the governance of platforms and for digital sociology.

CfP: The Tracked Society. Interdisciplinary Approaches on Online Tracking

Together with colleagues from the University of Münster, I’m editing a special issue on “The Tracked Society. Interdisciplinary Approaches on Online Tracking”, which includes a workshop on June 21st/22nd 2018 at the University of Amsterdam with a keynote by Anne Helmond, Fernando van der Vlist and Esther Weltevrede (Digital Methods Initiative, University of Amsterdam). Funding for travel costs is available. See the full call here (PDF).

Important dates


05.03.2018 Deadline for abstracts
26.03.2018 Selection of accepted abstracts
04.06.2018 Deadline for drafts
21.-22.06.2018 Workshop
03.09.2018 Submission of full papers to editors
01.10.2018 Feedback by editors
02.11.2018 Submission of revised full papers to journal


Nope, Facebook didn’t buy the Amsterdam Privacy Conference

The recent Amsterdam Privacy Conference was accompanied by a delicate debate: How can a conference on privacy be taken seriously if it is sponsored by some of the most prominent violators of privacy? Facebook stands out here as a “Diamond sponsor”, but more troublesome companies were involved, including Google, Microsoft and Palantir (check the conference website for the full list). This resulted in a small shitstorm over the last few days. Indeed, such practices should be critically discussed. Unfortunately, most of the online grumbling I have read so far does not contribute much in this regard. Here’s why.

An easy target

The problematic nature of the liaison seems obvious: How can a conference on privacy be neutral if it is financed by the very actors who threaten privacy? To some, the problem was so clear it didn’t even need to be pointed out:

tweeted Aral Balkan, an activist and designer, in response to a picture of the sponsors’ logos. The tweet received quite a bit of attention, including a citation in a Motherboard article. Many applauded him, e.g. Sidney Vollmer, who commented on the list of sponsors: “What the fuck, indeed”. The question was raised: “Is #PrivacyWeek [the conference’s official hashtag] legit? Or white washing by co’s that really just want to get rid of this privacy thing?” (@meneerharmsen). Yes, the alliance is an easy target and I understand the suspicion. I myself was, well, let’s say surprised when I saw the list of sponsors for the first time. By the way, that was months ago and required neither investigative skills nor insider knowledge. The list screamed at you as soon as you visited the APC2015 website.

Or not?

However, I also noticed the numerous distinguished speakers who did not fit the whitewashing hypothesis at all. Along with prominent academics who can hardly be accused of being lax on privacy issues (e.g. Helen Nissenbaum, Viktor Mayer-Schönberger, Julie E. Cohen), even Max Schrems gave a keynote at the conference. In case that doesn’t ring a bell: Schrems is probably Facebook’s current enemy number one, as he has successfully filed a number of far-reaching court cases against the internet giant with his initiative Europe-vs-Facebook. This was reason enough for me to take the conference seriously and to draw my own conclusions by attending it. Unfortunately, many of the critics didn’t seem to find it worthwhile or necessary to even listen to the speakers. To them, the sponsoring itself was reason enough to dismiss the entire conference, including its organizers, speakers and content. The conflict was portrayed as a fundamental contradiction that cannot possibly be bridged:

At the Amsterdam Privacy Conference 2015: Guilt by association?

Guilt by association – a cheap tactic

This simple “guilt by association” tactic, which seems intuitively right to many people in the debate, comes with a significant problem: It dismisses everybody and everything under the umbrella of the event. That’s not only a prejudiced move, it’s also unfair, as it illegitimately accuses even the most critical voices of whitewashing. Moreover, it’s a cheap tactic that can be used in many contexts. To illustrate that, let’s take the critic Aral Balkan as an example. Should we scream “WTF?!” because Aral acts as an outspoken Facebook opponent while he himself busily feeds the object of his criticism with a constant flow of data (see his FB profile here)? Oh, and he was also not bothered by the fact that Google, Microsoft and IBM partnered with re:publica, an influential annual conference in Berlin where he gave a talk. I could probably find more examples of this kind, but let’s not go down that dirty road any further. Obviously, I want to discredit neither Aral nor re:publica here (please forgive me for using you in this example). My point is that the tactic of simple and quick accusations doesn’t get us very far. It targets the wrong people and indiscriminately dismisses even the most valuable contributions.

Instead, I believe we need a more honest debate about the threats connected to Internet players. One that doesn’t confuse intuitive bashing of an easy target with critical thinking. One that asks the right questions and doesn’t fall for the most obvious (but not necessarily most accurate and helpful) answers.

In defense of reasoning

So what are the right questions and answers? I wouldn’t be so arrogant as to claim I have them. But here are some ideas on how to get to them:

    1. Let’s try to base our judgements on observations, not prejudices.

I obviously haven’t attended all sessions of the APC2015, but the ones I sat through surely can’t be accused of Facebook whitewashing. This includes a session organized by Facebook itself, in which I learned the expression “data rape” for the omnipresent service-for-profile business model. Sure, the sponsoring might still have questionable implications. But then let’s please talk about them instead of engaging in unfounded speculation and random accusations. Sidney Vollmer assumes that the sponsoring had no influence on the content, just to add: “But we don’t know that for sure, and trust is the thing at play in these matters.” Well no, not if you actually had the chance to come and witness the event for yourself. Sidney had this chance but decided not to come, as he “couldn’t shake this queasy feeling, after seeing the sponsors.” In other words, he preferred to rely on his prejudices instead of forming his own picture of the event. Too bad; I’m sure his piece on the issue would have been even more worth reading.

    2. Let’s be truly skeptical.

The actual problems are more complicated than the cheap “Facebook is buying scholars” rhetoric suggests. For example, why do academic conferences and institutions need to rely on such sponsors in the first place? What does it mean when the public sphere moves more and more into a commercial space? (A question that was actually debated during the conference, by the way.) If the sponsoring had a bad effect, what exactly is problematic about it? What could actually be observed in this regard?

    3. Let’s have an honest and open debate!

Yes, the big players have a fundamental impact on our lives, and therefore we need to observe and also regulate them carefully. But I don’t believe in the simple and one-sided answers that many critics offer. As a matter of fact, many of them don’t seem to take their own arguments too seriously, otherwise they wouldn’t be heavy users of the services they bash day and night. Let’s be honest with ourselves. We’re in a dilemma. We love and fear the new technologies. “Going dark”, i.e. total non-usage, is not an option for most of us. Thus, we need a way to deal with them constructively. I don’t find it wrong to include the companies in the conversation along the way. If they finance the debate without forcing an agenda on us, even better.

Let’s move on

Nope, Facebook didn’t just “buy” the Amsterdam Privacy Conference, at least not in the sense that many implicitly or explicitly suggest. The APC2015 was a successful conference with critical and enlightening contributions and debates. This is exactly what we need, not thoughtless shitstorms that don’t even help the critics. By drawing on cheap tactics like guilt by association, they weaken the case they might actually have. There are surely good arguments against conference sponsoring, but mere association is not one of them. To be fair, some of the points worth discussing have been brought forward already. Let’s stick to them and clear the fog that has been created by hasty accusations.

More importantly, let’s talk about the actual content of APC2015! This is probably where I agree with Sidney Vollmer the most: No matter whether there was a direct impact on the conference’s content, the sponsoring surely left its mark on its credibility, which did not help the cause of privacy. Now we maul each other over the (il)legitimacy of conference sponsoring instead of talking about the actual topic of the conference.

Maybe we should just treat the whole thing with a little more coolness and move on. I loved Zizi Papacharissi’s mocking comment on Facebook’s rather awkward giveaway:

Facebook, you need a bit more than chocolate and a conference to convince us. Sponsoring critics, let’s get back to the actual issues that we urgently have to discuss. And yes, please come to APC2016, no matter who the sponsor will be.

Disclaimer: I’m not affiliated with the organizers or Facebook, nor did I receive any payments from them. However, APC2015 provided a pretty cool collection of goodies, including the now-famous Facebook chocolate, but I haven’t opened it (yet). I’m not writing this to please or insult anybody. I simply hope for a more constructive and truly critical debate.

APC2015 goodies: How sweet is Facebook’s chocolate?

Conference report: “Privacy and Freedom” (Bielefeld, May 4-5, 2015)

A few weeks ago, I attended the Privacy and Freedom conference at the Center for Interdisciplinary Research (ZiF) at Bielefeld University, which was very informative and insightful. It was organized by the project Transformation of Privacy (funded by the Volkswagen Foundation). Like our new ABIDA project, it combines very different disciplinary perspectives, including law, political science, communication science and informatics, to tackle the challenges resulting from ICT – here with a particular focus on privacy and freedom.

The ambivalence of privacy

The two-day conference brought together scholars from diverse backgrounds, giving a broad yet detailed and differentiated look into the relation between privacy and freedom, today and in the past. Although not always easy to follow, the accounts from the political theory angle (delivered by Dorota Mokrosinska, Andrew Roberts, Sandra Seubert, keynote speaker Annabelle Lever and, not to forget, the critical audience of around 50 people) contributed greatly to a deeper understanding of the many factors at play when we think about these terms. Privacy is relative, historically but also socially. Towards the end of the conference, Rüdiger Grimm referred to a powerful example to illustrate this: Claude Monet’s painting “Le Déjeuner”.

Claude Monet’s painting “Le Déjeuner”: Too private for its time?

Today, it is difficult to imagine that this scene could be controversial in any regard. But the jury of the Salon in Paris rejected it because such “intimate” moments were regarded as too private for a public audience. This seems hard to believe in an age in which people share even the smallest moments of their lives with a (potential) mass audience via social media.

Privacy is also relative in a social sense, differing from context to context, as Dorota Mokrosinska pointed out: We have no problem revealing our naked bodies to our doctors but wouldn’t show them our bank accounts. At the same time, bankers may know all the details of our financial lives, while most of us would be rather reluctant to strip naked in front of them. This is exactly the challenge we are facing in the context of Big Data, as the new methods and technologies collect, combine and correlate ever more types of data that used to be either nonexistent or not connected.

Privacy and power

However, it is not enough to conceptualize privacy as individual secrets that ought to be protected from others. While in some contexts privacy may be an enabler of freedom by providing personal autonomy, it can be a tool for repression in others. For example, women have been systematically kept away from public life by tying them to the privacy of their households. Therefore, we can never talk about privacy without talking about equality, as Annabelle Lever pointed out in her passionate keynote. Privacy comes with costs and benefits, but these are very unequally distributed, Lever explained. Quite clearly, this means we need to consider a classic sociological topic if we want to understand privacy and its implications: power structures. Sandra Seubert also referred to this in her talk: Drawing on Adorno and Horkheimer’s reflections on the culture industry, she described how users of popular web platforms contribute to stabilizing power structures. Since these platforms have penetrated more or less all areas of life, resistance is almost impossible and the individual is practically forced to co-produce the power of external forces. Nevertheless, I agree with a comment from the audience reminding us that power structures in the online context do not just reproduce “old” power structures; they also give new power to the individual (e.g. when individuals threaten others’ privacy by publishing confidential material about them).

Empirical perspectives on privacy

A strength of the conference was that the audience wasn’t left alone with these important but partly highly abstract theoretical reflections on privacy. Sessions on the communication and information science perspectives (and beyond) helped to contextualize these thoughts with empirical, experimental and technical insights. The various examples showcased how our data-driven world challenges privacy, leading repeatedly to the normative question of what we should do about it. Fortunately, this question was asked in light of the knowledge resulting from the presented research projects, which kept the discussion from remaining in a purely imaginative sphere of wishful thinking. As Laura Brandimarte pointed out with regard to her research perspective of behavioral economics: We need to understand how people actually make decisions – not just how they should decide. She and her colleagues conducted a number of experiments on the perception of privacy threats, e.g. by giving people varying options for controlling their privacy in a web survey. Finding that more options do not necessarily lead to more privacy-aware actions, the researchers conclude:

“The paradoxical policy implication is that Web 2.0 applications, by giving greater freedom and power to reveal and publish personal information, may lower the concerns that people have regarding control over access and usage of that information.” (Brandimarte et al. 2013, quoted from pre-print)

Anyway, most people don’t seem to be very concerned about their data. Sven Jöckel studied smartphone users’ heuristics for selecting apps. As he observed in his small case study, the majority of the participants (62%) spent only around 2 seconds reading the app permissions. Bigger factors seem to be branding effects or recommendations by friends. This rationale became clear when Jöckel referred to a user who appeared very privacy-concerned at first, since he refused to install an app due to its extensive demands for permissions. Yet when he selected another app, he paid no attention to the requested permissions because he recognized the brand, which he obviously trusted.

Given such observations, it becomes clear that raising users’ awareness of the terms they agree to when signing up for a service is an important goal. Stefan Katzenbeisser also stressed a lack of awareness as one of the key obstacles for privacy protection tools in his talk. From his informatics point of view, he strongly and convincingly criticized suggestions made by some policy-makers to deliberately weaken data protection to enable surveillance, citing PGP developer Phil Zimmermann’s (1999) famous sentence: “If privacy is outlawed, only outlaws will have privacy”.

So what can we do?

So what can we do to enhance users’ awareness of privacy threats? I always thought that making terms of use more accessible, for example by providing easily understandable icons, could be a good first step to enhance what Simone Fischer-Hübner called “ex ante privacy” in her presentation.


Privacy icons suggested by the former vice president of the European Parliament Alexander Alvaro

There are several initiatives in this direction (see, for example, this helpful blog post by Ann Wuyts). However, developing icons that are truly telling is rather challenging. As Fischer-Hübner and her colleagues pointed out in an ENISA report, many icons “(…) do not seem to be very intuitive and not easily and unmistakably recognizable by their symbolic depictions” (Tschofenig et al. 2013: 26). Just take a look at the icons suggested by the former vice president of the European Parliament, Alexander Alvaro, above, and the problem becomes evident. Thus, tools for enhancing “ex post privacy” (e.g. by giving insights into how our data is processed) are equally important, while in both cases user-friendliness is crucial, as Fischer-Hübner argued.

Privacy veteran Roger Clarke also referred to usability as one of the reasons why the various privacy-enhancing technologies (PETs) he introduced in his talk haven’t been adopted more widely. Yet he sees such technical solutions as an appropriate answer to what he has coined “privacy-invasive technologies”. After many years of experience as a privacy activist, Clarke has apparently lost faith in institutional solutions in favor of individualistic approaches:

“Unfortunately, the winners are generally the powerful, and the powerful are almost never the public and almost always large corporations, large government agencies, and organised crime. In the information era, the maintenance of freedom and privacy is utterly dependent on individuals understanding, implementing and applying technology in order to protect free society against powerful institutions.” (Clarke 2015)

By the way, I recommend checking out Clarke’s incredibly comprehensive website, which has charmingly withstood all web design trends since its establishment in 1994. You can also find his related paper and his slides (PDF) there.

Legal challenges

In the last regular session, the various challenges connected to privacy and freedom were addressed from the legal perspective. This was a much-needed point of view, as legislators struggle to keep up with the rapidly growing privacy threats in the context of the internet. At the same time, the state plays an ambivalent role as both protector and violator of citizen rights. From Snowden we learned that many conspiracy theories on surveillance are indeed not conspiracy theories, as Philipp Richter reminded the audience. Our privacy and freedom are threatened through the digital, so the laws have to become digital themselves. “If code is law, law must be code”, Richter argued in reference to Lawrence Lessig’s famous quote. Right now, laws are usually not directed at specific technologies. On the one hand, this allows them to stay valid even for future technologies. On the other hand, it means more and more decisions have to be made by the judiciary, leading to legal uncertainty and reduced governmental power, as Richter convincingly stated. Johannes Eichenhofer as well as Gerrit Hornung gave an impression of the various existing laws and institutions relevant for “e-privacy” – reaching from Germany’s constitutional court, the EU and international law to the service providers, who were portrayed as (potential) violators but also guards of e-privacy. Unlike the other sessions, this one was held in German. Given the rather special language of the German legal system, this is understandable. The organizers’ solution was translators who delivered the talks and discussions simultaneously in English. I have the utmost respect for the people facing this incredibly difficult task. However, I doubt it was a good solution to transfer this challenge from the speakers to the translators, who then had to deal with it in real time.

Altogether, the “Privacy and Freedom” conference gave an encompassing yet profound and thought-provoking overview of the complex and diverse issues around these terms. The interdisciplinary approach was not only necessary, it even worked in a productive way. For me personally, the conference served almost as an introduction to some of the important topics we will have to face in our project ABIDA – Assessing Big Data. As a policy-advising project, we will also be forced to tackle a question raised by Stefan Dreier at the closing session of the conference, which sums up the almost dilemma-like situation policy-makers face: Where do we draw the thin line between autonomy-enabling, paternalistic and freedom-limiting governance?

References

Brandimarte, L., Acquisti A. and Loewenstein, G. (2013): Misplaced Confidences: Privacy and the Control Paradox, Social Psychological and Personality Science 4 (3), pp. 340-347.

Clarke, R. (2015): Freedom and Privacy: Positive and Negative Effects of Mobile and Internet Applications. Notes for the Interdisciplinary Conference on ‘Privacy and Freedom’, 4-5 May 2015, Bielefeld University. http://www.rogerclarke.com/DV/Biel15.html

Tschofenig, H., et al. (2013): On the security, privacy and usability of online seals. An overview, European Union Agency for Network and Information Security (ENISA). https://www.enisa.europa.eu/activities/identity-and-trust/library/deliverables/on-the-security-privacy-and-usability-of-online-seals/

Zimmermann, P. (1999): Why I Wrote PGP. https://www.philzimmermann.com/EN/essays/WhyIWrotePGP.html

Smart. Networked. Transparent? A public evening on life in the data cloud

I’m part of a small group of researchers who started a public debate series at ITAS in Karlsruhe called technik.kontrovers (“controversial technology”). Our idea is that Technology Assessment should also interact with the general public, as our topics have significant societal implications and there is a lot to be learned from each other. After our successful start with an evening on robotics in December 2014, I was happy to contribute as a speaker together with my colleague Reinhard Heil.


Full house at ITAS’ public evening “Smart. Networked. Transparent? Life in the data cloud”. Photo by Jonas Moosmüller.

Once more, our institute’s foyer reached its limits when it was filled with a diverse and very engaged audience on March 18th. There were more than enough controversial topics to discuss under the umbrella of the evening’s title “Smart. Networked. Transparent? Life in the data cloud” (German: “Smart. Vernetzt. Gläsern? Leben in der Datenwolke”): data collection through smartphones, (ab)using web surfing habits for credit scoring, personalizing insurance by analyzing individual driving behavior or health information, to name just a few. Reinhard and I gave an introduction to the wide field of Big Data and the Internet of Things in the form of a dialogue with pre-defined roles: I was supposed to play an enthusiastic tech-optimist who can’t wait to try pretty much every app and gadget out there, while Reinhard acted as a slightly paranoid guy trying to keep his data profile as low as possible.

Surprisingly, I did not find it that hard to play my role as a tech-enthusiast. The overwhelming majority of the crowd had a negative outlook on the topics discussed. When the moderators asked whether Big Data might generally improve their lives, around 45 voted “no” whereas only 15 chose “yes”. Granted, our general perspective was rather critical, and some of my “pro” arguments could easily be perceived negatively. For example, Minority Report’s vision of personalized advertising probably appears rather nightmarish to some, and Larry Page’s claim that the analysis of health data could save 100,000 lives a year could indeed be called “ethical blackmailing”, as Reinhard pointed out.

This critical bias was intended. I believe that the optimistic point of view on the developments connected to Big Data does not need much support at the moment. Silicon Valley and its popular products have more than enough power and influence, many politicians would love to gather ever more data to enable an encompassing surveillance regime, scientists love the new possibilities coming with new data treasures, and even the smallest local businesses tend to believe the promises of dramatically increased efficiency through automated analysis of production processes, etc.

However, when I was confronted with this strong skepticism towards Big Data, I felt pushed to defend the new opportunities connected to this technology – not because it was my pre-defined role but because I actually believe it is important to keep both sides in mind. No doubt, the risks connected to Big Data have to be taken seriously. However, the reactions to privacy and security threats too often swing between two diametric extremes: helpless fatalism and paranoid alarmism. Instead, we need a well-informed and balanced debate, and wise decision-making with careful and considerate regulation where necessary. I hope our new project Assessing Big Data (ABIDA) will help to build a foundation for this.

Nevertheless, I very much enjoyed our heated debates, and I’m looking forward to the next public evening on a completely different topic: the future of eating. By the way, once more the evening was documented artistically with a visual recording by Jens Hahn, which looks pretty cool in my opinion:


Visual recording artist Jens Hahn in action. Photo by Jonas Moosmüller.