The world is rapidly digitalizing, and the use of data offers many opportunities for economic development, sustainability and a better quality of life. There are, however, considerable concerns about the misuse of (personal) data and the undesirable outcomes of its unbridled use. These concerns are legitimate, but we also run the risk of becoming too defensive when it comes to data: we miss out on big opportunities and, more importantly, our selective opposition to data sharing may itself have undesirable effects.
Like the great technologies of our past, digital technology enables us to increase our wealth and, more importantly, to genuinely improve our well-being. On the one hand, technology can have direct financial benefits, such as cheaper services or more efficient use of energy and resources. On the other hand, and perhaps more crucially, technology enables us to improve our quality of life by facilitating things like better healthcare or a cleaner living environment. Opportunities are arising in our own daily lives as citizens and consumers, as well as in the public space, where we can organize matters more intelligently, more fairly and more cleanly. Data is the most vital resource in all of this, as data and the knowledge and insights it yields can help us make existing processes more efficient or otherwise smarter and better.

Along with all the promising prospects the datafied society offers, the other side of the coin is that there are great concerns over the use of (personal) data and the possible violation of our right to privacy and, worse, our civil rights. The societal and political knee-jerk reaction is to limit data sharing as much as possible, in the hope of eliminating as many risks as possible. It’s questionable, however, whether this is the right and most productive approach.

First, this defensive stance is causing us to miss out on great opportunities, for individuals and for society as a whole. Of course, missed opportunities can never be a valid argument for releasing all possible data to solve any problem that needs fixing. We have to be more discerning about this issue and ask ourselves for what purposes we’re willing to allow the use of our data. At the moment, there seems to be an imbalance: we are willing to hand over our data to various (relatively anonymous) tech companies without asking any questions or setting conditions. Though this yields clear “rewards”, these rewards are often unrelated to the data we release or generate.
In fact, we often don’t even know what they (can) do with our data, beyond personalizing the ads we see. We’re much more cautious with parties closer to us (such as the government or health insurers) and with applications in which the purpose of using our data is clear, visible and concrete (such as the coronavirus app). In other words, the clearer and more concrete the value of our data is, the more reluctant we are to release it. That may make sense, because it’s easier for us to imagine our data being misused (e.g. resulting in higher health insurance premiums), but it should also be made clear how this most valuable data could work to our individual or collective advantage.

Second, we’re running the risk that, in the absence of reliable and/or individual data, inaccurate, incomplete or merely contextual data will be used, potentially resulting in disadvantageous decisions. The role of data will certainly expand, both because of the promise it holds and because of the ubiquitous tendency to ascribe importance to anything that’s measurable. Conversely, we also have the tendency to reduce “problems” to what is easily scaled and solved by means of (digital) technology (what Evgeny Morozov calls solutionism). This implies that it’s clearly in our best interest to make sure that data about ourselves is in fact complete and accurate. If it’s not, we will be judged and treated on the basis of non-specific data that happens to be publicly accessible (e.g. features of the neighborhood we live in).

As mentioned, the promise of the datafied society is now at odds with concerns over the use of personal data. The only way to reconcile the two is to develop systems that enable citizens to explicitly release data to parties that will use it for something of value, without relinquishing all control over their data. It’s also imperative that it becomes much clearer what exactly these parties use the data for and how this benefits the citizen or society as a whole.
Many initiatives have already attempted to develop this kind of system and “fix the internet”, but there hasn’t been a real breakthrough yet. Hopefully, our (selectively) defensive attitude towards data sharing will eventually make way for a more wholehearted embrace of systems that enable us to get the best out of our data.