Zuckerberg’s Testimony and the Fallacy of Consent

Facebook’s Mark Zuckerberg completed his second day of congressional testimony today, answering a range of questions about the company’s privacy and security practices. Zuckerberg generally came across as smart, well intentioned, and well rehearsed.

One line of inquiry jumped out to me.

On Tuesday, Senator John Kennedy pressed Zuckerberg on the ability to delete data, on increased control over how data is used, and on transparency into who might have access to the data. In each case, Zuckerberg confirmed that the controls Senator Kennedy was requesting were already available. In many cases, they have been available for years.

Put into privacy terms, notice, choice, and transparency had already been baked into the system. In addition, while Facebook has experienced hacks, Zuckerberg was unaware of any such incidents resulting in data theft.

On paper, Facebook seemed to have developed, over time, product features that reflected all the essential pillars of any privacy framework.

And yet … here we are.

The crisis in confidence for Facebook at the moment is absolutely, at least in part, a privacy problem. Facebook ingests a massive amount of data from a wide range of sources. The company allows advertisers to leverage this data in controversial ways, and it has allowed free and relatively unmonitored access to random developers all around the world. This data has been used to manipulate people and aggravate existing social tensions, and it has been pulled into all manner of experiments by scrupulous and unscrupulous folks with developer accounts.

Unfortunately, the usual response to these concerns, the next escalation in our legacy privacy toolset, is often ‘consent.’

If notice, choice, and transparency aren’t cutting it, the consumer should have to provide their prior consent.

In fact, Senators Richard Blumenthal (D-CT) and Ed Markey (D-MA) debuted a new bill (literally, ‘The CONSENT Act’) on Tuesday. Europe is trending heavily in this direction with the ePrivacy Regulation and GDPR.

Consent is actually a distraction.

If consent is the primary obstacle between companies and concerning uses of data, it will be used against consumers in insidious ways.

Privacy policies are long, buried, and completely opaque to the common consumer. As a vehicle for conveying useful information to consumers, privacy policies (and terms of service) are almost completely worthless. This came up repeatedly in the questions from Senators on Tuesday.

Privacy policies have become convenient punching bags in public forums. But privacy policies are, in effect, contracts. And like all contracts, they are written by lawyers. And their intended audience is … lawyers.

Just as in a professional relationship, contracts are not substitutes for effective communication, but you still want to have a contract to fall back on.

So consent, using an interface that comes out and meets the user where they are, written in plain spoken language, should bridge the divide, right?

This is absurd.

When consumers come to a website or download an app, they are looking for news, or whatever the value proposition was that drew them to the service in the first place. They do not arrive at a service wanting to dive into the complicated value chains that underlie the service, or with enough context to make granular decisions about which data will be collected or how it will be used.

If we impose this discussion on the user when they arrive, we’ll have to use terms so general that they can hardly serve to properly inform.

Worse still, we are capturing the user’s attention in the consent dialogue. They will make every effort to escape our captivity, ducking around, clicking on whatever is expedient.

The net benefit of these forced consent discussions, from the standpoint of producing an informed user population, won’t take us anywhere close to a majority of the population. Our informed-user conversion funnel will not move the needle. Consumers will dodge the dialogues. Consumers who are caught in them will click through without reading. Only a fraction of the consumers who read them will fully understand the context and implications of the decision. And even among those who do understand, many will be frozen, unsure of the consequences of their decision.

Are we ready for a world where privacy plugins are trained to identify and suppress the annoying pop-up consent dialogues appearing across the internet?

At the highest level, the harm that we are trying to prevent is the abusive and irresponsible use of data. In this context, consent, especially for baseline practices that should be common across the service, is hardly helpful. All this does is give a company cover, having captured consent from a distracted and harried consumer, to do whatever it wants.

Architects are not allowed to design unsafe buildings, provided that they come with lobby sign-in books collecting consent for the unsafe conditions.

Predatory lending practices are restricted. Those restrictions are not moot if the consumer signs a form consenting to be victimized.

A call for rational defaults and accountability

If we want to protect against cavalier data use, we should be focusing on rational defaults and accountability.

Rational defaults:

Consumers do not expect their data to be used against them for embarrassment or to render harmful judgments out of context. They do not expect the custodian of their data to make that data freely available at massive and unregulated scale. Default settings should be designed to be consistent with these expectations. If I am comfortable with taking on more risk than the rational defaults provide, ask me for my consent then, in context.

Accountability:

Services are increasingly platforms. Platforms are complicated systems in which the primary actors on the underlying data tend not to be the platform owner but thousands of clients, partners, and developers. These platforms connect with other platforms, creating wholly new applications of the underlying data and exploding the number of entities that can experiment with the data. We have reasonable systems of accountability for companies that operate directly on their own data, but platforms, so far, have frustrated accountability.

Accountability is not binary. In most cases, no platform is singularly responsible for what happens. And no platform will be 100% secure or fully anticipate all of the potential impacts. But diffuse and complicated responsibility cannot be allowed to produce a vacuum of accountability. If I come to your service, and as a result, bad things happen … you have to be accountable. And you have to make every effort to make sure that it doesn’t happen again. The platform enabled the connections. If those connections were not trustworthy, the platform needs to re-evaluate its relationships.

Consent still has a role to play in a properly functioning privacy regime, as do the usual notice, choice, and transparency tools. No program would be complete without using these tools in their appropriate contexts. But these are just tools. Our emphasis needs to shift to rational defaults and accountability.