I had the great privilege of attending the BruCon conference in Ghent on 26 September 2013. It's an annual security conference attended by professionals, students and academics in the field. I had chosen to speak about the first eight articles of the electronic identification and trust services regulation. They cover electronic identification, and I am particularly critical of some of the Commission's policy choices in the file. As it turns out, there is also much that can be said about the articles regulating trust services (or "certification authorities", as they are more commonly known in other circles). In addition, there were some questions raised by the audience that I will try to cover below. If I forget a particular question, it's not out of malice. The address is as I wrote it, not necessarily as I spoke it.
Address:
I'm very happy to be here and to be able to speak to you all today. I have opted to talk about the electronic identification regulation, which was proposed by the European Commission in July last year[1]. I've been working on this in the parliament for the past year, and in many ways it illustrates many of the big questions facing society about the internet, identity, surveillance, privacy, security and how these things relate to individuals and their societies.
So, first of all, what does the regulation aim to do? It aims to give people in different member states a way of accessing eGovernment services in other European countries. When a specific eGovernment service in member state A requires authentication, the regulation is meant to make it possible for a citizen of member state B to access this service with their member state B-issued electronic identification. The problem is that member states have chosen many different ways of issuing electronic identification. Another problem is the general perception that electronic identification has not been very successfully implemented or adopted by citizens or consumers. Rather than using government-issued identification on the internet, citizens feel more comfortable relying on their Facebook login or, in many cases, creating different logins for every site.
There have been some pushes towards solutions like OpenID on the internet, but OpenID often doesn't fulfil the requirements that a government would place on its own services. With online tax declarations, for instance, you want to be able to ensure that they are actually made by the person who should be making them. The same goes for some health care services.
But electronic identification is also something that we expect to apply to companies. The European Union is moving towards eProcurement[2], in which companies have to participate in public procurement over the internet, so we need verifiable ways of ensuring that the public authority which is procuring is in contact with the right entrepreneur.
The solutions often rely on certificates in some way, and therefore the Commission has also aimed to regulate what it calls ”trust service providers”. These trust service providers would be better known in common technical language as ”certificate authorities”. Many member states rely on what they call ”qualified certificate authorities”. In practice, the qualification just means that the member state recognises the qualified certificate as secure and reliable in a given transaction with the government. The rules for how to qualify certificates derive from a European law from 1999, and were never really used in all member states – for instance not in my country, Sweden.
Qualified TSPs have also suffered a number of failures which are undesirable from the point of view of good governance. The DigiNotar collapse[3] in the Netherlands was clearly inconvenient from the perspective of the government.
And so we had this proposal that tried to create interoperability between different member states' solutions for electronic identification, and to fix the problem of vulnerable CAs.
The regulation was proposed in two parts: the first covered electronic identification, and the second covered, more or less, CAs, or trust service providers[4]. Then there were about 20 articles – which for reference is many – covering various forms of qualified things that the Commission envisaged would be necessary in the future, digitally boosted Europe[5].
Electronic identification is a touchy subject in many member states. In some, like Ireland and Great Britain, government-issued ID cards are completely rejected by citizens every time they are proposed[6]. In others, like Germany and many central European countries, the constitution requires different parts of the government not to cross-run databases of citizens: effectively, every citizen or resident has a health care persona, a social services persona, an educational system persona, and so forth, because the idea is that if the government can collect too much information about every citizen in the same place, this could have very negative consequences for the citizen should the government start acting arbitrarily or against the interests of its citizens[7]. In yet other member states, like Sweden, Estonia or Finland, we have personal registration numbers that are unique for every individual, which helps the government cross-run databases when necessary. In Sweden, at least, this used not to be too easy to do, but with information technologies being deployed very quickly in all parts of society, it should now be a relatively trivial exercise to completely map any citizen in Sweden with respect to their interactions with any public service or authority.
The European Commission's proposal sidestepped most of these difficulties and was largely a roadmap for how one makes different types of electronic stamps, signatures and identification procedures that public authorities later have to consider ”truthful”: basically, a set of technical criteria for what is to be considered authentic and genuine in different member states.
This led to much confusion in the European Parliament. We are not a technical institution but a political one, and we cannot consider ourselves the best agents to make technical decisions about what is true and genuine and what is not. It is also a fact that the different member states use different systems for establishing what is true or genuine, so with the many different backgrounds of members of the European Parliament, we had problems seeing what the purpose of this file was, or why it was politically interesting.
But it turned out that the European Union has sponsored a lot of research showing why this file potentially has a large political impact.
So the first thing is that identity in general is a highly philosophical concept – who am I? What are we? What is Europe? Many people spend entire lifetimes pondering these issues, and most of us never reach any satisfactory answers.
After we understand that we don't have any good answers to the question of who we are, comes the second question: what is my identity in relation to the government? This is where the different member states have adopted very different approaches, and so different cultural backgrounds give many different answers.
It's a question that the Commission had hoped to avoid by introducing an interoperability framework for all the various member state solutions, so that everyone could keep their own solution while at the same time allowing their citizens to interact with the public services of other member states. However, the European Commission has also sponsored a rather large body of research in this field in recent years, and when I met representatives from the Future of Identity in the Information Society (FIDIS)[8] and Attribute-based Credentials for Trust (ABC4Trust)[9] projects, I was given to understand that the Commission had rather cautiously decided to discard most of the big investments it had itself made in figuring out how to make authentication of citizens work online in a secure and, primarily, privacy-friendly way.
The problem with governments is that we are forced to interact with them in a number of circumstances. We can't help providing lots of information about ourselves, our families, our wages, our housing, etc. to the tax office. The tax office could be said to legitimately need this information, but it is also a lot of information about us as persons which, if arbitrarily spread, could have negative consequences for us in our working lives, with our friends and family, or elsewhere. We generally expect confidentiality of some sort from our tax authorities. Similarly with health or dental care services – we more or less have to interact with these public services, at least until we are legal adults. Schools, social services, the job centre.
The government will normally run all these public services, and the general privacy-friendly idea is that, because it is now so easy to cross-run and cross-reference databases, the interactions need to be unlinkable. It should not be possible to find out that you, the citizen, on the same day ordered a chlamydia test on a public health service website, filled in your tax returns, and requested a building permit for a veranda extension on your summer house.
The idea of unlinkability is particularly strong in the German constitution, which mandates that public authorities do not cross-run or profile their own citizens based on the totality of their interactions with public authorities. And so – if you had an encounter with a law enforcement officer, but you also had to go to the hospital, neither the hospital nor the police should normally be allowed to find out that you visited the other. Unlinkability in this case stops one party which is very powerful from getting too much information, and therefore much more power, over another party which is very weak.
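One common way to sketch unlinkability technically is per-service pseudonyms. The snippet below is an illustrative sketch only – the secret, the identifiers and the service names are all made up – showing how an identity provider could derive a different but stable pseudonym for each (citizen, service) pair, so that two services cannot link their records about the same person by comparing identifiers:

```python
import hashlib
import hmac

# Hypothetical master secret, held only by the identity provider.
SECRET = b"idp-master-secret"

def pairwise_pseudonym(citizen_id: str, service_id: str) -> str:
    """Stable per-service pseudonym, unlinkable across services
    for anyone who does not hold the identity provider's secret."""
    message = f"{citizen_id}|{service_id}".encode()
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

health = pairwise_pseudonym("19870830-1234", "health.example")
tax = pairwise_pseudonym("19870830-1234", "tax.example")
assert health != tax  # the two services see unrelated identifiers
assert health == pairwise_pseudonym("19870830-1234", "health.example")  # stable over time
```

Note that this only hides the link from the services themselves: whoever holds the secret can still link everything, so where that key lives is a political question as much as a technical one.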
In Sweden we have many specialised laws for government registers which restrict the ability of a public authority to cross-run its databases with those of another public authority or service. However, the unique identifiers of all citizens make it both convenient and easy to do such a mash-up should one want to. So the idea of unlinkability exists in the law, but the databases of citizens' interactions with the government are not technically constructed in a way suitable for living up to the spirit of the law, as it were. And because public authorities apparently frequently sell data about citizens to private companies[10], it is always possible to aggregate or mash up the data through a third-party private actor.
But EU research projects had arrived at another insight: in order to reduce the size of databases – and therefore reduce the harm of security breaches or data leaks – and to protect the privacy of users and the confidentiality of their interactions, one could use something called ”anonymous authentication” or ”attribute-based credentials”. Here you would provide only the information necessary for a specific purpose when identifying yourself. If I needed to demonstrate that I am legally allowed to buy tobacco products, I would demonstrate that I was not born in 1995 or later, rather than demonstrating that I was born on 30 August 1987. The resulting data trail would be ”someone born before 1995 used this service” rather than ”Amelia Andersdotter, 1987-08-30, used this service”. In the first case it is relatively difficult, even after a data leak, to link the use of the service back to me as a person; in the second case, such a link inevitably arises.
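The tobacco example can be sketched as follows. This is a toy illustration, not real anonymous-credential cryptography (systems like Idemix or U-Prove use zero-knowledge proofs); the issuer key and field names are invented. The point is that the issuer sees the full birth year, but the token it hands out carries only the predicate the shop needs:

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret for the sketch.
ISSUER_KEY = b"issuer-signing-key"

def issue_predicate_token(birth_year: int) -> dict:
    """Certify only 'born before 1995', never the birthdate itself."""
    claim = {"born_before_1995": birth_year < 1995}
    payload = json.dumps(claim, sort_keys=True).encode()
    mac = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "mac": mac}

def verify(token: dict) -> bool:
    """A shop accepts the token only if it is genuine AND the predicate holds."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["mac"]) and token["claim"]["born_before_1995"]

token = issue_predicate_token(1987)
assert verify(token)  # old enough to buy tobacco
assert "1987" not in json.dumps(token["claim"])  # the claim omits the birthdate
```

Using a symmetric MAC here means the verifier would share the issuer's key – a deliberate simplification. Real attribute-based credentials also make different showings of the same credential unlinkable to each other, which this toy does not.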
To me, at that time – and this was October or November 2012 – it seemed counterintuitive that the Commission had disregarded its own research programmes, and that we furthermore were not considering the institutional effects of the law proposal before us. I am also very privacy-minded, and I believe that preservation of privacy is an essential aspect of maintaining a good power balance between individuals, groups, governments and companies. Individuals and groups of individuals need privacy in themselves, and for themselves.
So I wanted, politically, to advance the idea of unlinkability and attribute-based credentials. The problem was that I had this messy and seemingly very technical file that made little sense.
For those of you who are unfamiliar with the parliament's work: we are allowed to make any changes, however large, to a text proposed by the European Commission. But that requires us to know the nature of the changes we want to make. Work in the European Parliament often amounts to changing some semantic details in the proposal rather than overhauling its political and technical direction.
At the same time as I was working on this in the European Parliament, I was searching for information about the different systems in the member states. An Austrian colleague helped me find more information about the Austrian eID – it's not seen as a success because only 10% of all Austrians use it; there's no real service market around it; it's based on smartcards, I guess. In Sweden they had worked really hard for several years to put up a SAML2 federation [SAML being a generic standard for authenticating users in a system], which could replace other forms of e-authentication online. A friend of mine was upset about that, because in a SAML2 system the identity provider keeps track of which services the user interacts with, and so rather than the unlinkability I described above you have perfect linkability.
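The linkability objection can be made concrete with a few lines of code. In the sketch below – with entirely made-up names and data – the central identity provider in a SAML2-style federation necessarily observes every login, so its log is already a complete profile of each citizen's dealings with every connected service:

```python
from collections import defaultdict

# What the identity provider necessarily observes: (user, service, time).
auth_log = [
    ("anna", "health.example", "2013-09-26T08:01"),
    ("anna", "tax.example", "2013-09-26T08:14"),
    ("anna", "police.example", "2013-09-26T09:02"),
    ("bertil", "tax.example", "2013-09-26T08:30"),
]

# Group the log by user: one trivial pass reconstructs exactly the
# cross-service picture that unlinkability rules are meant to prevent.
profiles = defaultdict(list)
for user, service, timestamp in auth_log:
    profiles[user].append((service, timestamp))

assert [s for s, _ in profiles["anna"]] == [
    "health.example", "tax.example", "police.example"]
```

No database needs to be "cross-run" here; the profile falls out of the authentication log itself.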
I am also upset – I think the decision to use this particular standard in Sweden derives from complete idiocy and lack of attention. It is obvious that most citizens will not like there being an IT guy running a database of all their interactions with the government. Swedish municipalities and regions were also not happy with the government for pushing that kind of tracking of public interactions – municipalities and regions deal with citizens in their day-to-day affairs, so they need a system they trust, that citizens trust, and that makes citizens trust them.
Sweden had investigated this topic for three or four years before making the decision[11]. Nowhere in four years and thousands of pages of text do they envisage that HOW the authentication works may affect how it is perceived. Apparently the reason for this decision is two-fold: first, some tech guy runs a SAML2 system at a Swedish university. It works for him to manage students, teachers, etc., and so he assumes it will also run a nation state well. But a state, with all of its public services at every level of governance, is a very different place from a university. While I can relate to why, as a technical guy, you wouldn't think about things like that, it is completely mind-boggling to me why no one in the government thought about this either! That is really extremely worrying.
The universal identifier in Sweden, which I mentioned and which makes linkability between databases very easy, has been controversial for many years. A lot of people want it gone. So these tech guys requested that the universal identifier be kept out of the government e-authentication system, and succeeded. And then, when I asked them ”how could you mistake a government for a university?”, they said that they actually make tracking more difficult because the unique identifier isn't there. I woke up a few months later, early in the morning, and thought: well, they've actually just replaced the universal identifier with themselves. Either you have a number which allows you to connect databases easily with each other, or you have an IT guy who keeps track of all your databases.
In general the Swedish system has given me some big pains. Another time I woke up early in the morning because of this system was when I realised someone had told me we were setting up this nifty SAML thing because the military liked it. It dawned on me suddenly, three months later, that there are good reasons to question why the military, of all the institutions you normally find in a state, would want an easy way of tracing and making a database of all citizen interactions with all public institutions, all the time.
Some people I knew wanted to become part of this new tracking federation because they were upset with the tracking and wanted to find a way to hack the system and make it useless so that it would go away. In that particular case I had a minor existential crisis. The nature of decision making has been studied for a long time, and this group of people had made a classical trade-off between compromise and ethics, as described by the German sociologist Max Weber[12]. The tension between compromise and ethics arises in the process of reaching a decision: you have to reach a decision, but you have to do it with others, so you may have to compromise to get there. How much do you water down your ethics to reach the decision you have to make?
This group didn't want a bad tracker, so they wanted to become a good tracker. But what is a good tracker? Someone who can be trusted not to use all the highly personal information about how citizens do or must interact with governments for unpleasant things, who doesn't sell this information, and so forth. Also, if you have a big database, the government will generally have access to it whenever it wants. So choosing to be a ”good tracker” always means participating in the tracking – it's a compromise you make with your anti-tracking ethics to ensure that there is an option which is less bad than the others that may exist. But then again, if it's a bad system to run a government on, maybe one shouldn't compromise in that way. The ethical thing to do is not to participate in a tracking and tracing system, because ultimately it's the tracking and tracing themselves that are problematic, not which particular entity is doing it.
The other thing is that some parliamentarians in the Swedish national parliament had been very clear about wanting ”the same” system online and offline[13]. And so I thought: what does ”the same” mean in this case? I have a national ID card from Sweden, and most people I show it to will remember that my picture is very bad – it really is spectacularly bad – but not exactly how (many people have asked me to show it twice, for instance), or they extract the information they require and then forget it. This is because my ID card is normally read by humans. In commercial transactions, when I buy tobacco for instance, no information about me as such is actually stored. If you ask the shop attendants two hours later, chances are they will already have forgotten that anyone authenticated themselves for a tobacco purchase in their shop at all. So there's not really much tracing of the use of an ID card by a central authority. Electronically, it is much more difficult to ensure that no central agent traces every time authentication happens. Humans also learn to recognise each other after some time – I can go to my dentist and they recognise me by face. A computer might learn this if it is located in the same room as me, but if it's a server running the government health care service, and not even in the same city as me, the chances are slimmer.
So the systems online and offline would by definition not be ”the same” – but then which sameness does one want? The same in that the privacy of the individual is somehow protected and the general institutional power balance that has been carefully devised over many hundreds of years is preserved, or the same in that access should be possible under whatever conditions? I don't think the Swedish national parliamentarians had thought very deeply about what they were requesting, which is strange, because how you balance power and information in a society is a very political issue. This is exactly the type of thing we would normally expect politicians to think about very carefully. What should society be like? Who should have what power over whom, and when? How can that power be exercised? How do we ensure that abuses of power can be resolved – that is, how do we solve the conflicts that arise when someone with power abuses it with respect to someone without power?
The Swedish example is a beautiful story of how technology for public infrastructure was seen as some magical thingamajig that could not be anything other than positive. It's a story of technical naivety with respect to politics, and political naivety with respect to technology. Nowhere in the entire process did anyone consider that a citizen's relationship with their public services and authorities is quite fundamental to the workings of the society we find ourselves in – but they really should have. Political people especially need to think about these things.
But going back to the European level: I had decided to at least try to remedy these technical and political mistakes from Sweden, at least partially. We can technically make whatever changes we want in a political file, but it's rare that the Parliament makes big changes. I was considering ways in which I could accomplish, ethically and politically, what I wanted to do without changing too much, but the Commission's text was actually so far from doing anything at all that I ended up tabling 141 amendments on a file with only 42 articles and 51 recitals. That's quite a lot, but because most of us in the parliament recognise a bad proposal when we see it – even if we may not immediately, or ever, know how to fix it – I have been tolerated.
The thing is, it's quite obvious why we don't want a random tech maintenance person somewhere to be able to casually look up when or why we've been in contact with health care, for instance. Or why we don't want all the information about what and how we do at school to be sold to advertisers so that they can more easily target people at our universities. But the devil is in the details. Because we haven't actually voted on this yet (but we're voting soon), I'm in a constant state of concern that by now we have understood the problems well – politically, ethically and systemically – but will not be able to write the legal text in a technically correct way. If you make a given set of moral and political choices, liability, risk, duties and obligations need to be allocated to different parts of the system in specific ways, and this is... difficult. It's not at all obvious how one would do this.
But it's something that we, the legislators, will definitely have to do if we're going to put public services and all these systems online. That is why I say we have to regulate the internet. It's an old discussion, of course. Already in the late 1990s there was an argument that the architecture needs to be regulated, because the architecture ultimately decides what we can or cannot do, and what we must and mustn't do[14]. Some people back then, and even now, argued that technology changes too quickly to be regulated, so it makes no sense to regulate. I think this latter argument is a bit daft – copyright law can be said to have regulated the internet since the internet emerged. It took some time to get the case law and court cases, but the regulation was always there. The same with banking – a bank does not become unregulated only because it has operations online. It has strict regulations on liabilities and risks in its activities regardless of how it provides its services. We haven't yet seen a lot of regulation of the technical architecture itself – the regulation we have in place now describes the duties that fall on the human agents behind the architecture, or who operate it – but as we've seen over this last summer, those human agents don't always act very predictably or in a trustworthy way.
And so, finally: Europe is going through a big ordeal at this time. The legislation I have just described is important because it could impose a privacy-by-design obligation on some technical systems, and also describe what such an obligation could be: unlinkable transactions based on anonymous authentication, or attribute-based credentials.
But we also have the large discussions on the general data protection regulation[15]. That regulation is very fundamental to how we, as a continent, will make our future. It sets the framework for market operators, companies, governments – everyone – for how we deal with data protection and privacy. What we've seen in those discussions is very heavy lobbying, especially American lobbying, and especially against strong privacy protection. But we also see governments that are very unwilling to set a direction towards strong privacy-protecting legal frameworks[16]. It's worthwhile to look up more information on the general data protection regulation, because optimally we want it to push many things in the direction of more secure and more privacy-friendly technologies[17, 18].
How to deal with privacy and data protection technically is, I understand, not always a trivial problem, but these are mostly very interesting problems. I hope that many of you here today go out and become innovators and entrepreneurs who have the legal framework you need to make the most of such innovation and markets. I want to thank you for your attention, and I hope that this was at least somewhat helpful in understanding a political view of the challenges around regulating and legislating on the boundary between politics and technology.
[1] http://ec.europa.eu/dgs/connect/en/content/electronic-identification-fol...
[2] http://ec.europa.eu/internal_market/publicprocurement/e-procurement/inde...
[3] See for instance http://www.esecurityplanet.com/browser-security/diginotar-when-trust-goe... or do an internet search. It received a great deal of attention when it happened.
[4] Shameless self-promotion, but it's a good overview anyway: https://ameliaandersdotter.eu/dossiers/eid
[5] I liked "Burdens of Proof" by Jean-François Blanchette. A perfectly sarcastic yet very informative overview of how technology policy and the technologies themselves fail.
[6] http://www.no2id.net/ for instance. Proposals to create national IDs have been stopped many times in both jurisdictions. Many essays have been written on this topic.
[7] A decent amount of German language information: https://www.datenschutzzentrum.de
[10] See for instance this article: http://www.kristdemokraterna.se/Media/Nyhetsarkiv/Kristdemokrater-vill-g... But there are longer texts that to my knowledge aren't published online that connect it back to Avgiftsförordning 1992 with earlier legislation and the Swedish principle of transparency.
[11] http://www.government.se/sb/d/12840/a/158256
[12] Wikipedia summary sufficient to understand context, I thought: https://en.wikipedia.org/wiki/Politics_as_a_Vocation
[13] http://www.riksdagen.se/sv/Dokument-Lagar/Forslag/Motioner/E-legitimatio...
[14] Lawrence Lessig, Code v2: http://www.codev2.cc/
[15] http://ec.europa.eu/justice/data-protection/document/review2012/com_2012...
[16] See, in Swedish: https://dataskydd.net/sammanfattningar-regeringen/
[18] http://www.respect-my-privacy.eu
Questions
- What about privacy and security problems with smart meters? Are they addressed?
Not really. Smart meters are a solution looking for a problem in the vast majority of member states, and they seem to create more problems than they solve wherever they go. There are, however, no easy remedies. The infiltration of standardisation bodies for electric grids seems to have begun more than 20 years ago, and it is by now a consolidated view that smart meters, despite their flaws, solve some problem: for instance, that of teenagers wanting to find out, in retrospect, which electrical appliances have been used in a household. In Sweden, for instance, the security agency now has access to communications to and from smart meters, to ensure that there is sufficient information to investigate any attacks against the grid over the internet after they've happened. That it wasn't a good idea to put electricity networks on the internet in the first place seems to strike nobody. The original problem – creating variable demand in a world where the grid is filled with renewable energy – is not solved: smart meters haven't accomplished any changes to that effect, and what we're left with is a very messy technology that can fail in so many ways, from both privacy and security perspectives, that it's doubtful this was a clever path to travel down in the first place. It is especially clear with this fundamentally important infrastructure that smart technologies require smart policies. Electricity is vital to our economy and our societies, and it's stupid to gamble with it in this way.
- Is it really necessary to regulate the architecture though? What about innovation?
There are plenty of big and unresolved issues inside every possible type of architectural regulation. One mistake commonly made in Europe is to assume that all architectural choices are unregulated in the United States: on the contrary, they appear to have a very deliberate industrial agenda that they also follow up on over time. The electronic identification regulation is an extremely sad example of how Europe isn't doing that at all. Similarly for the data protection regulation: we have steered our research, education and industry down a data protection-friendly path for many years, and then suddenly we've decided, in loads of legislation, that we don't actually want that type of industrial development after all. This is really harmful to human rights and to industry.