Okay. So what we want to do is design a voting system, suitable for a large national election, that resists clandestine tampering by an attacker with serious resources (possibly the same people running the election), and at the same time is transparent enough that the voters have confidence in the system. Let's get started! :D
Crowdsourced verification
The core element of the system I'm proposing is a live printed display of cast votes. Each printed vote would show who it was for, plus a bit of extra data that lets the person who cast the ballot recognize it as theirs (a random number or picture shown to them after their ballot is cast would work), but would otherwise be completely anonymous. The votes are printed on the top half of a long roll of paper, which slides by after votes are printed on it. The top half of the paper is under glass, so that people can see it but not mess with it; the bottom half is available for people to write on. They would be allowed to write anything they want - a signature, a message, a random drawing, whatever. People would be encouraged to take pictures of the ballot roll at this stage, and take the pictures home with them. (In poorer countries, where people wouldn't necessarily have cameras, a few digital cameras and printers could be provided on site.)
After the election, the vote totals would be counted by other means. The printed record, created solely for recounts and verification, would be made public - either by posting high resolution images of it online, in a country where the vast majority of citizens have internet access, or by posting copies in public places. We now have a very strong method of verification. No part of the ballot record could be modified after the election, because people could be holding photographs of any part of it, allowing people to easily prove electoral fraud. If the computer numbers are questioned, recounts are possible, because a full record of all votes cast is available. Even if the numbers aren't questioned, I fully expect suspicious people to recount some ballots in their own region anyway.
Furthermore, the system so far still preserves the secrecy of individual ballots, assuming closed voting booths. An attacker that wants to target people voting for a certain candidate could monitor the printed votes as they go by, but this is both obvious and imprecise, and can be countered by adding an unpredictable delay of a few minutes before the vote is actually printed. An attacker could also hide tiny cameras in all the voting booths. This is harder to counter, but can be mitigated by making the voting booths closed on all sides, and as spare as possible on the inside. Overall, though, this provides a very robust method of verification.
Voting Machines
With a strong verification system, you could really use any old voting machines for the electronic vote count. Even a Diebold or PES machine might be good enough. On the other hand, having just one layer of security is no fun - let's go ahead and make bulletproof voting machines too.
The TPM (Trusted Platform Module) is usually understood to be a device to protect copyrights or some nonsense, by preventing tampering with some DRM scheme. This is largely because of a lack of imagination on all sides. There are several very general security problems in computing which a TPM can address. Malicious hardware added by an enemy is difficult to detect, but a TPM can remotely attest to the hardware in a machine. It's nigh-impossible to prevent people from running modified versions of software, so if somebody joins your network with a malicious version of your code, you either have to design a Byzantine fault-tolerant protocol (which is difficult and can be computationally expensive), or give up. A TPM, on the other hand, can prevent unsigned software from running, and remotely attest to the software's validity. A TPM can also provide secure storage, through hardware-based encryption. Yes, all of these things can be used to restrict what you can do with a computer, but that's kind of the point.
Let's look at what this gives you in a voting machine.
Initially, you have a set of private keys, which are kept separate from each other and secured individually. Every device used in an election would have its built-in public key signed by all these keys, and hold those signatures. When the machines are networked together for an election, any device can recognize and verify any other device, by requesting its public key and the signatures, along with a remote attestation. (Devices aren't just restricted to the voting machines themselves - you also need to authenticate the printers, for example, or an attacker could use modified ones that print whatever.) In this way you can verify that every device used in an election isn't tampered with in various ways.
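The key ceremony described above can be sketched concretely. This is a toy model, not a real implementation: I'm using HMAC as a stand-in for actual public-key signatures (a real system would use RSA or ECDSA keys held in the TPM, and the root keys would never sit together on one machine), and all the names here are invented.

```python
import hashlib, hmac, os

# Toy stand-in for real signatures: each root authority holds a secret and
# "signs" a device's public key by HMAC-ing it. In a real deployment these
# would be asymmetric signatures, verifiable without the signing secret.

def sign(authority_secret: bytes, device_pubkey: bytes) -> bytes:
    return hmac.new(authority_secret, device_pubkey, hashlib.sha256).digest()

def verify(authority_secret: bytes, device_pubkey: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(authority_secret, device_pubkey), signature)

# Election setup: several independently-held root keys each sign every device.
authorities = [os.urandom(32) for _ in range(3)]
device_pubkey = os.urandom(32)   # stand-in for the device's built-in TPM key
signatures = [sign(a, device_pubkey) for a in authorities]

# At the polling place, a peer accepts a device only if *all* roots vouch for
# it; a swapped-in public key fails every check.
trusted = all(verify(a, device_pubkey, s) for a, s in zip(authorities, signatures))
assert trusted
```

The point of requiring every root key's signature is that compromising any one key-holder isn't enough to forge a device.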
Each vote would be recorded, along with the machine it's taken on and the time of day, in a secure encrypted file, and distributed to all the other machines in the polling place. Recording extra data allows for data analysis if the result of the election is called into question (this machine only recorded votes for this candidate? this machine was active during times when the polling place should have been closed? HMM); distributing the results to other machines provides insurance in case of hardware failure. A gossip protocol could be used here, so that putting two machines on the same network would automatically have them exchange votes. This could be applied over a network to automatically send votes to a central location, or if there's no network connectivity, the votes could be sent on a burned CD, or a flash drive, or whatever. Having the votes signed by the individual machines means that transmission over an untrusted medium isn't a problem, as long as each machine signs its batch of votes as a whole rather than signing each vote individually. (Would it be problematic to have every machine store potentially every vote for the whole election? Assume 32 bits for a timestamp, 32 bits for a unique machine ID, and 32 bits for the vote itself - twelve bytes to record a vote. A few hundred million votes would fit comfortably on a large flash drive, even if you add in cryptographic signatures of the votes from each machine.)
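To make the size estimate concrete, here's what the twelve-byte record might look like, packed with Python's struct module. The field layout follows the assumptions above; the exact format string and names are mine.

```python
import struct, time

# One vote record: 32-bit timestamp, 32-bit machine ID, 32-bit vote code.
# "!" = network byte order, so records are portable between machines.
VOTE_FORMAT = "!III"

def pack_vote(timestamp: int, machine_id: int, vote: int) -> bytes:
    return struct.pack(VOTE_FORMAT, timestamp, machine_id, vote)

def unpack_vote(record: bytes):
    return struct.unpack(VOTE_FORMAT, record)

record = pack_vote(int(time.time()), machine_id=4217, vote=2)
assert len(record) == 12

# Back-of-the-envelope storage check: 300 million votes at 12 bytes each.
total_bytes = 300_000_000 * len(record)
print(total_bytes / 2**30)   # roughly 3.4 GiB - fits on a large flash drive
```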
Gossip protocols are neat. Let's designate one machine as the master machine; it will collect the final tally. (I say one, but there's no reason it can't be a set of machines, for fault tolerance.) Once it receives all the votes from a given machine through the gossip protocol, it can send back a notification through the same gossip protocol saying "the master machine has received all these votes from X". When X receives that message, it knows that all the votes recorded on it are included in the final tally, and can turn on a light or something to let poll workers know. (I'm assuming, obviously, that all messages are signed and encrypted; it would be extremely silly for them not to be.)
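Here's a minimal in-memory sketch of that acknowledgment flow. It glosses over signing, encryption, and actual networking, and it assumes the master can treat "has seen votes from X" as "has all votes from X" - a real protocol would track per-vote completeness. The class and method names are invented.

```python
# Machines exchange vote sets pairwise; once the master has machine X's votes,
# an ack for X propagates back through the same gossip exchanges.

class Machine:
    def __init__(self, machine_id, is_master=False):
        self.machine_id = machine_id
        self.is_master = is_master
        self.votes = set()   # (origin_machine_id, vote_payload) pairs
        self.acked = set()   # origin IDs the master has confirmed receiving

    def record(self, payload):
        self.votes.add((self.machine_id, payload))

    def gossip_with(self, other):
        # Both sides end up with the union of votes.
        merged = self.votes | other.votes
        self.votes, other.votes = set(merged), set(merged)
        # If one side is the master, it acknowledges every origin it now holds.
        if self.is_master or other.is_master:
            master = self if self.is_master else other
            master.acked.update(origin for origin, _ in merged)
        # Acks spread the same way votes do.
        acks = self.acked | other.acked
        self.acked, other.acked = set(acks), set(acks)

a, b = Machine(1), Machine(2)
master = Machine(0, is_master=True)
a.record("candidate A"); b.record("candidate B")
a.gossip_with(b)        # a and b exchange votes
b.gossip_with(master)   # master now holds everything, acks both origins
a.gossip_with(b)        # ack flows back to a; its light can turn on
assert 1 in a.acked and 2 in a.acked
```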
Weaknesses
I think this system is pretty good, but no system is perfect. Even though we have two parallel counts going on which support each other - the documented paper count, and the signed electronic count - I'll count something as a vulnerability even if it could only affect either of these in isolation. After all, chaining vulnerabilities together is easy.
This sort of system is only really useful in places that actually profess to be democratic - if some despot insists on staging an election, but still wants to fake the results, all this system will do is prove what everybody already knows. It'd be nice to put together a system that generates accurate vote totals in the face of open electoral fraud, but I'm far from convinced that that's even possible. Again, if the attacker is sufficiently motivated and doesn't care about appearing democratic, they could just post armed guards inside every voting booth.
Letting people write on the running sheet of votes is important, since it makes the whole sheet significantly harder to later reprint and fake, but it also introduces a vulnerability: vandals could simply destroy the printout, in which case we'd be dependent on the electronic tally. More generally, if somebody wants to wreck the polling place, a strategically-placed bomb could halt voting in a given location. This issue is better addressed by law enforcement.
The electronic security of this system depends on how much you trust your hardware manufacturer, especially the people making the TPM chips. This can be mitigated by purchasing from different manufacturers, making sure that the distribution of hardware between machines is random, and then analyzing the final results for correlations between hardware components and voting patterns. This type of analysis is well-understood in data mining.
It's possible to correlate tapes from security cameras near or inside the polling place with the public vote record, and get a pretty good (but inexact) idea of who voted for whom. Um, this one is tricky. No idea how to mitigate this, if we want to maintain a public aggregate ballot.
Thursday, June 25, 2009
Thursday, June 18, 2009
Exporting Democracy
Democracy is generally a good thing, when compared to the alternative. However, the recent elections in Iran have vividly demonstrated a problem with democracy in tightly controlled countries - when the people in power are the ones counting the votes, it's impossible for the other side to trust the results. Note that I'm not even contesting the validity of Iran's election results (though there are certainly reasons to). The problem is that even the appearance of impropriety can cause a national crisis, the likes of which we are now seeing.
Here in America, even though we don't get this problem so much since votes are counted by a more or less impartial process, we've tried to solve this problem with technology. The result is electronic voting machines (EVMs) which are, frankly, an embarrassment. When word got out about how atrociously bad the current generation of EVMs are, how the manufacturers didn't even try to apply basic security protocols, we should have buried them in a really deep hole, and never spoken of them again, just to try to save face. (Instead, inexplicably, we keep using them in elections.)
I don't intend to talk about that today, though. Instead, I'm going to talk about what EVMs could have been.
What if we had a perfect voting machine? One that could be given to a country which is just starting out with democracy, such that even if we assume hostile individuals at every stage of the election, the machines could still come out with a perfect tally of people's votes? It's difficult to guarantee this, since in the worst case there could be armed guards forcing each person to vote a certain way. However, it's within the realm of possibility to force election fraud out into the open. If you can guarantee that a person whose vote has been somehow tampered with knows about it immediately (as in the armed guard scenario), then it makes large-scale election fraud both transparent and uneconomical.
Imagine how this could change the dynamics of elections. If the results of an election were both incontrovertible and verifiable, it would be the end of sham democracies, where those in power change the results of elections to suit their needs. We could enable truly democratic elections in places where it would be unthinkable today. We like our democracy, and we want to share it with the world, and that's all well and good. Exporting democracy by force is fine, if you happen to have the mind of a six-year-old. Instead, if we could provide the tools necessary to ensure democratic elections, we could change the world without firing a single shot.
This is great, as long as you can posit the existence of a perfect voting machine. For next week's post, I'm going to talk about the design of such a machine, along with likely attacks against such a system, and ways to address the possible attacks.
Friday, June 12, 2009
Random idea post
Here is a thing that I think would be neat. This is just a braindump, and not a well-thought-out post, so read it as such. I am rambling but I am going somewhere with it.
Filesystem images are really useful for some stuff, but there are a lot of things that suck about them. First, it should be possible to use them as a regular user - mounting to a loopback device requires root access. Sure, you could allow suid access to mount, but then users could mount anything anywhere and basically it's a massive security hole. I'm talking about being able to mount an image in a controlled way, only visible to the current user/process/some other level of control, so that it's usable in the same way but still secure.
This could either happen in kernel space, or in userspace. Doing it in kernel space, and integrating with the current UnionFS code, would probably be a better solution overall. However, doing stuff in the kernel always has some complications, since upgrading the kernel is a pain for a lot of users (that reminds me, 2.6.30 is out!), and it doesn't really help people on other operating systems. The other way to do it would be to write wrappers around the existing filesystem API, and intercept the necessary commands to pretend that there's a filesystem mounted, all in userspace. This is extremely possible, but so much code depends explicitly on the existing filesystem API that I don't know how feasible it'd be for any nontrivial projects.
Another thing that'd be cool is dynamic sizing. What I'm basically imagining here is a tar (non-UNIX people: pretend I just said zip) file, but with the emphasis on being an efficient writable image. Log-structured filesystems are a good starting point here, since you only need to be able to append (easy) and make fixed-size updates to previous locations in the file (also really easy). The problem you run into - one that log-structured filesystems don't offer a solution for - is fragmentation. At some point you have to repack the archive to free up space that's used by deleted files, and if we want to optimize for performance, then that just sucks. Note to self: look at how databases handle this, they must run into similar problems.
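A minimal sketch of the append-only idea, including the repack step. The record format (a four-byte length prefix per payload) and the class are invented for illustration; a real archive would also need a durable index and crash recovery.

```python
import struct

# Every write appends a (length, payload) record; updates and deletes just
# make old bytes garbage, so free space accumulates until a repack.

class LogArchive:
    def __init__(self):
        self.log = bytearray()
        self.index = {}   # name -> offset of the live record

    def write(self, name: str, data: bytes):
        offset = len(self.log)
        self.log += struct.pack("!I", len(data)) + data
        self.index[name] = offset

    def read(self, name: str) -> bytes:
        offset = self.index[name]
        (length,) = struct.unpack_from("!I", self.log, offset)
        return bytes(self.log[offset + 4 : offset + 4 + length])

    def delete(self, name: str):
        del self.index[name]   # the old bytes stay behind as garbage

    def repack(self):
        # Rewrite only live records - this is the compaction step that
        # databases and log-structured filesystems have to schedule carefully.
        live = {name: self.read(name) for name in self.index}
        self.log, self.index = bytearray(), {}
        for name, data in live.items():
            self.write(name, data)

arc = LogArchive()
arc.write("a", b"hello")
arc.write("a", b"hello again")   # update = append; old record is now garbage
arc.write("b", b"x")
arc.delete("b")                  # delete leaves its bytes behind too
before = len(arc.log)
arc.repack()
assert arc.read("a") == b"hello again" and len(arc.log) < before
```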
Yet another thing that'd be cool is cheap copy-on-write clones. The more obvious way to do this is in the archive itself - make it possible to have multiple concurrent clones in a single filesystem image, and select one when you mount it. More interestingly, you could also do this by falling back on the underlying filesystem. Btrfs has an alternate mode of cp (cp --reflink) which creates a copy-on-write "copy" of a file. If you used that to clone the archive, and were careful to keep it mostly in filesystem-sized blocks, you could have an efficient solution that lets you treat clones as separate files, keeps the archives simple, and is generally pretty neat.
Finally, filesystem-level transactions. Seriously. Why does every filesystem on earth not already have this? I should be able to specify that an arbitrary series of operations on a set of files should be treated as atomic. The current best way to simulate this is to create a new directory containing a copy of all the files for the transaction, do the updates there, and then (atomically) rename the directories. I mean, seriously, what the hell?
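The directory-swap workaround can be sketched in a few lines. Caveats apply: os.rename is only atomic on POSIX within a single filesystem, and the two-rename swap below still has a brief window where the target path doesn't exist - which is exactly why real filesystem-level transactions would be nicer. The function name is mine.

```python
import os, shutil, tempfile

def transactional_update(dirpath, update_fn):
    work, old = dirpath + ".txn", dirpath + ".old"
    shutil.copytree(dirpath, work)   # copy everything into a scratch directory
    try:
        update_fn(work)              # all edits happen on the copy
    except Exception:
        shutil.rmtree(work)          # on failure the original is untouched
        raise
    os.rename(dirpath, old)          # swap the updated copy into place
    os.rename(work, dirpath)
    shutil.rmtree(old)

# Demo: update a file "transactionally".
base = tempfile.mkdtemp()
data = os.path.join(base, "data")
os.mkdir(data)
with open(os.path.join(data, "f"), "w") as f:
    f.write("v1")

def bump(d):
    with open(os.path.join(d, "f"), "w") as f:
        f.write("v2")

transactional_update(data, bump)
with open(os.path.join(data, "f")) as f:
    print(f.read())   # v2
```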
I'm not just saying this because it'd be cool (even though it would be), I have an application in mind. Let's say you start out with a read-only image of your root filesystem. Take the program you want to run, and stick it in a filesystem archive as described above. To run it, just mount it along with all of its dependencies, and go. All the program state can be kept inside the program's own archive, because we have cheap copy-on-write clones, so doing that is basically free. Now, let's say you want to transfer the (running) program to a different machine. Take it down (the data on disk can be guaranteed consistent because we have transactions), send the program and all its dependencies to the new machine (this can be done efficiently - if a dependency is read-only, and the other end has an identical copy, then it can be skipped, and this can be made the common case easily enough), and restart it. Add in some magic to keep existing file handles working, and we've got cheap process migration, across heterogeneous systems, mostly in userspace.
Cool.
Saturday, June 6, 2009
Evolution and Morality
Is there a universal standard for moral conduct?
The trouble with morality is that most people will answer that question: "Yes, and my standard of moral conduct is it." And this would be all well and good, except that people disagree as to what that universal, unassailable moral standard actually is. (I could draw examples from any two religious texts of conflicting moral standards if I wanted to make this post a lot more inflammatory than it is.) Moral codes usually overlap to the point that we can gloss over the differences, but that doesn't mean that they aren't there, and it's troubling to think that morality can be so arbitrary.
If I find myself in the market for a set of moral precepts, what should I look for? I don't just want some rules that were set out arbitrarily; I need something a little more reliable than that. Something that can be derived from first principles would be ideal. Unfortunately, as far as I know this has never been done. The closest thing I can think of is Kant's Categorical Imperative: "Act only according to that maxim whereby you can at the same time will that it should become a universal law." As moral principles go, it's a decent one, but it's not especially helpful when comparing moral systems. It tries too hard to justify what you already believe. If you truly, in your heart, believe that witches must be stoned to death, and would feel the same way even if you were found to be a witch, then the Categorical Imperative insists that you stone that witch to death.
But I digress. Can we find a system of ethics that depends neither on unjustified commandments, nor on preexisting moral beliefs?
Evolutionary Ethics
Evolutionary ethics and group selection provide an interesting framework for looking at morality. The theory here is that certain types of ethical systems give societies an advantage over others. Thus, behaviors which may not help the individual, such as altruism, can still arise through evolution, by sustaining the society the individual lives in. And, conversely, unethical behavior is considered such because it harms society in some way. (Doesn't this presuppose that evolution happened? Well, yes. Evolution is obvious enough by now that no honest person can deny it.)
This would explain a lot of features of moral systems. Killing others is generally considered bad, because it obviously weakens society. Stealing from others is generally considered bad, because having property rights enforced allows for commerce, which strengthens society. Moral systems often also have an element of self-preservation built in, which makes a lot of sense in an evolutionary framework. Evil people and heathens are those who don't follow the system; eliminating the evil people in a society strengthens the moral system itself.
This also offers up at least three possible explanations of the large degree of overlap between moral systems. First, there could be a common ancestor of moral systems which was gradually spread all over the world. Second, there could be convergent evolution at work, and different moral systems could eventually end up with the same, or very similar, core values. Third, the fact that very different moral systems are labeled "evil" could imply that systems that were radically different were stomped out by their neighbors, once a dominant type of system emerged. Most likely, all three of these have played some role.
The Role of Law
These days there's a degree of mixing of culture which is unprecedented in the history of humanity, and this has serious implications for moral systems. Back when you could reasonably say that everybody in a given tribe/city/fiefdom/whatever followed the same moral strictures, things worked more or less smoothly. These days, we can make no such assumptions; how can we enforce a basic standard of morality?
I theorize that the law, whatever it may be at the time, forms a sort of meta-morality at the intersection of whatever moral systems are dominant at a given time. This can only work because most systems of morals agree on almost all of the core tenets of morality: don't kill, don't steal, don't be a jerk, etc, etc. There are various benefits to having an established moral system, but many of the benefits require everybody in a region to follow the system, and law provides just such a regionally-bound framework.
The law, then, isn't just a desirable thing - a strong, and absolutely secular, legal system is what makes an extremely diverse society like ours function. It is not, however, a substitute for a moral system, for two reasons. First, the law is made with plenty of input from individuals who seek to alter it to their own benefit. While it would be nice to have a system of laws which corresponds exactly to morals, such a thing is simply not possible. Second, laws cannot evolve over time in the same way that current moral systems did. The law, rather than being a complete moral theory, is only useful as a bridge between the moral beliefs of individuals.
Unfortunately, all this is fun to think about, but it still leaves me in the dark as to which ethical precepts to stick to. Ah, well, maybe I'll have better luck some other time. The default course of action, of course, is to just follow what everybody around me is doing, and while this line of thinking hasn't led to anything more useful than that, it has at least given some kind of justification to the default.
Friday, June 5, 2009
Placeholder
I have two ideas for posts that are fighting it out in my head! One of them is the clear loser, which is unfortunate, as that was the one I had started to write. Real post tomorrow!