Saturday, August 30, 2008

Cheat a cash register, cheat a voting machine?

The New York Times reports on "zapper" technology which allows business owners to change the records in the cash register to reduce the taxes they owe. This is the modern equivalent of not closing the cash drawer between transactions, and then taking the cash out at the end of the day to cheat the tax man (woman?).

No great surprise there - when you make the incentive big enough, people will find a way to work around the system. And when it's really big, systems will be developed - as the article describes, businesses can buy the software, and don't have to do the dirty work themselves.

David Jefferson wrote in a private email (quoted with permission): "if you read this article [and] substitute the word 'DRE' for 'cash register' and 'votes' instead of 'money' for what is stolen, you will have a nearly perfect explanation of the danger of malicious code injection in voting systems, complete with falsification of the audit trail to fool the auditors. If it can happen in cash registers, it can happen in DREs."

I believe there are certification requirements for cash registers, and I'd guess that they're stricter than those for voting machines. [I make that guess not because I think cash registers have strong certification, but because I know that voting system certifications are extremely weak.]

When I talk to elected office holders and election officials, they sometimes doubt that it's technically possible to modify software to change votes - the zapper story is concrete, non-theoretical proof that software in an embedded system can be tampered with for real-world gain, complete with a falsified audit trail.

Friday, August 22, 2008

Signed code isn't always enough

Computer security specialists frequently point to digitally signing software as a way to prevent an attacker from replacing software with a malicious version. (Of course, the signatures themselves are of no value unless they're checked - with a chain of custody starting as far back as you can, which is what some of the Microsoft Trusted Computing stuff is about.) The lack of digital signatures on software in voting machines is frequently (and accurately) listed as one piece of evidence that the voting systems are insecure.

But more importantly, the signature is only of value if the bad guy can't create their own signature that looks valid. And it appears possible that something like that may have happened with Red Hat Fedora. Red Hat announced that "some Fedora servers were illegally accessed... One of the compromised Fedora servers was a system used for signing Fedora packages. ... we have high confidence that the intruder was not able to capture the passphrase used to secure the Fedora package signing key."

They then go on to note that they're replacing the signing key out of an abundance of caution, and everyone will have to update their systems to understand the new key. But it's very hard to know for sure whether the signing key was used during the compromise period - bad guys are very good at covering their tracks.

The bottom line is that code signing just shifts the weak spot for attackers - instead of just trying to change the code on the server before it gets downloaded, they focus on accessing the signing key. And the real safeguard isn't the length of the signing key (which is presumably long enough to prevent brute-force attacks), but rather the quality of the passphrase used to unlock the signing key, the set of people (or systems) that have the signing key, and the safeguards around changing the key.
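To make the chain concrete, here's a minimal sketch of the digest-checking half of package verification. It's a simplification: in a real package manager the published digest would itself be covered by an asymmetric signature (e.g. GPG over the repository metadata), and the function name and usage here are my own illustration, not any distribution's actual API.

```python
import hashlib

def verify_package(data: bytes, expected_digest: str) -> bool:
    """Check downloaded package bytes against a published SHA-256 digest.

    This is only the last link in the chain: it's worthless unless the
    digest itself arrived under a valid signature from a trusted key.
    """
    actual = hashlib.sha256(data).hexdigest()
    return actual == expected_digest
```

Note what this sketch makes obvious: if an attacker controls the signing key, they can sign a digest for their own malicious package, and every check like this one will pass. The verification math is fine - the weak spot is custody of the key.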

We should keep doing code signing, but as with all security measures, recognize that it's a defensive measure, not a panacea.

Monday, August 18, 2008

When is speeding better than voting?

I couldn't resist - the Washington Post reports that an older couple was cited for doing 100 mph on their street - which is highly improbable given how winding the street is and the type of car they drive (a Toyota Echo, which has a 0-60 rating measured in hours). Clearly the ticket shouldn't have been issued - the Post writes that "The speed camera system is designed to catch its own mistakes. When a glitch occurs, the device warns the reviewers by citing a weird speed to get their attention, such as 0 mph or 100 mph. The Brennans' speed should have been the tip-off to toss the ticket, but it got through the review."

All's well that ends well. So what does this have to do with voting? Well, all-electronic voting machines don't have anything to detect glitches, as we've regularly seen. And unlike someone driving 100 on a winding neighborhood street, which can clearly be ruled out, it's pretty much impossible for a paperless voting system to detect an "unexpected" result and throw it out.

Additionally, the cameras (I believe) rapidly snap a couple of pictures. So by examining the pictures and their timestamps, it should be possible to come up with a more reasonable speed. Having the camera digitally sign the images together with a trustworthy timestamp would be even better.
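The arithmetic for that cross-check is trivial. Here's a sketch, with hypothetical numbers (I don't know the actual frame interval these cameras use):

```python
def speed_mph(distance_feet: float, interval_seconds: float) -> float:
    """Estimate speed from two timestamped camera frames.

    distance_feet: how far the car moved between the two photos
    interval_seconds: time between the frames' timestamps
    """
    feet_per_second = distance_feet / interval_seconds
    return feet_per_second * 3600 / 5280  # convert ft/s to miles/hour
```

With frames half a second apart, a car that moved 22 feet between photos was doing 30 mph; a genuine 100 mph reading would require roughly 73 feet between frames - easy to rule out by eye on a winding residential street.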

All of which makes me wonder - I'm guessing these cameras are networked into a central site, probably connected via the Internet. How resilient are the cameras to hacking (i.e., to someone breaking in and modifying, erasing, or inserting images)? The cameras seem more likely to be unprotected than the servers where the images are uploaded for processing. They're probably not the most important systems out there on the Internet (assuming they are, in fact, connected that way), but they're an attractive target to someone who doesn't want a speeding ticket....

Saturday, August 16, 2008

When is swimming better than voting?

Simple - when there's a close race, Olympic swimming has an audit trail, in the form of videos that back up the sensors used to detect the winner of a race. As Michael Phelps learned, one one-hundredth of a second is close enough.

By contrast, in a few months a large fraction of Americans will vote on systems that get less testing than the Olympic timing system, and have no independent way of judging the winner.

Friday, August 08, 2008

Nationwide biometric databases - a good idea? Not!

Haaretz (arguably Israel's most influential newspaper) argues in an editorial that the Interior Ministry's proposal for a nationwide biometric database is a good idea. Aspects of the argument remind me of Scott McNealy's famous quote "You have zero privacy anyway. Get over it."

But perhaps the scariest part of the Interior Ministry's proposal (and the Haaretz editorial) is a seeming complete ignorance of some of the other downsides of such a database. For example:
  • What happens if someone steals your biometric data from the database, and is able to use a "replay" attack to make it appear that you're the one being authenticated?
  • What happens if someone replaces the biometric information of a bad guy in the database with an innocent victim? The bad guy will then go free ("it couldn't be him, since the biometrics don't match"), and the victim will have a hard time being vindicated ("the crime scene fingerprints match his fingerprints in the database, so he must be the murderer").
  • What happens when someone uses some of the published techniques to pick up latent fingerprints and play them back? (I remember an example of this by researchers in Japan a few years ago.)
The editorial claims that the database will be secure and "will be accessible only by judicial order." But that's also true of many databases, and it just doesn't work - see, for example, the many recent cases of hospital workers in Los Angeles reading medical records of celebrities, or IRS employees accessing celebrity tax records....
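The replay problem in the first bullet is worth spelling out, because it's a design flaw, not just an operational one. Here's a toy sketch (my own illustration - real biometric protocols are more involved) contrasting a naive scheme, where presenting the stored template bytes is enough, with a challenge-response scheme where each authentication must bind a fresh server nonce:

```python
import hashlib
import hmac
import secrets

def naive_authenticate(stored_template: bytes, presented: bytes) -> bool:
    # Naive scheme: whoever can send the stored bytes is "authenticated".
    # Anyone who steals the template from the database wins forever.
    return hmac.compare_digest(stored_template, presented)

def challenge() -> bytes:
    # Server issues a fresh random nonce for each authentication attempt.
    return secrets.token_bytes(16)

def respond(template: bytes, nonce: bytes) -> bytes:
    # The sensor binds the fresh nonce to the reading (here via HMAC),
    # so a response is only valid for the nonce it was computed against.
    return hmac.new(template, nonce, hashlib.sha256).digest()

def verify(stored_template: bytes, nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(stored_template, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

In the naive scheme, a stolen template is a permanent skeleton key - and unlike a password, you can't change your fingerprints. Challenge-response blunts replay, but it can't undo the second bullet: if the database entry itself is replaced, every "correct" protocol run just confirms the forged data.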

Wednesday, August 06, 2008

Absurd patents

This patent was described in a Slate article. Has to be read to be believed. (Note that the patent was eventually canceled - but the fact that it was granted boggles the mind. Was the patent examiner trying to meet some sort of quota?)

Tuesday, August 05, 2008

The Google privacy dossier

Computerworld reports that the National Legal and Policy Center (NLPC) has turned the tables on Google Inc. by using the company's controversial Street View technology along with Google Earth to compile and make public a detailed dossier on a "top Google executive."

You can find the dossier here. To (somewhat) protect the privacy of said Google executive, the dossier "blacks out" key parts of its findings, such as the detailed driving instructions from the executive's house to the Google office. However, NLPC did it wrong (or maybe right?) by just pasting black boxes across the more sensitive data. What they probably didn't realize is that Acrobat doesn't really eliminate the stuff under the black box, so you can just cut & paste the data into another application, and recover the "hidden" data.

As an example, here are the directions from page 6 of the report, after uncovering the hidden text:

1) Head NE on Waverley Oaks to Waverley St. (305 feet)
2) Turn right at Waverley St. (0.2 mi)
3) Turn left at Oregon Expressway (go 1.2 mi)
4) Merge onto US101 via ramp to San Jose (go 2.5 mi)
5) Take the Rengstorff Ave. exit (go 499 feet)
6) Keep right at the fork, merge onto Amphitheatre Parkway (go 0.7 mi)
7) Arrive at 1600 Amphitheatre Parkway
Distance 4.8 miles, about 11 minutes

You can do the same thing with almost all of the hidden text in the document.
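Why does this work? Because a PDF page is a sequence of drawing operators, and "redacting" by pasting a box just appends a filled rectangle after the text-showing operator - the glyphs are painted over, not removed. Here's a toy illustration with a hand-written fragment of a PDF content stream (not a real exported file; the text and coordinates are made up):

```python
import re

# Toy excerpt of a PDF content stream. The Tj operator draws the text;
# the "redaction" is just a black rectangle (re/f) painted afterward.
CONTENT_STREAM = (
    "BT /F1 12 Tf 72 700 Td (Turn right at Waverley St.) Tj ET\n"
    "0 0 0 rg 70 690 220 16 re f\n"  # black box drawn over the text
)

def extract_text(stream: str) -> list:
    # Pull out every literal string passed to the Tj operator -
    # roughly what any copy-and-paste or text-extraction tool does.
    return re.findall(r"\((.*?)\)\s*Tj", stream)
```

Select-and-copy in a PDF viewer does essentially this extraction, which is why the "hidden" directions come right back out. Proper redaction tools rewrite the content stream to delete the text operators themselves.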

This "feature" of Acrobat is nothing new - because it gets misunderstood on a regular basis, Adobe has some nice features and a good blog entry describing the issues. And this is nothing unique to Acrobat - the US National Security Agency has published a nice document on how to do redaction correctly for Microsoft Word.

The moral of the story: if you're trying to protect data, make sure you know what's there before you publish it online!