There is very little that gets me more upset than bad technology. Entirely too often, new technologies are implemented not because they are useful, but because they seem innovative and cool. Here are a couple of recent examples that have been bothering me.
Bad Idea No. 1: Computerized Voting Machines
I love low-tech solutions. This is a recent development for me, inspired in no small part by spending the last several years living in Cambodia. So many people seem to forget that there are some problems that are best solved by very traditional, low-tech equipment. (See Low-Tech Security.)
Voting is one of those things. Computers are great if you are running a poll on your website and don't expect fraud, or if you don't care much whether it happens. But in a national or even local election, the threat model is different because the stakes are higher.
Voting machines are a solution to a problem that has already been solved. Maybe the ballot design problem isn't, but that isn't the fault of the machines. As in many cases, the humans screwed that one up. For a far more thoughtful discussion of this, see Rebecca Mercuri's work.
Bad Idea No. 2: Worms That Do Good
This one is seeing some press recently because of an article in New Scientist about Microsoft's research paper on the topic. (See Critics: Microsoft's 'Friendly Worm' Is a Dumb Idea.)
There are two problems. First, worms don't do good. Worms, by definition, lack central control, and patching and system modification require both central control and accountability. The mechanism a so-called "good worm" uses to access systems can be used just as easily by a bad one, turning the "feature" into a bug. And when the access vector becomes a vulnerability, the potential for damage is just too high.
There's another problem here. Check out the New Scientist article, and then look at the paper referenced on Microsoft's site. Close examination of the paper reveals a) tons of math, and b) an emphasis that it is pure research, not an upcoming product description.
I do understand the tendency of our IT security community to assume the worst of people; it is, after all, what we are paid for. In this case, however, I think the research is quite interesting, and not necessarily motivated by evil or stupidity. It will probably prove of more use to the bad guys than the good guys, but the Microsoft team isn't at fault here.
Indeed, the paper (actually a technical report, since the paper hasn't been published yet) mentions the word "patch" exactly once, and in the bibliography at that. One of the authors has studied effective patch dissemination in the past. That work didn't propose using worms to do it. This paper doesn't, either. Perhaps the idea is proposed in the unpublished work, but it seems more likely that the author of the New Scientist article just got things wrong.
Here's a bit of advice that comes from both of these "bad technology" examples: Talk to the media, but make sure they get it right. If they don't, you may find yourself waiting in line for hours while the technically unskilled staff at your local polling station try to reboot the booths without losing any votes.
Nathan Spande has implemented security in medical systems during the dotcom boom and bust and suffered through federal government security implementations. Special to Dark Reading