Monday, July 30, 2007
Researchers commissioned by the state of California have found security issues in every electronic voting system they tested. If you'd like to dig a little deeper, the Overview of Red Team Reports by Matt Bishop is a good place to start. This report is also an excellent example of how to conduct, and how to report on, a "red team" test.
For this TTBR, the specific goals of each system are to record, tabulate, tally, and report votes correctly and to prevent critical election data and system audit data from being altered without authorization. The threats were taken to be both insiders (those with complete knowledge of the system and various degrees of access to the system) and outsiders (those with limited access to the systems)...

Perhaps we should add "elections" to Bismarck's famous remark about laws and sausages.
The testers did not evaluate the likelihood of any attack being feasible. Instead, they described the conditions necessary for an attacker to succeed...
It is commonly accepted that no computer or computer-based system, called an information technology system, can be made completely secure. It is also commonly accepted that the managers of an information technology system have a responsibility to develop sufficient controls in and around a system to the point that continued operation of the system meets the requirements of the organization...
The California Secretary of State must certify any electronic voting system before it can be used in California elections. One of the requirements is that the system be federally certified to meet the 2002 Voting System Standards (VSS). Independent testing authorities (ITAs) test the electronic voting system to certify compliance with these standards. All three systems in this study were so certified...
The major problem with this study was time. Although the study did not start until mid-June, the end date was set at July 20, and the Secretary of State stated that under no circumstances would it be extended. This left approximately 5 weeks to examine the three systems...
The short time allocated to this study has several implications. The key one is that the results presented in this study should be seen as a “lower bound”; all team members felt that they lacked sufficient time to conduct a thorough examination, and consequently may have missed other serious vulnerabilities...
Despite these problems, the red team testing was successful, in that it provided results that are reproducible and speak to the vulnerability of all three systems tested...
The red teams demonstrated that the security mechanisms provided for all systems analyzed were inadequate to ensure accuracy and integrity of the election results and of the systems that provide those results.
Electronic voting systems are critical to the successful conduct of elections in those jurisdictions where they are used. Given the importance of voting and elections in the governing of the State of California, one may safely say that these systems are “mission critical”. Such systems need to be of the highest assurance in order to ensure they perform as required. Techniques for developing such systems are well known but, sadly, not widely used. Vendors would do well to adopt them for electronic voting systems.
Similarly, many components of voting systems run on commercial operating systems. A non-secure underlying operating system offers attackers avenues into the software that the operating system runs, in this case the vendors’ election management systems. Hence vendors must ensure that whatever underlying operating system their software runs on meets the security requirements that their software meets.
A key idea underlying high assurance techniques is that security should be part of the design and implementation of the system and not added on “after the fact”. The reasons for this need not be repeated here. Many of the components tested appear to have been hardened by taking their basic design and adding security features. As a result, the testers were able to exploit inconsistencies between the protective mechanisms and that which they were intended to protect.
Vendors should assume the components of the voting system will be used in untrusted environments in which they cannot be adequately monitored. Thus, their physical protections should be “hardened” to withstand determined attack. The added barrier that such mechanisms create will hamper the ability of attackers to obtain illicit access to the components even if lapses in procedural mechanisms allow them unobserved or unfettered access to the systems.
Of equal importance is the ability to detect when such attacks occur. Again, this speaks to security mechanisms as being “layered”; one must implement mechanisms to prevent compromise, and then add mechanisms (which may be the same as the previous ones) to enable observers to detect compromise should the preventative mechanisms fail. See for example Elisabeth Sullivan’s excellent discussion in , chapters 18 and 19.
Because detection requires that people take some action, the security mechanisms require that specific procedures be designed in order to ensure that failure of the preventative mechanisms, and success of the detection mechanisms, are properly handled. An excellent example comes from the realm of physical security. A common belief is that tamperproof tape is sufficient to detect the violation of preventative mechanisms; for example, sealing a bay with tamperproof tape enables one to detect that the bay has been opened. Two problems arise. First, there must be a procedure to check the tamperproof tape. Second, an attacker can often acquire the same tape as is used to protect the systems. The attacker simply removes the tape showing evidence of the tampering, and replaces it with her own tape. Unless the original tamperproof tape has unique serial numbers and the observers check those serial numbers, the detection mechanism is defeated. Unless the customers follow an appropriate procedure (here, checking that the tape is intact and the intact tape has the right serial numbers), the security mechanism is easily defeated.
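The seal-checking procedure described above can be sketched in a few lines. This is a hypothetical illustration only: the manifest layout, function name, and field names are all invented, and real chain-of-custody procedures involve far more than a serial-number comparison.

```python
# Sketch of the seal-check procedure described above: detection only works
# if observers verify both that each seal is intact AND that its serial
# number matches the one recorded when the seal was applied.
# All names and data here are hypothetical, for illustration only.

def verify_seals(manifest, observed):
    """manifest: {bay_id: expected_serial} recorded at sealing time.
    observed: {bay_id: (serial, intact)} as read during inspection.
    Returns a list of problems; an empty list means all checks passed."""
    problems = []
    for bay, expected_serial in manifest.items():
        if bay not in observed:
            problems.append(f"{bay}: seal missing")
            continue
        serial, intact = observed[bay]
        if not intact:
            problems.append(f"{bay}: seal broken or tampered")
        elif serial != expected_serial:
            # An attacker who replaced the seal with identical tape is
            # caught only by this serial-number comparison.
            problems.append(f"{bay}: serial {serial} != recorded {expected_serial}")
    return problems

manifest = {"memory-card-bay": "A10042", "printer-port": "A10043"}
observed = {"memory-card-bay": ("B99911", True),   # resealed with attacker's own tape
            "printer-port": ("A10043", True)}
print(verify_seals(manifest, observed))
```

Note that the resealed bay looks perfectly intact; without the recorded serial number, the check would pass and the tampering would go undetected.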
Finally, the red teams wish again to emphasize the inadequacy of “security through obscurity” as a key defensive mechanism. No security mechanism should ever depend on secrecy. At best, secrecy should be a single security mechanism in a layer of defensive security mechanisms. In this study, when vendors failed to provide software that would have helped the red teams expedite the testing process, the failure became a motivation for the red teams to construct equivalent software to carry out the attacks. The only thing lost was time that could have been used for testing. Given the constraints under which the red teams operated, a well-financed team of attackers, with plenty of time to plan attacks between elections, could do considerably better.
[all emphasis in original]
Thursday, July 26, 2007
InfowarCon 2007 ... holds a magnifying glass to today's critical need for cooperation between government and industry, zeroing in on homeland defense and on countering global and national cyberterrorism.

It appears that they are focusing on the defensive side of cyberwar, but of course the second D in DoD also stands for "Defense."
Monday, July 09, 2007
However, there is also a major technical and policy point:
The eavesdropping relied on "lawful intercept" software that had been built into the switch by the manufacturer--even though that law-enforcement capability was not being used by Greek law enforcement, and in fact had not even been enabled by the phone company.
Matt Blaze has a good post, referring to past predictions that CALEA would make telecommunications more vulnerable to eavesdropping. Steve Bellovin's post contains related comments and additional useful links.
An article in Computerworld reports on a talk about zero-days by Justine Aitel, CEO of Immunity, at the SyScan '07 security conference.
Back when viruses mostly propagated by sharing of floppy disks, you could keep your computer reasonably safe by loading only floppies from trusted sources (and virus-scanning those) and updating your antivirus software every few months. No longer. The most vicious malware is likely to arrive with no advance notice whatsoever and no opportunity to prepare a specific defense. You get zero days of warning.
The average zero-day bug has a lifespan of 348 days before it is discovered or patched, and some vulnerabilities live on for much longer...
Immunity, which buys but does not disclose zero-day bugs, keeps tabs on how long the bugs it buys last before they are made public or patched. While the average bug has a lifespan of 348 days, the shortest-lived bugs are made public in 99 days. Those with the longest lifespan remain undetected for 1,080 days, or nearly three years, Aitel said.
"Bugs die when they go public, and they die when they get patched," she said...
"Always assume everything has holes. It's the truth: it does."
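As a back-of-the-envelope check on how summary figures like these are derived, here is a minimal sketch. The lifespan list below is invented purely for illustration; Immunity's actual dataset is not public, and only the 99/348/1,080-day figures come from the article.

```python
# Hypothetical per-bug lifespans in days, invented for illustration;
# Immunity's real dataset is not public. Only the endpoints and the
# average quoted in the article (99, 348, 1080) are from the source.
lifespans = [99, 120, 250, 348, 400, 500, 1080]

average = sum(lifespans) / len(lifespans)
print(f"average: {average:.0f} days, "
      f"shortest: {min(lifespans)}, longest: {max(lifespans)}")
# prints: average: 400 days, shortest: 99, longest: 1080
```

Even this toy distribution shows why the average (348 days in the real data) can sit far below the maximum: a few long-lived bugs skew the tail without dominating the mean.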
Tuesday, July 03, 2007
Hyperverbizationing was already underway in the 1960s, when authoring started replacementing writing, and architecting replacementing designing.  It perhaps reached its peak during IBM's "architecturalization wars," when there were no neutrals: You were either prorearchitecturalizationing or antirearchitecturalizationing. 
PS I'm not registrationing for this conference.
 And, among computer designers, architecture replaced the more accurate façade.
 Although there was some precedent in antidisestablishmentarianism.