The Characteristics of a Good Security Policy
A good security policy – something that inspires confidence in the security of the software – should include the following:
(a) Full Disclosure: Vulnerabilities should be openly disclosed, in full, as quickly as reasonably possible. Disclosure should never take more than a week, whether the problem has been fixed in that time or not. If it takes you longer than that to fix a security problem, there’s something wrong, and even if you’re still working day and night on producing a fix, end users should be informed of their vulnerability so they can work around it if they so desire. Even if the internal policy is to delay disclosure for a few days, though, no efforts should be made to “punish” someone who discloses sooner, unless the disclosure is directed at malicious security crackers rather than the user base. Ideally, disclosure should be proactive and user friendly.
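To make that one-week ceiling concrete, here is a minimal sketch in Python; the dates are hypothetical examples, not part of any real policy:

    from datetime import date, timedelta

    # Hypothetical example: a vulnerability reported on March 1 must be
    # disclosed within one week, whether or not a fix has shipped.
    reported = date(2024, 3, 1)
    disclosure_deadline = reported + timedelta(weeks=1)

    today = date(2024, 3, 9)
    if today >= disclosure_deadline:
        print("Past the deadline: disclose now, fix or no fix.")
    else:
        print(f"Disclose no later than {disclosure_deadline.isoformat()}.")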
(b) Open Development: I don’t trust software that doesn’t trust me. Closed policies not only betray untrustworthy tendencies in the distributors, but also ensure that I have no way to verify the trustworthiness of the developers. Ultimately, the software I use should ideally be available as source code that I can compile and run myself; this discourages deviousness on the part of software developers and ensures that, should I choose to compile it myself, I know personally that the source code to which I have access is the same source used to build the software I’m running. Ideally, the entire operating environment should be verifiable with both cryptographic hashes and source-based software management.
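As a sketch of what that hash verification might look like in practice, one can check a downloaded source archive against the digest a project publishes before compiling it; the file name and expected digest here are placeholders, not a real release:

    import hashlib

    # Placeholder values for illustration only.
    EXPECTED_SHA256 = "replace-with-the-digest-the-project-publishes"
    ARCHIVE = "project-1.0.tar.gz"

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256_of(ARCHIVE) != EXPECTED_SHA256:
        raise SystemExit("Hash mismatch: do not build or run this source.")
    print("Checksum verified; safe to compile.")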
(c) Open Formats: When my data is stored in a particular format, that format needs to be open. Vendors die, discontinue products, and play silly games with file format compatibility in the interests of “encouraging” upgrades. Closed formats hold my data hostage to the people who control those formats, and can make it difficult for me to maintain access to that data. This is, it should be obvious, simply unacceptable. When closed formats are taken to a ridiculous extreme, you get the similarly extreme consequences of DRM. Ideally, one’s choice of format should be as close to plain text as possible, because in a worst-case scenario you can still read plain text with the naked eye.
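A small Python example of what “close to plain text” buys you; the record contents are made up for illustration:

    import json

    # Plain-text JSON: readable with any editor, or the naked eye,
    # even after every tool that wrote it is gone.
    records = [{"title": "quarterly report", "year": 2024}]

    with open("records.json", "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2)

    # Round-trip the data without any proprietary software.
    with open("records.json", encoding="utf-8") as f:
        print(json.load(f))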
(d) Privacy-Friendly: Because of the importance of privacy, encryption support is critical to trustworthy systems. The strongest open encryption systems should be included by default, such as OTR for IMs, OpenPGP and S/MIME for email, TLS for Web browsing, and full disk encryption that doesn’t leave encryption keys lying around in swap space (i.e., virtual memory). Ideally, everything that can be encrypted should be encrypted, using strong encryption protocols that are open to peer review and have been the subject of extensive “real world” and academic testing.
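As an illustration of leaning on openly specified, well-reviewed primitives rather than inventing your own, here is a minimal sketch using the third-party Python cryptography package, whose Fernet construction layers authenticated symmetric encryption over AES and HMAC:

    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this key out of swap and out of logs
    f = Fernet(key)

    token = f.encrypt(b"meet at the usual place")
    print(f.decrypt(token))       # b'meet at the usual place'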
(e) Vulnerability Management: Vulnerabilities should never be ignored. In some cases, entire classes of vulnerabilities are left unaddressed except by third-party “security software” vendors, which at best mitigates the problem while leaving the system itself vulnerable. Ideally, developers should treat every single vulnerability that affects their software as their own problem, as much as possible.
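One practical way to avoid ignoring vulnerabilities in code you depend on is to check pinned dependencies against a public vulnerability database. The sketch below queries the OSV API; the package name and version are examples only, and the request shape reflects my understanding of the OSV documentation:

    import json
    import urllib.request

    # Example query: an old pin of the "requests" package on PyPI.
    query = {
        "package": {"name": "requests", "ecosystem": "PyPI"},
        "version": "2.19.0",
    }
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        vulns = json.load(resp).get("vulns", [])

    for v in vulns:
        print(v["id"], v.get("summary", ""))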
This isn’t a comprehensive list of security policy items that are key to trustworthiness, but it’s a good start. Consider these points, not only when selecting software to use, but also when constructing your own policies for dealing with security matters that may affect people outside your organization.