For some years now, big software companies like Microsoft (2002) or Cisco (2010) have been changing their software development processes to address the massive number of vulnerabilities in their products. Microsoft seems to be successful with this strategy, and all the charts, numbers and articles look promising. "But what about the open-source world, the world of Linux distributions, what did they do?" you might ask. Let me shed some light on it.
It is quite different for us, and I will go into the details later. Let me first enumerate some vital steps in a secure development process that correspond to the steps of the various software development "philosophies":
- secure system design principles
- risk assessment (aka threat modeling or security profiling)
- choosing the right technology (programming language, compiler, etc.)
- secure-coding training for developers
- security-testing training for testers
- tools (static analysis and compilers with security options) for developers
- security-related testcases and tools (fuzzers, scanners, etc.) for the QA team
- partial code-review by specialists
- penetration-testing by specialists
- maintenance (update publishing, customer notification)
If you develop code in-house, you have influence over each of the development steps (not for free, of course). But if you are a distributor of open-source software, you just collect the software, bundle it and hand it over to your customers (I hope no one will bash me for this simplification). This puts us in the penetrate-and-patch wheel (aka "hamster wheel", per A. Jaquith), which is known to be costly and ineffective. But you can be sure our "hamster wheel" is well oiled and our teams of "hamster engineers" are in good shape. Maintenance is one of the main and most important services we provide, because software will never come without bugs... that is the reality.
We, as a distributor, could of course be so crazy as to try to force open-source developers to follow a set of principles of secure software development by making them answer questionnaires and verify their code quality. And if they failed, we would drop their package(s). Believe me, this would help neither us nor the community nor any enterprise customer. SUSE: "Oops, we have to drop X and the kernel. Well, be it so...", developer: "SUSE sux!" (BTW, untrue... we dropped "sux".)
What really helps in this situation is:
- healthy and effective communication between distributions, between distributions and the OSS developers, and with the user community/customers
- kernel, glibc and gcc options that mitigate memory corruption: non-executable memory sections, address space randomization, etc.
- security-related test cases and tools (fuzzers, scanners, etc.), for example for the QA team
- secure default configuration of the system and its services by enforcing our security policy for all packages
- code review and pen-testing of high-risk components
- processes and interfaces well-known and accepted by customers
- and a highly optimized "hamster wheel", vulgo: maintenance (bug fixing, update publishing, customer notification)
About 10 to 12 years ago, when Marc (ret.), I and Sebastian (in chronological order) started working for S.u.S.E., our main focus was code reviews (our wu-ftpd was great!) and establishing a process for security updates.
Over the last decade we improved the way code reviews are done and arrived at something like Threat Modeling (Security Profiling), but in a much less noisy way, by combining design reviews, the results of code reviews, and runtime (penetration) tests with real bugs (incl. severity rating). Besides that, there has been a real change in the code quality of high-profile open-source software: we find far fewer simple or severe bugs. The number of bugs in libraries, client applications and web applications, however, has increased dramatically (remember PHP, PDF libs, font libs, ImageMagick, ...).
But code reviews alone do not protect you against unknown bugs. A secure default configuration (on SUSE Linux Enterprise Server as well as openSUSE) is vital. Our strict policy (processes and technology) is mainly enforced by Ludwig and Marcus.
The general security awareness in our company, as well as in the whole digital society, has changed for the better compared to the time before the dot-com bubble imploded. ("A long, long, long, long time ago - Before the wind before the snow ...")
And therefore we receive more security bug reports from customers, community members and colleagues, as well as from code reviews by other people and companies. And that is good.
A negative side effect of the massive web-based (Web 2.0) development is complexity and openness. Today's web applications are highly exposed, have various standard and non-standard interfaces, talk to several other semi-trusted systems, are dynamic, and mainly process untrusted data. They are always the low-hanging fruit... imagine what would have happened if paradise had been full of apple trees!
New challenges to counter!
(Ah, I promised to take a look into the future, I can't! :) The only constant in life is change. Enjoy!)