The lovely thing about academic conferences is the number of great researchers you meet there! Yesterday I led a Birds of a Feather session at the ESEC/FSE 2017 Conference in Paderborn; we considered the question ‘How do we make software secure?’. I was delighted that a number of noted software security experts were present, including such luminaries as Laurie Williams, Arosha Bandara and Eric Bodden.
I’ve attempted to capture some of the points discussed and distil them into themes, including some interesting areas of polite disagreement. I was particularly interested in one repeated theme, very relevant to many there: how to teach software security at university.
What is Software Security?
Much security is about wider features, not simply bugs. To put it another way, security is not an emergent property; it needs to be engineered in from the start. Some vulnerabilities - bugs - do emerge; indeed, we can think of code as the reinforced concrete of modern software: if it’s rotten, that’s a problem. But much of software security must happen at a higher level: as noted in the session, some 80% of attacks come via users, and research shows that even a system with zero code vulnerabilities can still be hacked!
Therefore, the security of a piece of code isn’t a single number. Indeed, “security” is too much of a blanket word: it implies only that there exists a bad guy out there. Ultimately all security decisions are risk calculations, so think in terms of assets… Quantifying security is an insurance problem; we can address it with two questions: (A) How secure am I? and (B) What is the potential damage?
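To make those two questions concrete, here is a minimal sketch of the classic insurance-style measure, annualised loss expectancy (expected incidents per year × damage per incident). The asset names and figures below are invented purely for illustration:

```python
# Insurance-style risk quantification: for each asset,
#   ALE (annualised loss expectancy)
#     = ARO (annual rate of occurrence)   -- "How secure am I?"
#     x SLE (single loss expectancy)      -- "What is the potential damage?"
# All names and figures here are invented for illustration.

def annualised_loss_expectancy(annual_rate: float, loss_per_incident: float) -> float:
    """Expected yearly loss from one threat against one asset."""
    return annual_rate * loss_per_incident

assets = {
    "customer database": {"annual_rate": 0.05, "loss_per_incident": 500_000},
    "public web server": {"annual_rate": 2.0,  "loss_per_incident": 10_000},
}

for name, risk in assets.items():
    ale = annualised_loss_expectancy(risk["annual_rate"], risk["loss_per_incident"])
    print(f"{name}: expected loss ~ {ale:,.0f} per year")
```

Crude as it is, the exercise forces the asset-by-asset thinking the session recommended: a rare-but-catastrophic risk and a frequent-but-cheap one end up directly comparable.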
It is possible to create a system that’s completely secure with respect to the threat model. However, a deployed system is always in an evolving context and the threat model is always incomplete.
We can think of existing code bases as Marley’s Chain: a heavy burden, no matter how valuable. Indeed we can generate security defects faster than we can remove them.
Developers and Software Security
Low-defect, high-security software doesn’t have to cost more! Maybe - but it’s expensive to change an organisation’s culture and processes to create it. Typically, good security development techniques spread slowly by ‘diaspora’: people moving on from big players like Facebook or Google.
Is the problem education - that developers don’t know about security? No: software security problems are often systemic in organisations (compare: if an organisation kept having employee injuries, that would be a systemic health and safety problem, not just a training one).
Could secure coding standards help? Not so far; the CERT Secure Coding standard is 200 pages long - completely impractical for normal programmers.
Teaching Security to Professionals
If we’re teaching future software professionals at university, when is it best to teach them security, and what should we leave out to make room? One view: security belongs in every class (e.g. avoiding SQL injection when teaching databases); compare reliability and performance, which are not taught as separate classes either. Another view: security involves many edge cases, so the detail is better taught later in the curriculum - though it is tricky to change textbooks. On balance, it’s best to teach security as a running theme throughout the course.
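The SQL injection point, for instance, takes only a few lines to demonstrate in a database class. A minimal sketch using Python’s built-in sqlite3 module, with an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# VULNERABLE: string concatenation lets the input rewrite the query.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())   # matches every row

# SAFE: a parameterised query treats the input purely as data.
print(conn.execute("SELECT * FROM users WHERE name = ?",
                   (user_input,)).fetchall())   # matches nothing
```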
How do we teach wider (not just code-level) security? One approach is to start with general (non-software) security thinking: for example, trying to get into the department without a card, challenging the bank’s security, or a demonstration of social engineering performed on the students themselves (e.g. obtaining a photo of their keys).
Be aware of the issues with advice from other professionals – there are well-known problems with insecure advice and code snippets copied from Stack Overflow.
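A classic instance of that kind of bad advice is ‘fixing’ a TLS certificate error by switching verification off, rather than fixing the trust configuration. A sketch using the Python requests library (the URL and certificate path are placeholders):

```python
import requests

# Frequently-copied "fix" for certificate errors - don't do this:
# it disables TLS certificate verification entirely, leaving the
# connection open to man-in-the-middle attacks.
requests.get("https://internal.example.com/api", verify=False)

# Better: keep verification on and tell requests which CA
# certificate actually signed the server's certificate.
requests.get("https://internal.example.com/api",
             verify="/etc/ssl/certs/internal-ca.pem")
```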
Security and End Users
Perhaps we need to educate users to understand the impact of their security decisions. However, they may not be able to foresee the implications of those decisions: an intelligent fridge may store valuable vaccines, be part of a botnet, betray my habits or visitors to the wrong people, cause a fire, and more. So, no - we can’t educate the universe! But we can educate software professionals, such as developers and testers.
Access to systems is a tricky area. Passphrases may be easier than ‘machine-style’ complex passwords, but it’s still difficult to remember many, and it’s dangerous to reuse passwords (e.g. LinkedIn famously lost millions of usernames and poorly-protected passwords). Many people use password managers. These are unlikely to be a major risk, because their data is typically encrypted with a key derived from your own master password, so stealing the data alone doesn’t help an attacker.
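The reason stolen vault data is of little use on its own: the vault is encrypted with a key derived from the master password via a deliberately slow function, so a thief still has to guess the password. A minimal sketch of such a key derivation using Python’s standard library (the parameters are illustrative, not a recommendation):

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    """Derive a 256-bit encryption key from the master password.

    The high iteration count makes each password guess expensive,
    so a stolen (encrypted) vault resists brute-force attack.
    """
    return hashlib.pbkdf2_hmac("sha256",
                               master_password.encode("utf-8"),
                               salt,
                               200_000)  # iteration count: illustrative

salt = os.urandom(16)   # stored alongside the vault; need not be secret
key = derive_vault_key("correct horse battery staple", salt)
print(key.hex())
```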
Wider Aspects to Security
Is offensive security, attacking back, worthwhile? Not really - it’s difficult to know who the attackers are; and it’s illegal in some countries, such as the UK.
In procurement, some organisations now insist on the use of a static analysis tool; after initial pushback, suppliers are finding that it doesn’t add to the cost.
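For a flavour of what such tools catch: a static analyser for Python such as Bandit will flag constructs like the shell call below (the filename handling is an invented example):

```python
import subprocess

filename = input("File to list: ")  # attacker-influenced input

# Flagged by static analysis: shell=True with interpolated input
# allows command injection (e.g. entering "; rm -rf ~").
subprocess.call("ls -l " + filename, shell=True)

# Safer form: pass arguments as a list, with no shell involved.
subprocess.call(["ls", "-l", filename])
```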
Politically, would it help to make software companies liable for their vulnerabilities? Yes - and for personal data the EU GDPR aims to do just that. Oddly, most commercial security analysis tools have DeWitt clauses in their terms of use, prohibiting the publication of independent benchmarks; banning such clauses might help.
- Charles