“Why should I care?” It’s there in your colleagues’ eyes, in their attitude. Software security is not a part of most software development. They don’t teach it much – yet – in university. Many developers and testers won’t have encountered it at all, or consider it something for other groups to handle.
Or maybe they have encountered it, and consider it too expensive, or too painful, to countenance. How can we answer that?
If we want our team committed to improving software security, we need to address that question. In particular, we need to address it in ways that are meaningful for this team, in the context of the work they’re doing. But how do we convince them that security isn’t ‘somebody else’s problem’? How do we motivate them to start taking it seriously?
The answer, we’ve found, is to run an event that harnesses the power of group discussion and shared learning. There are several ways of doing this, using team activities or even one-to-one discussion:
• Playing a game,
• Pen test workshop,
• A security lecture, or
• The Talk
You can choose the version that best suits your circumstances: a one-to-one discussion suits a new developer joining an existing team; for a whole team, choose whichever event you are able to run. You might also arrange a combination of several of these.
Don’t leave the developers scared! Conventional software security wisdom used to be to ‘scare developers and leave them scared’. Our experts don’t agree – instead they stress that the event should leave developers aware of security issues, but confident in their ability to address them.
It needs to be positive. That is why the day is balanced. For every one of these problems we show you in the commercial world and internally, we absolutely have a way of mitigating it. And even if we know we can’t stop it, we can certainly detect it, contain it and then exfiltrate those people. (P5)
Considering each of the possible events in turn:
(Image credit: Joel Bez, Flickr)
This is an approach we’ve used very successfully with groups relatively unfamiliar with secure development. The teams play a game in which they choose security enhancements for a software product, and then see whether their choices prevent a variety of attacks.
You can find a couple of such games on this site, with full instructions.
The advantage of this approach is that it’s easy and inexpensive to set up, and fun for the participants. What it doesn’t do, of course, is link directly to the work the team are doing themselves. But in our experience most participants are quite capable of making that link for themselves.
Penetration testers are software security specialists; they ‘wear the hat’ of a possible attacker and try to break into the software you have produced. A good ‘pen tester’ may be able to work with the developers and show them the kinds of things an attacker can do. By doing so, they can convince the developers and testers that their software already has a problem.
With these companies, we would do an initial project where, for example, we pen test one of their existing products, and show them “this is how you designed this product, and this is how you went through this process, and this is what we found”. (P13)
This is very much the ‘traditional’ incentivisation workshop, since many security specialists are expert pen testers. It’s excellent in one way – it is very immediate and links directly to the context of the development team. It does have disadvantages as an approach, though. First, it sets the specialist up as an ‘adversary’ to the development team, which makes it more difficult for that specialist to be seen as a helper later on. Second, it tends to give the impression that software security is about low-level fixes to existing code, whereas often the biggest security issues are related to design and usability. Third, it’s no use at the start of a project when there’s no code to test.
So we run a very large scale education programme … where we … tell developers exactly what happens in the real world, how TalkTalk was hacked, how Sony was hacked. And then we go in detail how we have been attacked, and whether they were successful and how they were detected. Then we also show them all the stuff that our red teams do – our internal hackers – which really scares them! (P5)
Some bigger companies, with thousands of developers to bring up to speed with security, use the traditional teaching approach of a couple of days of lectures. This approach works only when the teachers are genuinely expert and know a great deal that is relevant to the company.
When there’s only one developer involved – for example, a new joiner to the team – the approaches above don’t really work. Many companies instead have a discussion between someone who knows the issues and the new joiner.
The conversation can take anywhere between 40 mins to several hours depending on who the person is, and you won’t know until you’ve had the conversation. And the conversation is, you explain how to break into, how an attacker would attack the systems, and what the various things you need to be aware of, are. (P9)
This is not easy; the trick is to link the issues to something that has meaning for the developer in question. One expert aims to find situations involving the developer’s own family and friends where software security is an issue, and then uses that as a hook to relate it to the software they themselves will be developing.