Last week I sat down to figure out what's really going on in secure software development: 'Developer-centered Security'. I used a mapping technique devised by Simon Wardley, which is great for showing a whole industry at a glance.
Many developers expect their main software security problem to be attackers from North Korea using technical software weaknesses to gain access to files in the system, such as lists of passwords. But Facebook's biggest security problem ever was a decision to quietly give innocent-looking, semi-public data to an academic researcher at a firm of analysts. Many other companies have likewise found that their security problems are far from what they might have expected.
Recently I’ve been working on the Secure Development Handbook, the most important part of this website. So here I'll ask the author’s question: how do we improve it?
Many of us learn best from games, preferably games that are fun. This section introduces three games, each of which has been devised by security researchers and used in a variety of commercial situations.
All three work with agile development techniques, and all are free to download, but each needs some preparation, so I recommend you take a look before inviting all your colleagues to a session.
This week I’ve been at the IEEE SecDev conference in Boston. It’s been an eye-opener; I had not realised before just how many aspects there are to secure software development: toolchain improvements; what senior management can do; resources available to developers; patterns for secure development; language design to prevent time-based oracles; using strong typing to enforce security; log analysis; automated threat modelling – we heard about research on all of these.
Here too, I had the privilege of organising a Birds of a Feather session. This time I asked the question raised by the paper I was presenting: given that developer resources on security tend to be inadequate, how are developers to find out what they need to know?
The lovely thing about academic conferences is the number of great researchers you meet there! Yesterday I led a Birds of a Feather session at the ESEC/FSE 2017 Conference in Paderborn; we considered the question ‘How do we make software secure?’. I was delighted that a number of noted software security experts were present, including such luminaries as Laurie Williams, Arosha Bandara and Eric Bodden.
I was privileged to attend the Workshop on Security and Human Behaviour last month in Cambridge, UK. This fascinating two-day workshop brought together around ninety leading researchers in Human-Centred Security, most working in software security. The delegate page alone is worth a look: most attendees linked their key papers, so it makes a good introduction to the field.
A year ago I started researching software security. I started by interviewing a dozen very experienced experts, and analysing what they said.
In their answers I found something very different from what I’d seen in the literature.
Much of the writing about software security tells programmers to use checklists of possible errors; the implication is that if the checklist is satisfied, the software is secure. Alas, this isn’t true.
Last week I was discussing my research at Security Lancaster with a friend. She said, “You must think security is the most important thing in software.”
I hastened to deny it. You see, I don't think security is by any means the most important aspect of software creation.
I attended the Foundations of Software Engineering conference in Seattle a week or so back. The conference covers a wide range of research topics, and this year they’ve moved to having three streams in parallel much of the time. Three presentations really stood out.
Cyber security is a bit strange. For government, it’s the name that defines a very real threat to our country’s future. For information security specialists, it’s a silly word that only means something to government. So what does ‘Cybersecurity’ really mean?
What does 'App Security' really mean? What does it mean to keep an app secure, so that our users can do what they want, but we can stop malicious people from causing them and us harm?
Cryptography worries people. It all seems very complicated. But it needn't be...
‘Know your enemy’ is a very old principle indeed; it dates back to the Chinese philosopher Sun Tzu's The Art of War. I've always been fascinated to know who my enemy really is when I'm developing secure software for mobile phones.
It’s a tricky decision. You have two or three possible vendors for a very large software-related project. Any of them would be good. Your problem is that having chosen a vendor, you know you’ll be stuck with them, effectively, indefinitely. And so in a year or so you’ll no longer be able to use vendor competition to keep your costs down. So what do you do?