"If you ask any developer, 'Hey, do you want to write code that is going to potentially cause millions of dollars of losses for the company?' most of them are probably going to say no," says Bill Pennington, chief strategy officer of WhiteHat Security.
The problem is that much of today's security testing and training isn't tailored to suit the way developers think and do their jobs, says Ed Adams, CEO of Security Innovation, who agrees that developers want to write high-quality code.
[How can you start instituting a secure software development life cycle? See 10 Commandments Of Application Security.]
"Remember that most software developers are engineers. If you're asking them to do something, give them a reason why," he says. "And then give them the method to do what you ask."
For example, today a lot of security pros think it is good enough to leave the dev team with a policy statement along the lines of, "Write all Web applications so they're not vulnerable to common threats on OWASP's Top 10 list."
"That's great, but it means nothing to a developer," Adams says, explaining that it leaves the developer to figure out what the top 10 is, then drill down into each statement on the list and try to figure out how that actually applies to the way they code applications.
It's a pet cause for Romain Gaucher, lead security researcher for Coverity, who says that at the moment, security people don't give developers complete advice that they can apply right away in their work environments.
"Security people should be able to talk to developers with code," he says. "They should do it with code examples and how to actually do the thing properly, not with very generic advice."
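Gaucher's point is easy to make concrete. Rather than telling a team "avoid SQL injection," show the fix in code. The sketch below is an illustration of that style of advice, not anything from the article itself: it uses Python's standard-library `sqlite3` with a made-up `users` table to contrast a string-built query with a parameterized one.

```python
import sqlite3

# In-memory database with a small illustrative schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_unsafe(name):
    # DON'T: string interpolation lets attacker-controlled input rewrite the query.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # DO: the ? placeholder passes the value out-of-band, so it is never parsed as SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row: the injection succeeded
print(find_user_safe(payload))    # returns no rows: the payload stayed a plain string
```

A developer can run this, watch the injection succeed, and see exactly which line to change, which is the kind of advice that applies "right away" in a work environment.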
That could be a problem for many security professionals, who are usually not developers by trade, says Adams, who adds that security training should come from developers who can speak the language of their brethren.
"If it isn’t a developer doing the training, you’re bound to get questions that can’t be answered, which will frustrate the developers even further," he says.
In addition to taking this more pragmatic approach to offering advice, organizations should also seek ways to make security problems and goals more visible to developers day to day. In the hunt for greater testing efficiency -- a good thing -- many organizations have obscured security from developers' line of sight by using frameworks, prewritten libraries, and routines for things like input sanitization, authentication, and cryptography, Adams says. That's not very conducive to developer training.
"That’s a good way to ensure developers are doing the right thing. But you can't prewrite everything. Developers still have to write integration code to tie in the business logic, not to mention the rest of the functionality, and it is very easy to write insecure code if you aren't trained properly," Adams says. "The implementation of security during development can 'feel' invisible; however, implications and importance of security should be quite visible to developers."
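As a small illustration of Adams' point about integration code (this is a hypothetical comment renderer, not anything from the article), one line of hand-written glue that bypasses a framework's templating can reintroduce cross-site scripting even when the framework itself is sound. Python's standard `html` module shows the difference:

```python
import html

def render_comment_unsafe(comment):
    # Glue code that builds HTML by hand: user input lands in the page unescaped.
    return f"<p>{comment}</p>"

def render_comment_safe(comment):
    # Escape user input before it reaches the HTML context.
    return f"<p>{html.escape(comment)}</p>"

payload = "<script>alert(1)</script>"
print(render_comment_unsafe(payload))  # the script tag survives intact
print(render_comment_safe(payload))    # angle brackets become &lt; and &gt;
```

The prewritten library does its job either way; the vulnerability lives entirely in the developer-written wrapper, which is why the security behind the framework should stay visible to the people writing that wrapper.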
Nick Galbreath, vice president of engineering for IPONWEB, says he strives to make security more visible among the developers at his organization. One of the big ways to do that is by giving developers regular data from security tools about the types of attacks hitting their application infrastructure so they can see what they're up against.
"Instrumenting real-time graphs like SQL injection, cross-site scripting, and all of the garden variety junk that comes in from scanners actually educates everyone -- management and developers," he says. "If you start instrumenting it so you can see these probes and attacks come in, developers are actually pretty interested in it, and it really is a great way of engaging people."
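The kind of instrumentation Galbreath describes can start very simply. The sketch below uses hypothetical log lines and deliberately crude probe signatures; a production setup would use a dedicated detector (Galbreath's own libinjection project is one) rather than regexes, but even rough counts make the scanner traffic visible to the team.

```python
import re
from collections import Counter

# Crude signatures for common scanner probes; illustrative only.
PROBE_PATTERNS = {
    "sqli": re.compile(r"union[+\s]+select|'\s*or\s*'1'\s*=\s*'1", re.I),
    "xss": re.compile(r"<script|%3Cscript", re.I),
    "path_traversal": re.compile(r"\.\./\.\./", re.I),
}

def tally_probes(log_lines):
    """Count attack probes per category across a batch of access-log lines."""
    counts = Counter()
    for line in log_lines:
        for label, pattern in PROBE_PATTERNS.items():
            if pattern.search(line):
                counts[label] += 1
    return counts

sample_log = [
    "GET /search?q=union+select+password+from+users",
    "GET /comment?text=%3Cscript%3Ealert(1)%3C/script%3E",
    "GET /static/../../etc/passwd",
    "GET /index.html",
]
print(tally_probes(sample_log))
```

Feeding tallies like these into a real-time graph is what turns "the garden variety junk that comes in from scanners" into something developers actually watch.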
But don't just give them real-time information feeds to raise awareness. Also consider analyzing the data to surface the most common security issues and tie them to root causes within the code.
"There's a bazillion things you can screw up when writing code or deploying applications," says David Mortman, chief security architect for enStratus. "So if you can, start measuring where you're screwing up and how you're screwing up."
Not only can this improve the way that code is developed, but also how it is implemented.
"There's no point in spending a lot of time talking about SQL injection if everything is cross-site scripting [in your environment]," he says. "And there's no point in harassing developers who were writing code securely if the problem is that the ops keep screwing up the configuration files or something like that. If you're going to improve things you have to know where you're breaking things first."
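Mortman's "measure where you're screwing up" can start as a simple tally of findings by category and by who owns the fix. The records below are entirely hypothetical, standing in for output from scanners, pen tests, or incident reviews:

```python
from collections import Counter

# Hypothetical findings; in practice these would come from scan and audit results.
findings = [
    {"category": "xss", "origin": "code"},
    {"category": "xss", "origin": "code"},
    {"category": "xss", "origin": "code"},
    {"category": "sqli", "origin": "code"},
    {"category": "weak_tls_config", "origin": "ops"},
    {"category": "weak_tls_config", "origin": "ops"},
]

def top_issues(findings):
    """Rank issues by frequency and split them by whether code or operations owns the fix."""
    by_category = Counter(f["category"] for f in findings)
    by_origin = Counter(f["origin"] for f in findings)
    return by_category.most_common(), by_origin

categories, origins = top_issues(findings)
print(categories)
print(origins)
```

In this made-up data set, cross-site scripting dominates and most fixes belong to developers rather than ops, so training time goes to XSS first, which is exactly the prioritization Mortman describes.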