What to do about open source vulnerabilities? Move fast, says Linux Foundation expert

The CIO does not decide how soon you need to respond. 'The person who decides is the attacker'


QCon Plus Automated testing and rapid deployment are critical to defending against vulnerabilities in open source software, said David Wheeler, director of Open Source Supply Chain Security at the Linux Foundation.

Dr Wheeler, who is the author of multiple books on secure programming and teaches a course on the subject at George Mason University in Virginia, US, was speaking at the QCon Plus event under way online this week.

The OpenSSF dashboard is intended to help developers assess the security of open source projects

How bad is the problem?

Wheeler referenced a 2021 report from software security and IoT (Internet of Things) company Synopsys, which found an average of 528 open source components per application, that 84 per cent of codebases have at least one vulnerability, and an average of 158 vulnerabilities per codebase. The findings were based on an audit of 1,546 codebases, where a codebase is defined as "the code and associated libraries that make up an application or service."

Wheeler said "I want to emphasise that software is under attack."

It is not quite as bad as it sounds, since not all vulnerabilities are exploitable. "Just because there's a dependency that's a vulnerability, does not mean it is exploitable in your situation," said Wheeler.

The vulnerabilities may be in features of a library that the application does not use, or where malformed data cannot reach. "However it can be very difficult to determine whether something is a vulnerability… although you may think it is not exploitable it is very difficult to verify," he cautioned.
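
To make the distinction concrete, here is a minimal Rust sketch, with all module and function names invented for illustration, of a dependency whose vulnerable code path an application never calls:

```rust
// Hypothetical dependency: one function carries a known flaw, another
// is unaffected. All names are invented for illustration.
mod imagelib {
    /// Suppose this parser has a published CVE, triggerable only by
    /// malformed TIFF input reaching it.
    pub fn parse_tiff(_data: &[u8]) -> Result<(), String> {
        Err("vulnerable code path".into())
    }

    /// A separate, unaffected function in the same library.
    pub fn parse_png(_data: &[u8]) -> Result<(), String> {
        Ok(())
    }
}

fn main() {
    // This application only ever handles PNG data, so the TIFF flaw is
    // present in its dependency tree but malformed TIFF input cannot
    // reach it. Proving that no input path reaches parse_tiff is the
    // hard verification problem Wheeler describes.
    let _ = imagelib::parse_png(b"not really a png");
}
```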

'The many eyes theory does work'

Is open source or proprietary software more secure?

"The correct answer is neither," said Wheeler. "If you care about security you evaluate the software." Nevertheless, Wheeler said that open source is potentially more secure because of the long-standing secure software design principle that "the protection mechanism must not depend on attacker ignorance," as explained in a paper by Jerome Saltzer and Michael Schroeder in 1974.

This gives open source software an advantage. "The many eyes theory does work," said Wheeler.

The software supply chain has many points of vulnerability

Failure to update

A large part of the problem is that vulnerable software does not get updated. "It's well-known that known-vulnerable reused software is a serious problem today. Many applications, many systems fail to update all the components that are used within them," he said. This is true for closed source as well, but "there's a lot more open source software in use."

What is the solution? There is no single answer, but Wheeler offered numerous practical suggestions. Developers, he said, should "learn how to develop and acquire secure software," referencing a number of courses, best practices and tools, some of them free.

A course is not a huge time commitment, he said, and might take just a day and a half. The Linux Foundation's "CII [Core Infrastructure Initiative] Best Practices" badge program specifies a number of requirements for software projects, including the use of static analysis tools and the presence of an automated test suite.

Unit testing is essential, but Wheeler noted a weakness of test-driven development: the model of writing a test, and then the code to make the test pass, does not include negative tests. "You need to test to make sure that things that shouldn't happen, don't happen ... a failure to include negative tests is one of the big problems in a lot of test suites today. It's how the Apple goto fail vulnerability happened," said Wheeler.
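
As a sketch of what a negative test adds, consider this toy Rust validator, invented for illustration. A test-first loop that stopped at the passing case would never notice if the rejection logic were accidentally bypassed:

```rust
/// Toy validator: accepts only strings of ASCII digits that fit in a u16.
fn parse_port(s: &str) -> Result<u16, String> {
    if s.is_empty() || !s.chars().all(|c| c.is_ascii_digit()) {
        return Err(format!("invalid port: {s:?}"));
    }
    s.parse::<u16>().map_err(|e| e.to_string())
}

#[cfg(test)]
mod tests {
    use super::*;

    // Positive test: the write-a-test-then-make-it-pass loop often stops here.
    #[test]
    fn accepts_valid_port() {
        assert_eq!(parse_port("8080"), Ok(8080));
    }

    // Negative tests: verify that things that shouldn't happen, don't.
    // Without these, accidentally skipping the validation (the shape of
    // the Apple goto fail bug) would go unnoticed.
    #[test]
    fn rejects_bad_input() {
        assert!(parse_port("").is_err());
        assert!(parse_port("80; rm -rf /").is_err());
        assert!(parse_port("99999").is_err()); // too large for a u16
    }
}
```

Running cargo test exercises both cases; the negative tests are the ones that would fail if the validation were silently skipped.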

He recommended caution with little-used software. "If there are no users, it's probably going to get no reviewers. No use is a problem," he said. The solution if it is still needed is to "look at it yourself."

The programming language does make a difference

"There is no language that guarantees no vulnerabilities. That said, there are certain vulnerabiliites that are common in certain languages. C and C++ are not memory safe. In most languages, trying to access an array out of bounds will immediately be caught. Not so in C. That will turn into an instant potential for vulnerabilities… C and C++ have a huge number of undefined behaviours," Wheeler said.

What about the performance overhead of safer languages? "Rust is pretty impressive that way because the performance penalty is relatively small. In general there is either no or very little penalty."

In some cases, Rust may perform better because "the Rust model makes it much easier to write parallel code without fear," he said. There are other high performance options, he added, including Ada and Fortran. Fortran is "incredibly fast," he said.
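
A minimal sketch of that claim using Rust's standard scoped threads: each thread mutates a disjoint half of a buffer, and a version in which the halves overlapped would be rejected at compile time rather than becoming a latent data race.

```rust
use std::thread;

fn main() {
    let mut data = vec![0u64; 8];
    let mid = data.len() / 2;
    // split_at_mut hands out two non-overlapping mutable slices, so the
    // borrow checker can prove the threads below cannot race.
    let (left, right) = data.split_at_mut(mid);

    thread::scope(|s| {
        s.spawn(|| left.iter_mut().for_each(|x| *x += 1));
        s.spawn(|| right.iter_mut().for_each(|x| *x += 2));
    });
    // Handing the *same* mutable slice to both threads would be a
    // compile error, not a bug found in production.

    println!("{data:?}");
}
```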

Once a vulnerability is discovered, how long do businesses have to update? "The CIO does not decide how fast you need to repair it. Your process does not decide. The person who decides is the attacker. The attacker decides when you need to fix it," said Wheeler.

"I run a project where we generally update to production in a day, and we try to do it in an hour. You have to move faster than the attacker… I realise that's a hard bar for many organisations. You have to respond and deploy faster than the attacker can exploit… You are on a clock, and the clock is not measured in months."

The implication is that effective continuous delivery is not only good for business agility, but also essential for security. "You should have everything, all your automated tools, in place," said Wheeler. "Now is the time to fix your test suite."

The issue is challenging, but Wheeler said there are initiatives that may improve matters. He mentioned the SPDX project for specifying the "bill of materials" used by a software library or application, and the Open Source Security Foundation (OpenSSF) metrics dashboard, which helps developers and users assess the security of specific packages, though it is in its early stages. He also mentioned the trend toward reproducible builds.
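
For flavour, here is a minimal, illustrative SPDX tag-value fragment describing one package in an application's bill of materials; every name, URL, and value is invented:

```
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-app-sbom
DocumentNamespace: https://example.com/spdxdocs/example-app-1.0
Creator: Tool: example-sbom-generator
Created: 2021-11-01T12:00:00Z

PackageName: libexample
SPDXID: SPDXRef-Package-libexample
PackageVersion: 1.2.3
PackageDownloadLocation: https://example.com/libexample-1.2.3.tar.gz
PackageLicenseConcluded: MIT
```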

"I think we'll see a lot more verified reproducible builds… the idea is that if you can rebuild from the same source code and produce exactly the same resulting executable package, with multiple different independent efforts, it's much less likely that they were all subverted."

The Debian project has made significant progress with this for Bullseye, the next major release. ®
