How do you fix a problem like open-source security? Google has an idea, though constraints may not go down well
'Try telling leaders of libpng, libjpeg-turbo, openssl, ffmpeg etc they can't make "unilateral" changes to their own projects'
Google has proposed a framework for discussing and addressing open-source security based on factors like verified identity, code review, and trusted builds, but its approach may be at odds with open-source culture.
The security of open-source software is critical because of its wide adoption, from the Linux kernel on which most of the internet runs to little JavaScript libraries that get built into millions of web applications, sometimes via a chain of dependencies somewhat hidden from the developer. Vulnerabilities such as one discovered recently in the essential sudo utility affect millions of systems.
A team from Google has now posted at length about the issue in the hope of "sparking industry-wide discussion and progress on the security of open source software."
The post – called "Know, Prevent, Fix" – is co-authored by Eric Brewer, VP of infrastructure at Google; distinguished engineer Rob Pike (co-designer of the Go language); principal software engineer Abhishek Arya; Anne Bertucio, program manager for Open Source Security; and product manager Kim Lewandowski.
Separately, Google is a founding member of the Linux Foundation's OpenSSF (Open Source Security Foundation), along with many others including GitHub, GitLab, Intel, IBM, Microsoft, NCC Group, OWASP, Red Hat and VMware.
The new post references some of the work of OpenSSF, in particular Security Scorecards, which is an automated tool to assess the security of a project according to various criteria such as use of code review, static analysis, tests, and the existence of a security policy.
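To give a flavour of the kind of criteria Scorecards automates, here is a minimal sketch in Go – not the Scorecards code itself – that checks a single signal: whether a GitHub repository publishes a security policy at its root. The repository name is a placeholder.

```go
// securitycheck.go: a minimal sketch of one Scorecards-style criterion –
// whether a repository publishes a security policy. Illustration only,
// not the actual OpenSSF Scorecards implementation.
package main

import (
	"fmt"
	"net/http"
)

// hasSecurityPolicy reports whether the given GitHub repo exposes a
// SECURITY.md at its root, using the public contents API.
func hasSecurityPolicy(owner, repo string) (bool, error) {
	url := fmt.Sprintf("https://api.github.com/repos/%s/%s/contents/SECURITY.md", owner, repo)
	resp, err := http.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	// "example"/"project" is a placeholder repository name.
	ok, err := hasSecurityPolicy("example", "project")
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("security policy present:", ok)
}
```

The real tool combines many such signals – code review, static analysis, tests, branch protection – into a single score per project.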
Google suggested that "open source software should be less risky on the security front, as all of the code and dependencies are in the open and available for inspection and verification," but noted that this only applies if people are "actually looking."
These dependency chains mean thousands of packages may be in use in a single application, making it hard to understand their overall security. "We must focus on making fundamental changes to address the majority of vulnerabilities," the team insisted.
Google, it turns out, does not entirely trust the usual open-source repositories and package managers. The company keeps "a private repo of all open source packages we use internally – and it is still challenging to track all of the updates," we are told.
It is looking for better tools to automate this. The post appears to be based in part on the company's own internal practices, and recognises that without cooperation across the software industry such standards would be beyond the means of most organisations.
The company's proposal includes some specifics, noting that some of the ideas are intended only for open-source software categorised as critical:
- A standard schema for vulnerability databases. This would make automation easier, as tools could understand data consistently across the industry (a sketch of what such a record might look like follows this list).
- A notification system for the actual discovery of vulnerabilities.
- That no changes are made to critical open-source software without code review and approval by two independent parties.
- That owners and maintainers of critical software projects are not anonymous but have verified identities, either public or via a trusted entity, and use strong authentication such as 2FA. The team proposes developing a federated model for identities.
- For critical software, tamper-checking for software packages and artefacts, such as Google itself proposed with Trillian.
- For critical software, an attested build system, perhaps with trusted agents that provide a build service and sign the compiled packages.
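On the first of those points, a common schema would let any scanner or package manager consume advisories from any database. The sketch below is a hypothetical, minimal Go rendering of such a record; the field names are illustrative assumptions, not a schema Google or OpenSSF have settled on.

```go
// vulnrecord.go: a hypothetical, minimal shape for a shared vulnerability
// record, illustrating the kind of standard schema the post calls for.
// Field names are assumptions for illustration, not a published spec.
package main

import (
	"encoding/json"
	"fmt"
)

// AffectedRange identifies the versions of a package hit by a vulnerability.
type AffectedRange struct {
	Introduced string `json:"introduced"` // first affected version
	Fixed      string `json:"fixed"`      // first version containing the fix
}

// VulnRecord is one machine-readable advisory that any tool could consume.
type VulnRecord struct {
	ID         string          `json:"id"`         // advisory identifier
	Package    string          `json:"package"`    // affected package name
	Ecosystem  string          `json:"ecosystem"`  // npm, PyPI, Go, etc.
	Summary    string          `json:"summary"`    // short description
	Affected   []AffectedRange `json:"affected"`   // vulnerable version ranges
	References []string        `json:"references"` // links to fixes and reports
}

func main() {
	rec := VulnRecord{
		ID:         "EXAMPLE-2021-0001", // placeholder identifier
		Package:    "example-lib",
		Ecosystem:  "Go",
		Summary:    "Illustrative heap overflow in example-lib",
		Affected:   []AffectedRange{{Introduced: "1.0.0", Fixed: "1.2.3"}},
		References: []string{"https://example.com/advisory/0001"},
	}
	out, _ := json.MarshalIndent(rec, "", "  ")
	fmt.Println(string(out))
}
```

With a shared format like this, a notification system and automated dependency scanning become far easier to build, since every consumer parses the same fields.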
The Google team acknowledged that its goals for critical software are "more onerous and therefore will meet some resistance, but we believe the extra constraints are fundamental for security."
Good luck with that, Google
While Google's proposals are a logical outcome of thinking through the hows and whys of software vulnerabilities, they do seem far removed from the norms of open-source culture. Could the standards proposed be imposed on open-source projects without making them slower and more bureaucratic, and without alienating some of the highly motivated individuals who make them work?
READ MORE"Try telling the leaders of various projects like libpng, libjpeg-turbo, openssl, ffmpeg etc that they are not allowed to make 'unilateral' changes to their own projects just because they are critical software in the FOSS world," said one comment on the proposals.
It also seems odd in some ways that Google chose to post this proposal on its own open-source blog, rather than hammering out a collaborative paper in the context of OpenSSF, which is a more neutral environment, though this of course may follow.
Google's proposals seem more stringent than the ideas presented in the OpenSSF technical vision last week, which are more focused on making it easier for developers to write secure code. In the light of the SolarWinds attack, however, in which a compromised build environment was used to insert malicious code, the Linux Foundation's director of open source supply chain security, David Wheeler, posted about hardening build environments, echoing some of Google's concerns.
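The tamper-checking Google describes boils down to proving that the artefact a user downloads is the one a trusted build produced. As a baseline illustration, here is a minimal Go sketch that compares a package's SHA-256 digest against a published value; the file name and digest are placeholders, and, as SolarWinds showed, a compromised build system can publish matching digests for malicious output, which is why the proposals push on to attested builds and transparency logs such as Trillian.

```go
// verifyartifact.go: a minimal sketch of tamper-checking a built package by
// comparing its SHA-256 digest against a published value. The file name and
// digest below are placeholders, not a real release.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// fileDigest returns the hex-encoded SHA-256 of the named file.
func fileDigest(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const artifact = "mypackage-1.2.3.tar.gz" // placeholder artefact
	const published = "0000000000000000000000000000000000000000000000000000000000000000" // placeholder digest
	got, err := fileDigest(artifact)
	if err != nil {
		fmt.Println("could not hash artifact:", err)
		return
	}
	if got == published {
		fmt.Println("artifact matches published digest")
	} else {
		fmt.Println("MISMATCH: artifact may have been tampered with")
	}
}
```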
The question is not only how use of open source affects security, but how the requirements of security will impact open source. ®