How do you fix a problem like open-source security? Google has an idea, though constraints may not go down well

'Try telling leaders of libpng, libjpeg-turbo, openssl, ffmpeg etc they can't make "unilateral" changes to their own projects'

Google has proposed a framework for discussing and addressing open-source security based on factors like verified identity, code review, and trusted builds, but its approach may be at odds with open-source culture.

The security of open-source software is critical because of its wide adoption, from the Linux kernel on which most of the internet runs to little JavaScript libraries that get built into millions of web applications, sometimes via a chain of dependencies somewhat hidden from the developer. Vulnerabilities such as one discovered recently in the essential sudo utility affect millions of systems.

A team from Google has now posted at length about the issue in the hope of "sparking industry-wide discussion and progress on the security of open source software."

The post – called "Know, Prevent, Fix" – is co-authored by Eric Brewer, VP of infrastructure at Google; distinguished engineer Rob Pike (co-designer of the Go language); principal software engineer Abhishek Arya; Open Source Security program manager Anne Bertucio; and product manager Kim Lewandowski.

Separately, Google is a founding member of the Linux Foundation's OpenSSF (Open Source Security Foundation), along with many others including GitHub, GitLab, Intel, IBM, Microsoft, NCC Group, OWASP, Red Hat and VMware.

The new post references some of the work of OpenSSF, in particular Security Scorecards, which is an automated tool to assess the security of a project according to various criteria such as use of code review, static analysis, tests, and the existence of a security policy.
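The scorecard idea can be illustrated with a toy checker. This is a hypothetical sketch, not the actual OpenSSF tool (which queries repository metadata via the GitHub API); the metadata fields and check names here are invented for illustration:

```python
# Toy scorecard: each check inspects project metadata and contributes
# a pass/fail result; the overall score is the fraction of checks passed.
def score_project(meta):
    checks = {
        "code-review": meta.get("requires_review", False),
        "static-analysis": "static-analysis" in meta.get("ci_jobs", []),
        "tests": "tests" in meta.get("ci_jobs", []),
        "security-policy": meta.get("has_security_md", False),
    }
    passed = sum(checks.values())
    return checks, passed / len(checks)

# Hypothetical project metadata
meta = {"requires_review": True, "ci_jobs": ["tests"], "has_security_md": True}
checks, score = score_project(meta)
print(checks, score)  # 3 of 4 checks pass, so the score is 0.75
```

The real Scorecards tool runs many more checks and weights them, but the principle is the same: reduce a project's security posture to machine-readable signals that can be compared across thousands of repositories.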

Google suggested that "open source software should be less risky on the security front, as all of the code and dependencies are in the open and available for inspection and verification," but noted that this only applies if people are "actually looking."

These dependency chains mean thousands of packages can be in use, making it hard to understand their security. "We must focus on making fundamental changes to address the majority of vulnerabilities," the team insisted.
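The scale of the problem comes from transitive dependencies: installing one package can silently pull in many more. A minimal sketch of walking a dependency graph (the package names are invented; real resolvers must also handle version constraints and cycles across registries):

```python
from collections import deque

def transitive_deps(graph, package):
    """Breadth-first walk returning every package reachable from `package`."""
    seen, queue = set(), deque([package])
    while queue:
        pkg = queue.popleft()
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# Hypothetical dependency graph: one direct dependency fans out into four.
graph = {
    "webapp": ["http-lib"],
    "http-lib": ["url-parser", "tls-lib"],
    "tls-lib": ["crypto-lib"],
}
print(transitive_deps(graph, "webapp"))  # includes crypto-lib, two hops away
```

A vulnerability in `crypto-lib` here affects `webapp` even though the application developer never chose it, which is exactly the "hidden chain of dependencies" problem the article describes.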

Google, it turns out, does not entirely trust the usual open-source repositories and package managers. The company keeps "a private repo of all open source packages we use internally – and it is still challenging to track all of the updates," we are told.

It is looking for better tools to automate this. The post appears to be based in part on the company's own internal practices, recognising that, without cooperation across the software industry, such standards would be beyond the means of most organisations.

The company's proposal includes some specifics, noting that some of the ideas are intended only for open-source software categorised as critical:

  • A standard schema for vulnerability databases. This would make automation easier as a tool could better understand data across the industry.
  • A notification system for the actual discovery of vulnerabilities.
  • That no changes are made to critical open-source software without code review and approval by two independent parties.
  • That owners and maintainers of critical software projects are not anonymous but have verified identities, either public or via a trusted entity, and use strong authentication such as 2FA. The team proposes developing a federated model for identities.
  • For critical software, tamper-checking for software packages and artefacts, such as Google itself proposed with Trillian.
  • For critical software, an attested build system, perhaps with trusted agents that provide a build service and sign the compiled packages.

The Google team acknowledged that its goals for critical software are "more onerous and therefore will meet some resistance, but we believe the extra constraints are fundamental for security."

Good luck with that, Google

While Google's proposals are a logical outcome of thinking through the hows and whys of software vulnerabilities, they do seem far removed from the norms of open-source culture. Could the standards proposed be imposed on open-source projects without making them slower and more bureaucratic, and without alienating some of the highly motivated individuals who make them work?

"Try telling the leaders of various projects like libpng, libjpeg-turbo, openssl, ffmpeg etc that they are not allowed to make 'unilateral' changes to their own projects just because they are critical software in the FOSS world," said one comment on the proposals.

It also seems odd in some ways that Google chose to post this proposal on its own open-source blog, rather than hammering out a collaborative paper in the context of OpenSSF, which is a more neutral environment, though this of course may follow.

Google's proposals seem more stringent than the ideas presented in the OpenSSF technical vision last week, which are more focused on making it easier for developers to write secure code. In the light of the SolarWinds attack, however, in which a compromised build environment was used to insert malicious code, the Linux Foundation's director of open source supply chain security, David Wheeler, posted about hardening build environments, echoing some of Google's concerns.
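At its simplest, the tamper-checking that both Google and Wheeler advocate means comparing a downloaded artefact's digest against an independently published value before using it. A minimal sketch using Python's standard `hashlib` (the package contents and the idea of a "signed manifest" here are placeholders; real systems such as Trillian add transparency logs and signatures on top):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Reject an artefact whose digest doesn't match the published value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

package = b"example package contents"
published = hashlib.sha256(package).hexdigest()  # normally from a signed manifest

print(verify_artifact(package, published))               # True: digest matches
print(verify_artifact(b"tampered contents", published))  # False: detected
```

The hard part, as SolarWinds showed, is not the comparison but trusting the published digest itself, which is why the proposals reach for attested builds and append-only logs rather than a bare checksum.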

The question is not only how use of open source affects security, but how the requirements of security will impact open source. ®
