Microsoft's Shared Source defeats Trustworthy Computing

Security through obscurity only works to a point


Opinion The recent leak of Windows source code onto the Web has made a lot of people jumpy. According to MS news blog Bink.nu, the company has already discovered at least one downloader and sent him a nastygram. If this is true, it indicates an aggressive response back in Redmond, a scrambling to plug the leaks and intimidate the curious, RIAA-style.

It should surprise no one that the proverbial chickens have come home to roost. Microsoft's security is in part a function of keeping its source code out of the wrong hands. Thus the Shared Source gimmick is in direct conflict with that portion of the company's Trustworthy Computing gimmick that depends on secrecy.

No one wants malicious coders to get their hands on enough of the Windows source to accelerate the never-ending torrent of novel exploits that already arrives on a weekly basis.

Keeping the code under lock and key is a brake on exploit development: it works simply by making the process more difficult. But by sharing it with numerous partners, the company also makes it more difficult to keep the lid on. Shared Source means that future leaks are inevitable.

And these leaks have consequences. It took only a few days for a computer enthusiast to find a simple exploit against IE 5 based on the leaked code.

It wasn't a terribly important one, but it was, the enthusiast claimed, based on a quick review of the code. More serious exploits may yet be found in the code now circulating - or not, as the case may be. But the next time source code is stolen or accidentally released - and it will happen - there might be widespread and very serious security implications.

Deep in the Bowels of Redmond

Consider that the 'recent' ASN.1 vulnerability took six months to fix. The problem was not so much Microsoft's bureaucratic inertia as the fact that the flaw was located deep in the bowels of Windows.

It was difficult to fix because it affected many interdependent components. Microsoft's penchant for system integration and interdependence is itself an obstacle to developing patches that work properly and don't break other things.

It's no wonder the company is jumpy. Its nightmare consists of a dual threat: first, that more source code will leak and lead to the discovery of a serious exploit, and second, that the problem (like the ASN.1 bug) will be so deeply rooted in the system that patching it will require months of work. It is reasonable to foresee a situation in which millions of Windows boxes would be susceptible to an exploit that can't be patched adequately for many months.

It could happen as a result of the recent code leak, or we might have to wait for the next blunder. But it will happen: it's only a matter of time.

If (perhaps in some alternate universe) Microsoft products were open source, the need to maintain secrecy would be eliminated, and with it a significant source of anxiety. But that's not an option.

Security through obscurity can work in some situations, but only so long as obscurity is maintained. If I bury money in my yard, it will remain safe so long as I don't tell anyone about it, and so long as some accident doesn't reveal it. If you go down that path, you have got to stay on it, and that's difficult under the best of circumstances because accidents do happen.

Undeserved obscurity

Microsoft's mistake is trying to have it both ways. It wants to keep the code under wraps, yet share parts of it with big clients and partners whom it hopes it can trust. But some of these trusted parties will be unscrupulous and others incompetent, so occasional failures of obscurity are inevitable. As Poor Richard's Almanack noted many years ago, "three may keep a secret, if two of them are dead."

The security of MS products, bad as it is, will only be degraded further so long as the company relies on keeping its source code secret, while at the same time sharing it with hundreds of 'select' outsiders. This is a contradictory approach. It simply can't be made to work over the long term.

Unfortunately, at this point, there is no solution. The code can't be put back in the box. This is simply a bad decision that the company has made, the consequences of which have yet to be fully felt.

But they will. It's only a matter of time. ®

