BIND holes mean big trouble on the Net

MS takedown just a preview


Serious new security holes have been found in the ubiquitous BIND name server (DNS) program, the worst of which jeopardize hundreds of thousands of computers and make key elements of the Internet's infrastructure vulnerable to hack attacks, according to a Monday morning advisory from the Computer Emergency Response Team (CERT).

The advisory documents four vulnerabilities in BIND, including two buffer overflows that could allow attackers to remotely gain unrestricted access to machines running the program, which comes installed in a dozen different vendor flavours of Unix and Linux. "Because the majority of name servers in operation today run BIND, these vulnerabilities present a serious threat to the Internet infrastructure," the advisory reads.

California security company Network Associates Inc. (NAI) discovered the buffer overflows in December, and notified the Internet Software Consortium (ISC), which maintains BIND. Upgrades that eliminate the holes are now available from some vendors, and directly from the ISC, which spent the weekend quietly urging network operators to upgrade in advance of Monday's announcement.

Name servers perform the critical task of translating Internet domain names to the address numbers needed to connect online.
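
A minimal sketch of that translation in C, using the standard resolver call getaddrinfo() (the hostname www.example.com is purely illustrative):

    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void) {
        struct addrinfo hints, *res;
        char addr[INET_ADDRSTRLEN];
        int err;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET;        /* ask for IPv4 answers */
        hints.ai_socktype = SOCK_STREAM;

        /* The resolver library sends a DNS query to the configured
           name server (very often BIND) and returns the answer. */
        err = getaddrinfo("www.example.com", NULL, &hints, &res);
        if (err != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return 1;
        }

        inet_ntop(AF_INET,
                  &((struct sockaddr_in *)res->ai_addr)->sin_addr,
                  addr, sizeof(addr));
        printf("www.example.com -> %s\n", addr);
        freeaddrinfo(res);
        return 0;
    }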

Last week, a technical problem left Microsoft's name servers inaccessible for two days, effectively cutting off the company's web properties, including MSN and Hotmail. The Microsoft glitch was apparently unrelated to the BIND vulnerabilities -- the company has its own name server software -- but experts say the snafu provided a small-scale preview of the havoc the BIND holes could bring.

"These vulnerabilities have the potential to take out big chunks of the Internet," says NAI's Jim Magdych. "The things we've come to rely upon in the new economy could be rendered inaccessible."

Exploits certain to follow

The most serious of the vulnerabilities is a buffer overflow in the portion of BIND 8 that processes transaction signatures. Another buffer overflow, in version 4, was found in the section of code that formulates error messages. Either hole would let a sophisticated programmer write code that kills the name server remotely, or that grants 'root' access to the operating system on which it runs.

Buffer overflow vulnerabilities arise when a program accepts more data from an outside source than it can store in the memory allotted for it. The extra data overflows into a portion of memory where instructions are stored, and ends up being executed as though it were part of the original program.
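
As an illustration of the bug class only (this is not BIND's code, and parse_request() is a hypothetical handler), a short C sketch:

    #include <stdio.h>
    #include <string.h>

    /* Unsafe: buf holds 16 bytes, but strcpy() copies however many
       bytes the sender supplies. Anything past the 16th byte overruns
       adjacent stack memory, including the saved return address, with
       attacker-chosen values. */
    void parse_request(const char *input) {
        char buf[16];
        strcpy(buf, input);                  /* no length check */
    }

    /* Safe: the copy is bounded by the buffer's capacity. */
    void parse_request_safe(const char *input) {
        char buf[16];
        strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
    }

    int main(void) {
        parse_request_safe("a harmless request");
        printf("ok\n");
        return 0;
    }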

Because of the nature of the BIND bugs, to exploit either of them a hacker will need to craft code with the handicap of a limited instruction set, a task akin to constructing a sentence without using the letters 't' and 's'. "In a normal buffer overflow, you tend to have complete control of the values," says NAI engineer John McDonald, one of the researchers who discovered the bugs. "These will require more clever exploits. The values that you're overrunning the buffer with are limited."

But regardless of the effort involved, network administrators likely have little time to perform an upgrade before easy-to-use programs exploiting the vulnerabilities become widely available and a wave of attacks ensues.

"It's a very subtle bug, and I would hope that people won't turn around and have an exploit out in eight hours," says NAI's Magdych. "But it would probably be very optimistic to think that it'll be more than a day or two."

CERT recommends that users of BIND 4.9.x or 8.2.x upgrade to the newly released BIND 4.9.8 or BIND 8.2.3. But if history is a guide, many network administrators will not hear, or not act on, that advice, and thousands of vulnerable systems will remain open.
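
Administrators unsure what they are running can ask the server itself: by convention, BIND answers a TXT query for "version.bind" in the CHAOS class with its version string (though servers can be configured to hide or fake it). A minimal sketch using libresolv, compiled with -lresolv; error handling is kept short:

    #include <stdio.h>
    #include <sys/types.h>
    #include <netinet/in.h>
    #include <arpa/nameser.h>
    #include <resolv.h>

    int main(void) {
        unsigned char answer[NS_PACKETSZ];
        ns_msg msg;
        ns_rr rr;
        int len;

        res_init();
        /* Query the default name server for its version string. */
        len = res_query("version.bind", ns_c_chaos, ns_t_txt,
                        answer, sizeof(answer));
        if (len < 0) {
            fprintf(stderr, "query failed or answer hidden\n");
            return 1;
        }
        if (ns_initparse(answer, len, &msg) < 0 ||
            ns_parserr(&msg, ns_s_an, 0, &rr) < 0) {
            fprintf(stderr, "could not parse answer\n");
            return 1;
        }
        /* TXT rdata: one length byte, then that many characters. */
        const unsigned char *rd = ns_rr_rdata(rr);
        printf("version.bind = %.*s\n", (int)rd[0], rd + 1);
        return 0;
    }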

In April 1998, the discovery of a buffer overflow in an earlier version of BIND led to a cyber-crime wave, with CERT logging intrusion reports into November of that year, despite a similar advisory and available patches. According to court records, victims included the U.S. Defence Department, which suffered intrusions into unpatched systems around the country.

© 2001 SecurityFocus.com, all rights reserved.

