It's official: TLS 1.3 approved as standard while spies weep

Now all you lot have to actually implement it


An overhaul of a critical internet security protocol has been completed, with TLS 1.3 becoming an official standard late last week.

Describing it as "a major revision designed for the modern Internet," the Internet Engineering Task Force (IETF) noted that the update contains "major improvements in the areas of security, performance, and privacy."

One of the biggest is that it will make it much harder for eavesdroppers to decrypt intercepted traffic. The mass surveillance of internet communications by the US National Security Agency (NSA), revealed in 2013 by Edward Snowden, was a major driver in the design of the new protocol.

Work on 1.3 began in April 2014 and reached draft 28 before finally being approved in March this year. The protocol is so central to the encryption of internet traffic that it took until August 10 for engineers to confirm that nothing in it was going to cause any major problems.

The new version – which some argue could be called TLS 2.0 due to the significance of the changes – makes no fewer than three previous RFCs obsolete and updates another two. As things stand, there are no identified security holes in the algorithms used in TLS 1.3; the same cannot be said for 1.2.

And that points to the most critical part of the new RFC 8446: getting people to actually implement it.

Drag and drop

It shouldn't be that hard. One of the editors of the TLS – and HTTPS – specs, Eric Rescorla, told The Reg earlier this month that a lot of work had been done to make it easy to deploy.

"It's a drop-in replacement for TLS 1.2, uses the same keys and certificates, and clients and servers can automatically negotiate TLS 1.3 when they both support it," he noted, adding: "There's pretty good library support already, and Chrome and Firefox both have TLS 1.3 on by default."

There have been problems: earlier drafts broke a lot of middleboxes, and Google paused its plan to support the new protocol in Chrome when an IT schools administrator in Maryland reported that a third of the 50,000 Chromebooks he managed bricked themselves after being updated to use the tech.

The way TLS 1.3 works also sparked some last-minute pleading from the banking industry for a change that would effectively have introduced a backdoor into the system, because the protocol could lock banks out of seeing what was happening within their own networks. In response, engineers made a few improvements, and the general view now is that if TLS 1.3 breaks your network monitoring, you were probably doing it wrong in the first place.

The IETF is keen to point out that it put a lot of work into making sure that 1.3 has been tested in real-world situations before getting the official stamp.

"The process of developing TLS 1.3 included significant work on 'running code'," it noted, adding: "This meant building and testing implementations by many companies and organizations that provide products and services widely used on the Internet, such as web browsers and content distribution networks."

Aside from the fact that the new protocol provides security improvements, there are also good networking reasons to put it in place. The new version is less resource-hungry and more efficient, meaning you should be able to both reduce latency and benefit from lower CPU usage.
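
Part of that latency win is structural: a full TLS 1.3 handshake completes in one round trip, where TLS 1.2 needs two. A rough, hedged way to see it in Python – again against an illustrative host, and with the caveat that network jitter dominates any single sample, so treat this as a demonstration rather than a benchmark – is to time the handshake with the protocol ceiling pinned to each version:

    import socket
    import ssl
    import time

    def handshake_time(max_version, host="example.com", port=443):
        # Cap the negotiable version, then time only the TLS handshake
        # (the TCP connection is set up before the clock starts).
        ctx = ssl.create_default_context()
        ctx.maximum_version = max_version
        with socket.create_connection((host, port)) as sock:
            start = time.perf_counter()
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version(), time.perf_counter() - start

    # One fewer round trip should show up as a faster 1.3 handshake.
    print(handshake_time(ssl.TLSVersion.TLSv1_2))
    print(handshake_time(ssl.TLSVersion.TLSv1_3))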

Hole in one?

If there is one downside, it is concern over the addition of a component called "0-RTT Resumption", which effectively allows the client and server to remember whether they have spoken before and, if so, forgo security checks, using previous keys to start talking immediately.

That will make connections much faster, but it opens up a potential security hole – data sent via 0-RTT can be replayed by an attacker who intercepts it – that those seeking to exploit TLS 1.3 will almost certainly focus on. The change was pushed by big tech companies like Google, which stand to benefit massively from faster communications across billions of connections, but some fear it will come back to bite everyone. Some companies are not implementing 0-RTT as a result.
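
For the curious, the "remembering" is ordinary session resumption; 0-RTT simply lets application data ride along with the resumption attempt. The Python sketch below shows only the resumption half against an illustrative host – the standard library resumes sessions but does not expose 0-RTT early data, which needs something lower level such as OpenSSL's C API:

    import socket
    import ssl

    HOST = "example.com"  # illustrative host, assumed to issue session tickets
    ctx = ssl.create_default_context()

    def connect(session=None):
        sock = socket.create_connection((HOST, 443))
        return ctx.wrap_socket(sock, server_hostname=HOST, session=session)

    first = connect()
    first.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    first.recv(4096)         # under TLS 1.3 the ticket arrives after the handshake,
    ticket = first.session   # so read some data before capturing the session
    first.close()

    # A second connection presents the ticket instead of redoing full checks.
    second = connect(session=ticket)
    print(second.session_reused)  # True if the server accepted resumption
    second.close()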

But that aside, TLS 1.3 represents a big jump in general security. And considering that implementation shouldn't be too difficult, it's a no-brainer for sysadmins. Of course, as much as moving to 1.3 will increase general security, so will getting people to ditch earlier, insecure protocols. There is even a push to officially kill off TLS 1.0 and 1.1.

You see, sometimes there is a good security story. ®
