Hurrah! TLS 1.3 is here. Now to implement it and put it into software

Which won't be terrifyingly hard: it's pretty good at making old kit like the way it moves

The ink has dried, so to speak, on TLS 1.3, so work on software implementing the standard can begin in earnest.

As we reported last week, now that the protocol's received the necessary consensus in the IETF, implementation “will require people to put in some effort to make it all work properly.”

Vulture South talked to one of the people involved in that implementation, Mauritian developer Loganaden Velvindron, who said the biggest change he's seen since IETF 100 in Singapore last November is that developers no longer seem so wary of the protocol.

“What was interesting to me is that finally, open source developers are no longer saying 'wait and see' about TLS 1.3,” said Velvindron, who participated in the TLS 1.3 hackathon for IETF 101 in London*.

Velvindron said the hackathon's work to build TLS 1.3 into OpenSSL was the gathering's most important contribution, since that will have the most upstream impacts.

The OpenSSL architecture helped: it “abstracts a lot of the low-level changes [behind] the API,” he said.
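That abstraction is visible even from language bindings built on libssl. As a minimal sketch (using Python's ssl module, which wraps OpenSSL, rather than the hackathon's own patches), opting an application into TLS 1.3 can be little more than a version pin:

```python
import ssl

# A minimal sketch: Python's ssl module wraps OpenSSL, so once the
# underlying libssl supports TLS 1.3 (OpenSSL 1.1.1+), an application
# can opt in by pinning the protocol version rather than rewriting
# any handshake logic - the low-level changes stay behind the API.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and older
ctx.maximum_version = ssl.TLSVersion.TLSv1_3
```

Everything else – certificate verification, the connection calls themselves – is unchanged from a TLS 1.2 client.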

His team had a bunch of other projects ready to go by the end of the hackathon: the ubiquitous wget command-line download tool, the Nagios-plugins set of network-monitoring packages, the Git and Mercurial version control systems, the Eclipse Paho machine-to-machine library, and the Monit process/file monitor.

Along the way, Velvindron said, the team discovered a class of bug in how some applications construct the ClientHello that other app maintainers should watch out for.

Some applications “don't work with 1.3 because ... the CLIENT HELLO is not constructed correctly, it causes handshake failures”, he said.
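For illustration only (field layout per RFC 8446, not code from the hackathon): a TLS 1.3 ClientHello advertises the new version through the supported_versions extension, type 43, rather than by bumping the hello's own version field, and an encoding slip in this extension is exactly the sort of thing that produces handshake failures:

```python
import struct

def supported_versions_ext(versions):
    """Encode the TLS supported_versions extension (type 43, RFC 8446).

    versions is a list of 16-bit code points, e.g. 0x0304 for TLS 1.3
    and 0x0303 for TLS 1.2, in preference order.
    """
    body = b"".join(struct.pack("!H", v) for v in versions)
    payload = struct.pack("!B", len(body)) + body          # 1-byte list length
    return struct.pack("!HH", 43, len(payload)) + payload  # type, ext length

# Offer TLS 1.3 first, with TLS 1.2 as a fallback
ext = supported_versions_ext([0x0304, 0x0303])
```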

Also, Velvindron told us, while the signed-off TLS 1.3 included a resolution to the “middlebox controversy”, it could take a while for that to be implemented in the field.

Middleboxes – chiefly enterprise-edge traffic inspectors and packet filters – were one of the points of contention that helped delay TLS 1.3 for four years.

The IETF decided that systems like OpenSSL should ship with “middlebox compatibility” enabled by default. In this mode, the TLS 1.3 connection looks like TLS 1.2, Velvindron said.

“Assuming that the middlebox implements TLS 1.2 correctly, then the session goes through … it looks like TLS 1.2, but it's using TLS 1.3.”

That means some of the worst aspects of TLS 1.2 – for example, that attackers could trick the system into falling back to an old and insecure ciphersuite – are plugged without customers having to undertake a large-scale upgrade of existing systems.
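The anti-downgrade mechanism itself (RFC 8446 section 4.1.3) is simple enough to sketch: a TLS 1.3-capable server that is talked down to TLS 1.2 stamps a fixed sentinel into the last eight bytes of its ServerHello random, and a TLS 1.3-capable client that spots the sentinel aborts rather than accept the downgrade. The helper names here are our own:

```python
import os

# RFC 8446's sentinel for "a TLS 1.3 server negotiated TLS 1.2":
# the ASCII bytes "DOWNGRD" followed by 0x01.
DOWNGRADE_TLS12 = bytes.fromhex("444f574e47524401")

def server_random(downgraded_to_tls12: bool) -> bytes:
    """Generate a 32-byte ServerHello random, stamping the sentinel
    when a TLS 1.3-capable server ends up negotiating TLS 1.2."""
    rnd = bytearray(os.urandom(32))
    if downgraded_to_tls12:
        rnd[-8:] = DOWNGRADE_TLS12
    return bytes(rnd)

def client_detects_downgrade(rnd: bytes) -> bool:
    """A TLS 1.3-capable client aborts if it sees the sentinel."""
    return rnd[-8:] == DOWNGRADE_TLS12

assert client_detects_downgrade(server_random(True))
```

Because the sentinel lives inside a field every TLS 1.2 box already passes through untouched, the protection works without the middlebox knowing anything about TLS 1.3.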

If there's anything wrong with how the middlebox implements TLS 1.2, the connection will break and users will get a warning; Velvindron said some middleboxes will probably need a firmware upgrade.

What's next: DNS privacy

TLS 1.3 implementation is a long way from finished, but with the project well begun, the group behind it is branching out.

One project that's caught their eye is the IETF's work on DNS privacy, making sure that encrypted DNS sessions don't leak information.

“You still need RFC 7830, DNS padding”, Velvindron told Vulture South.

That's because even the size of an encrypted message can “leak information about the message,” he explained, “and that can make it easier for snoopers to get an idea of the kind of message going through.”

Padding under RFC 7830 makes sure the packet aligns to a particular block size.
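As a rough sketch of the arithmetic (block size per RFC 8467's recommendation for queries; the function name is our own): the EDNS(0) Padding option – option code 12 – carries enough zero bytes to round the message up to the next block boundary, so a short query and a long one leave the wire at the same length:

```python
QUERY_BLOCK = 128         # RFC 8467 recommends 128-byte blocks for queries
PAD_OPTION_OVERHEAD = 4   # 2-byte option code (12) + 2-byte option length

def padded_length(msg_len: int, block: int = QUERY_BLOCK) -> int:
    """On-the-wire length of a DNS message after RFC 7830 padding:
    the message plus the option header, rounded up to a block multiple."""
    total = msg_len + PAD_OPTION_OVERHEAD
    return ((total + block - 1) // block) * block

print(padded_length(53))   # -> 128: a 53-byte query pads up to one block
```

A snooper watching the encrypted channel then sees only multiples of the block size, not lengths that fingerprint individual names.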

One thing that emerged during IETF 101 is that DNS is becoming unwieldy: according to Velvindron, noted PowerDNS developer Bert Hubert asked in a presentation: “How many features can we add to this protocol before it breaks?”

Since there are 185 DNS-related RFCs, things could already be starting to creak – which is why Hubert has created the “DNS Camel” (code at GitHub), a tool that crawls IETF archives for DNS-related documents and tabulates them.

“Very few people understand all those features, they're a very small group in the IETF”, Velvindron told Vulture South.

One result is a growing concern that developers tend to work in isolation: “We don't test a feature with other DNS features to make sure it interoperates correctly.”

There was a consensus, he said, that this needs to change – that developers need to test their DNS features against others.

As part of that, the meeting discussed the need to bring ISPs on board, to explain which features they're using, so developers know what's important to test against. ®

*Other hackathon participants and members were Pirabarlen Cheenaramen, Nitin Mutkawoa, Codarren Velvindron, Yasir Auleear, Rahul Golam, Nigel Yong Sao Yong and Yash Paupiah.
