HashiCorp tweaks Terraform with user interface changes and AI-infused testing

Can I use AI to write me a license that won't annoy the open-source community?

HashiConf A host of Terraform features join HashiCorp's flagship product today, including testing and user interface tweaks aimed at cutting errors in infrastructure code.

Terraform's recent license shenanigans, which have enraged significant chunks of the open-source community and given rise to OpenTofu, have also served to add controversy to that driest of technologies – infrastructure as code.

To redress the balance, HashiCorp has shown off features – some generally available and some in beta or private preview – to make the Terraform experience smoother.

Most notable are the improvements around module testing. Modules are a core component of Terraform and are used by customers to standardize infrastructure provisioning. However, by their very nature, a bug in a module's code can cause mayhem, from outages to security holes.

A test framework showed up in Terraform 1.6 and is now directly integrated with the private registry, with a branch-based publishing method used to control how and when modules are published – a move away from the current Git tag-based publishing. Yet, while kicking off tests automatically is useful, actually writing the scripts can be a chore, particularly considering the need to learn yet another new framework.
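The framework introduced in Terraform 1.6 defines tests in `.tftest.hcl` files. As a minimal sketch of what such a test looks like — the variable name `bucket_name` is a hypothetical module input, not from any specific module:

```hcl
# tests/smoke.tftest.hcl — illustrative sketch; variable names are hypothetical

variables {
  bucket_name = "example-test-bucket"
}

run "validate_bucket_name" {
  # "plan" evaluates the configuration without provisioning real infrastructure
  command = plan

  assert {
    condition     = length(var.bucket_name) <= 63
    error_message = "Bucket names must be 63 characters or fewer"
  }
}
```

Running `terraform test` executes each `run` block and reports any failed assertions, which is exactly the kind of boilerplate the generative tooling below aims to produce for module authors.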

Enter generative AI

Even Terraform, it seems, is not immune to the latest and greatest fad in IT. In this beta instance, generative AI is being used to kickstart the writing of module tests, though HashiCorp stresses that the results are intended only as a starting point for module authors.

The company said: "Our new generated module tests feature leverages a large language model (LLM) to auto-generate a suite of customized tests for a module within the private registry.

"This model is specifically trained on HCL and the Terraform test framework to help module authors begin testing their code right away."

It's undoubtedly useful – as with similar tools, Terraform Cloud will generate the code for tests customized to the module. The code can then be copied or downloaded from the user interface and added to the module repository for later use.

HashiCorp also addressed security and privacy concerns around generative AI use at HashiConf today. It said: "Customer data security is very important to us, and our AI-test generation features have been built so that no customer or community module data is used for training models and module data won't be stored with third-party vendors."

As well as AI-generated module tests, the company also showed off enhanced editor validation in the Terraform extension for Visual Studio Code – now generally available – and its Stacks concept, which is currently in Private Preview.

Although Terraform's modular approach to infrastructure works well for some applications, the company admitted that "large-scale deployment and management often remains tedious, complex, and repetitive."

It isn't wrong. A user must understand the dependencies and provision modules and workspaces manually, one by one, in the correct order. Add multiple environments, and the complexity ramps up.

Stacks is HashiCorp's attempt to simplify things by allowing multiple Terraform modules to be organized and deployed in a stack using components. This construct groups together different interdependent systems, such as network and database modules.

Once defined, the work can be replicated multiple times, with differing input values in each deployment.
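Stacks is still in private preview and the final syntax may change, but HashiCorp's description of components grouping interdependent modules suggests a configuration along these lines – a hedged sketch in which the module paths, input names, and file names are all hypothetical:

```hcl
# components.tfstack.hcl — illustrative only; Stacks is in private preview
# and the shipping syntax may differ. All names here are hypothetical.

component "network" {
  source = "./modules/network"
  inputs = {
    cidr_block = var.cidr_block
  }
}

component "database" {
  source = "./modules/database"
  inputs = {
    # the database component consumes the network component's outputs,
    # so Terraform can infer the provisioning order
    subnet_ids = component.network.subnet_ids
  }
}

# deployments.tfdeploy.hcl — the same stack replicated with different inputs
deployment "dev" {
  inputs = { cidr_block = "10.0.0.0/16" }
}

deployment "prod" {
  inputs = { cidr_block = "10.1.0.0/16" }
}
```

The appeal is that the dependency ordering the user currently tracks by hand is expressed once, in the component references, and each `deployment` block stamps out the whole group with its own input values.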

It's an interesting approach to the problems Terraform users encounter when faced with large-scale deployments. Still, the company is hardly the only Infrastructure as Code (IaC) outfit trying to work in this space. Others include Puppet, which has many fans in the enterprise community, even if managing it day to day can be challenging compared to the HashiCorp product. ®
