Azul lays claim to massive efficiency gains with remote compilation for Java
High price and no ARM64 yet. We quiz boss on whether it makes sense
Interview Azul, a provider of OpenJDK (Java runtime) builds, has introduced a "Cloud Native Compiler" which offers remote compilation of Java to native code, claiming it can reduce compute resources by up to 50 per cent.
When a Java application runs, a JIT (Just-in-time) compiler, usually the OpenJDK JIT called HotSpot, compiles the Java bytecode to native machine code to optimise performance. It is a highly optimised process – but Azul reckons it can improve it further by removing that responsibility from the VM or container where the application is running.
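That compilation work is easy to observe from the command line. The sketch below (class and method names are our own illustration, not Azul's) gives HotSpot a method hot enough to be JIT-compiled; running it with the standard `-XX:+PrintCompilation` flag logs each method as it is translated to native code.

```java
// HotLoop.java – a tiny workload whose inner method becomes "hot"
// enough for HotSpot's JIT to compile it from bytecode to native code.
// Run with:  java -XX:+PrintCompilation HotLoop
// to see a log line for each method as the JIT compiles it.
public class HotLoop {
    // Called often enough to cross HotSpot's compilation threshold.
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 10_000; i++) total += sum(1_000);
        System.out.println(total); // prints 4995000000
    }
}
```

Without the flag the program behaves identically; the JIT's work is invisible to the application, which is the property that lets Azul relocate it to a remote service.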
"The problem with [local compilation] is that you're constrained by local machine resources," Azul CEO and co-founder Scott Sellers tells The Register. "There is no sharing of information between one instance of the Java runtime and the next. So everything is very siloed and rigid. The Cloud Native Compiler is about offloading the compilation process, taking it out of the JVM [Java Virtual Machine] and instead putting that into a cloud service."
Is it really efficient to have a Java application send its bytecode over the network to another service that compiles and sends back the results to be executed?
"You're talking about infrastructure that is very close, from a latency perspective. So surprisingly, yes. The amount of information that gets passed back and forth is very little, relative to the amount of compute required to compile that bytecode into the native instruction set. As a result, architecturally it makes sense to do this. There is no change to the application," says Sellers.
HotSpot would optimise the code for the hardware on which it runs – does the remote compiler do that as effectively? "It takes that into account," says Sellers. "The compiler that is inside our product is called Falcon, and Falcon is based on the LLVM project. Most of LLVM is used for static compilation only, we're the only ones who use it in a Java runtime context. Inherent to LLVM is that it's a cross-compiler. When it receives the request from whatever JVM is asking, it knows exactly the underlying microarchitecture of that processor, so the code that is compiled is highly tuned for that processor."
Is this a lot of effort for something that is essentially just a startup cost, which will make no difference once the application is running? "There's two aspects to this," says Sellers. "The first is that in the CI/CD DevOps mentality that many enterprises embrace today, they are restarting their applications quite commonly, three to four times a day. Startup time is more important than it used to be.
"But the startup benefits are secondary… it turns out that the way machines are sized has a lot to do with the amount of resources needed to do a good compilation job at the beginning. The amount of compute and memory needed to run a Java application is huge at the beginning, and then settles down, so you get significant over-provisioning of resources that are only needed at the beginning of a run. In many cases we're seeing customer public cloud costs cut in half just by moving the compilation process to the Cloud Native Compiler."
That is quite a claim. Why not just do ahead-of-time (AOT) compilation and avoid the JIT compiler completely?
"AOT compilation does improve startup time, but it comes with costs," says Sellers. "For example, the native compiler technology in Graal [Java AOT compiler] is not fully Java compatible, so you can't run all Java applications on it. But the bigger problem, the reason that dynamic compilation technology exists, is how performant an application can become making dynamic compilation decisions… anywhere from 10-20 per cent to 80-100 per cent faster than what you can do with static compilation."
Are the benefits of the Cloud Native Compiler application-dependent? For example, an application that loads a lot of data into memory will need a lot of resources irrespective of compiler technology?
"We would love it if there were more applications where the memory to compute ratio was very high," Sellers tells us, claiming that it would show off the Azul garbage collector. "In reality, those are kind of niche."
The Cloud Native Compiler is customer-deployed, and Sellers says that the normal approach is to use Kubernetes since the native compiler service itself needs to auto-scale. "Now in the era of Kubernetes it's straightforward, there's a standard way to deploy an inherently scalable platform like Cloud Native Compiler," he says. "Over time, we do expect that we will have a SaaS offering." Kubernetes is not essential, though, according to Sellers.
Does Cloud Native Compiler work with ARM64 chips like AWS Graviton? "Our technology today is x86, but we'll have an answer for that pretty soon," Sellers tells us.
Cloud Native Compiler is not offered as a separate product, but as part of Azul's commercial subscription called Platform Prime, which runs from $100 per annum per virtual core (vCore), down to $20/vCore for the largest deployments. Azul plans to add further capabilities to what it calls Intelligence Cloud next year, which will be "more into the analytical side of things," Sellers says.
Despite Azul's enthusiasm for its own product, there are downsides, such as adding a dependency and further complexity to Java application deployments. We have asked for more data on real-world resource savings since this is the most significant of the claimed benefits. ®