Evolving Java Runtimes for a Cloud-Centric World

December 15, 2021

Java Runtimes Evolve from a Pre-Cloud Mindset

Java Virtual Machines (JVMs) were created in a pre-cloud world and have continued to operate with a pre-cloud mindset: They are isolated, self-reliant units constrained to limited local resources, compute capacity and analytical capabilities. This approach has worked for decades, as evidenced by Java’s ubiquitous presence and adoption, but is now showing its age. Java runtimes now have the opportunity to thrive in modern cloud environments by casting off the legacy limitations and tradeoffs that were inherent in a pre-cloud world, but which are now unnecessary.

Today’s JVMs are isolated; they are built to run workloads using their own functionality. They are unaware of anything outside of themselves; JVMs have no memory of what happened before they started, they leave nothing behind for future runs, they are ignorant of what their neighbors or peers are doing, and they do not leverage intelligence or capabilities that their environment may have.

JVMs are also religiously self-reliant. Only local resources are used in running an application and in conducting whatever functions the JVM must perform. This approach necessarily involves tradeoffs. Resources used to run a given application compete with resources used to perform internal JVM functions designed to support the execution and improve application performance and efficiency over time.

“Magic clouds” didn’t exist in the 1990s, when today’s JVM architectures were solidified. At the time, JVMs were painfully aware that compute power was a highly limited resource, as were storage and memory. Nor was there external functionality for them to rely on; analytical capability and knowledge were limited to what a JVM could do on its own, and to information gathered since it started. JVMs could study behavior and code, analyze and optimize it, but only in the context of what the JVM itself had seen within a given run.

But these limitations are not inherent to JVMs. We live in the 2020s and can make assumptions that were unavailable when JVMs were originally designed: reliable connectivity to vast networks, and the availability of massive external resources and functionality. JVMs can now assume the ability to analyze and apply experience from past runs, and to leverage external experience: both benefiting from what other JVMs have learned and contributing their own experience so that others can benefit in turn.

Today, the “magic cloud” exists. When JVMs need to perform work beyond the actual execution of an application, such as activities designed to support or optimize that execution, virtually unlimited compute power, storage, memory, and experience are available for them to access and leverage. Cloud-centric Java runtimes are both practical and real, and they are available now.

A Cloud-Centric Approach to Java Code Optimization with Cloud Native Compilation

The assumption of elastic and abundant cloud resources allows us to re-evaluate which functions must be performed in isolation by the JVM and which can be outsourced to external resources. While various elements within the JVM lend themselves well to an outsourced approach, JIT (just-in-time) compilation is an obvious choice.

JIT compilers perform the heavy lifting of translating Java bytecode into optimized machine code. To do that, an optimization conversation between the JVM and the JIT compiler occurs for every “hot” method within the Java runtime. The conversation starts with the JVM asking the JIT compiler to optimize a method. The JIT compiler then interrogates the JVM about the code, iteratively asking questions about various facts and profiles, and requesting additional code for called methods so they might be analyzed and optimized together. The JIT compiler eventually produces code that it thinks is best suited to the current situation: optimized code that is hopefully better, faster, and more efficient. Today, production JVMs perform these optimizations using in-JVM JIT compilers.
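
To make the shape of that conversation concrete, here is a deliberately simplified Java sketch. None of these types exist in any real JVM; the names (CompilationRequest, JvmFacts, JitCompiler, and the toy Profile/MachineCode records) are invented purely to illustrate the iterative, query-driven exchange described above.

```java
// Hypothetical sketch only: these interfaces do not exist in a real JVM.
// They model the JVM <-> JIT "conversation" for a single hot method.
import java.util.Map;

record Profile(long invocationCount, Map<String, Long> observedTypes) {}
record MachineCode(byte[] code) {}

interface CompilationRequest {
    String methodName();   // the "hot" method the JVM wants optimized
    byte[] bytecode();     // its bytecode, handed to the JIT compiler
}

interface JvmFacts {
    // The JIT compiler interrogates the JVM for what it has observed so far.
    Profile profileFor(String methodName);   // branch and type profile data
    byte[] bytecodeFor(String calleeName);   // callee bytecode, so caller and
                                             // callee can be analyzed and
                                             // optimized (e.g., inlined) together
}

interface JitCompiler {
    // Iteratively queries `facts`, then returns the machine code it believes
    // is best suited to the behavior the JVM has seen in this run.
    MachineCode optimize(CompilationRequest request, JvmFacts facts);
}
```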

Better JIT’ing can and does produce faster code, and results from Azul’s highly optimizing JIT compilers demonstrate just how much faster workloads can run. When we turn powerful optimizations on in our Azul Platform Prime environment, we often produce code that is 50 to 200 percent faster for individual methods. This in turn translates into seriously faster applications, such as 20 and 30 percent faster performance for Cassandra and Kafka, respectively.

However, there is an inherent tradeoff between how optimized the code is and the cost of performing the optimizations. Better JIT’ing and powerful optimizations can produce much faster code, but they come at the cost of increased CPU usage and memory requirements, as well as increased time spent performing the optimizations. In a constrained environment, such as a container with only a few cores and limited memory, some levels of optimization may not be practical. The resources needed to produce more powerful optimizations may not be affordable, and the prolonged warmup they entail may be prohibitive. Such resource-constrained environments often forgo more powerful optimizations, leaving speed and efficiency on the table.
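
This tradeoff is visible in how JVMs are tuned today. As an illustration, stock HotSpot already exposes flags that trade peak code quality for lower JIT resource usage; operators of small containers sometimes launch with settings like the following (the flags are real HotSpot options, but the specific values are illustrative, not a recommendation):

```
# Illustrative HotSpot settings for a resource-constrained container:
#   -XX:TieredStopAtLevel=1        stop at the C1 compiler: cheaper, faster
#                                  warmup, but less optimized code
#   -XX:CICompilerCount=2          cap the number of JIT compiler threads
#   -XX:ReservedCodeCacheSize=64m  bound the memory reserved for compiled code
java -XX:TieredStopAtLevel=1 -XX:CICompilerCount=2 \
     -XX:ReservedCodeCacheSize=64m -jar app.jar
```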

That is the inherent tradeoff facing pre-cloud, self-reliant JVMs. But a cloud-centric approach to Java runtimes can eliminate the tradeoff by taking the JIT compiler out of the JVM and serving its functionality from a cloud-native compilation service: one that elastically scales up and down, is available to and shared by a multitude of JVMs, and can apply more resources to optimization than a single JVM ever could. Such a Cloud Native Compiler is ideal for efficiently and effectively producing highly optimized code.
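
The sketch below suggests how such delegation might look from the JVM’s side. To be clear, this is not Azul’s actual API or protocol; every name here (CloudCompilerClient, HybridCompilationPolicy, and the toy Profile/MachineCode records) is hypothetical, included only to illustrate shipping compilation work to a shared service while keeping a cheap local fallback.

```java
// Conceptual sketch only: NOT Azul's actual API or protocol.
import java.util.Map;

record Profile(long invocationCount, Map<String, Long> observedTypes) {}
record MachineCode(byte[] code) {}

interface CloudCompilerClient {
    // Ship the method's bytecode and the JVM's observed profile to the
    // elastic service; receive optimized machine code back.
    MachineCode compileRemotely(String methodName, byte[] bytecode, Profile profile);
}

class HybridCompilationPolicy {
    private final CloudCompilerClient remote;

    HybridCompilationPolicy(CloudCompilerClient remote) {
        this.remote = remote;
    }

    MachineCode optimize(String methodName, byte[] bytecode, Profile profile) {
        try {
            // Powerful optimizations run on the service's elastic resources.
            return remote.compileRemotely(methodName, bytecode, profile);
        } catch (RuntimeException serviceUnavailable) {
            // Degraded but safe: fall back to a cheap local tier.
            return compileLocallyAtLowTier(bytecode);
        }
    }

    private MachineCode compileLocallyAtLowTier(byte[] bytecode) {
        // Placeholder standing in for a minimal local JIT or interpreter path.
        return new MachineCode(bytecode);
    }
}
```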

Cloud Native Compiler: Optimization, Performance, Efficiency, and Cost Reduction

By shifting the heavy lifting of optimization to a scalable and efficient resource, a cloud-native compiler makes powerful optimizations both practical and affordable. These optimizations result in faster application code and improved JVM performance, which in turn translate to more efficient application execution and reduced cloud infrastructure costs for running a given workload.

A JVM instance will usually run for many hours, but it needs its full optimization capabilities for only minutes at a time. A JVM that uses constrained local resources for optimization must carry the resources needed to perform those optimizations for its entire lifetime, and with them the cost of those resources even when they are not in use. In contrast, when a cloud-centric JVM uses a cloud-native compiler, that compiler can amortize the same resources, sharing and reusing them as it performs optimizations for many different JVM instances. It can shrink its resources when they are not in use, dramatically reducing their cost, and can grow them on demand, even to massive amounts. The reuse efficiency, scale, and elasticity of a cloud-native compiler mean that resources can easily and economically be brought to bear on powerful optimizations, without the warmup or cost concerns that self-contained JVMs encounter.
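
A back-of-the-envelope illustration (the numbers here are hypothetical, not measurements): suppose 100 JVM instances each reserve two extra vCPUs so that heavy JIT compilation can run during a ten-minute warmup. Those 200 vCPUs are provisioned, and paid for, around the clock, even though each instance uses them for well under one percent of a 24-hour lifetime. A shared compilation service that spins up comparable capacity only while compilation requests are in flight, and releases it afterward, performs the same work while paying for only the minutes actually used.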

In addition, cloud-native compilation can leverage a key efficiency benefit not available to individual JVMs: the vast majority of code today executes more than once, runs many times across many devices, and tends to repeat the same tasks and profiles. In today’s pre-cloud JVMs, these common characteristics go unused because each JVM performs local, ephemeral optimizations that are then forgotten. A cloud-native compiler environment such as Azul Intelligence Cloud can reuse the optimizations themselves. When a JVM asks for an optimization of code that the cloud-native compiler has optimized in the past, the compiler can check whether the JVM’s environment and observed reality satisfy the assumptions made in previous optimizations. If they do, the same (cached) optimized code can be provided without recomputing the optimization. Further efficiency thus comes from reusing the work itself, not just the resources needed to produce it. When an optimization can be reused across many JVMs, the cost of producing it is dramatically reduced compared to what pre-cloud JVMs would pay to achieve the same optimization levels.
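
Here is a minimal Java sketch of that reuse check, under stated assumptions: this is not Azul’s implementation, and fingerprinting the optimizer’s “assumptions” as a single string (OptimizationCache, lookupOrCompile, and the CachedOptimization record are all invented names) is a deliberate simplification of what a real service would track.

```java
// Conceptual sketch only: not Azul's implementation. Cached optimized code
// is reused only when the requesting JVM's reality satisfies the assumptions
// (class hierarchy facts, type profiles, etc.) baked into that code.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

record CachedOptimization(byte[] machineCode, String assumptionsFingerprint) {}

class OptimizationCache {
    private final Map<String, CachedOptimization> byBytecodeHash = new ConcurrentHashMap<>();

    byte[] lookupOrCompile(String bytecodeHash,
                           String currentAssumptions,
                           Supplier<byte[]> expensiveCompile) {
        CachedOptimization hit = byBytecodeHash.get(bytecodeHash);
        if (hit != null && hit.assumptionsFingerprint().equals(currentAssumptions)) {
            return hit.machineCode();            // reuse: no recompilation needed
        }
        byte[] fresh = expensiveCompile.get();   // pay the compilation cost once...
        byBytecodeHash.put(bytecodeHash,
                new CachedOptimization(fresh, currentAssumptions));
        return fresh;                            // ...then amortize it across JVMs
    }
}
```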

A Cloud-Centric Mindset

Cloud-native compilation takes Java runtimes from a pre-cloud mindset to a cloud-centric one. It produces more powerful optimizations with greater efficiency. It produces faster code because it can afford to do so, while using fewer local resources: less CPU, less memory, and less time. Cloud-native compilation enables application developers to do more with less, and it can do so in today’s cloud-native environments, whether on the cloud itself or in a customer-managed environment such as Kubernetes.

Source: JAXenter