Standards-Based Caching Solutions for the Enterprise

It's all about data distribution.



If you were delivering Web applications a year or more ago, your choice of technologies and platforms was limited, and those technologies were fairly hard to use. Anyone who had to come to grips with Perl and CGI will remember how complicated life was.

Today, the Java 2 Enterprise Edition (J2EE) platform has matured to deliver a range of easy-to-use, standards-based technologies, making it faster and easier to deploy Web applications. But J2EE is only the first step in delivering Web applications. Ensuring that your Web application scales is often a bigger challenge than deploying it in the first place.

A Web site that succeeds in attracting visitors can be swamped with requests. Poor performance at this point will decrease the visitors' overall satisfaction and may lead them to leave the site. Once they have gone, there is little hope of attracting them back.

At the end of the day, good Web site performance depends on efficient data distribution. Even in a transactional system, 90 percent or more of server interactions are about data distribution--reading and formatting data--since most of the time visitors to Web sites are reading rather than transacting. So it is extremely important to ensure that readers are satisfied and that reading is both fast and efficient.

The J2EE application platform. Over the past couple of years, the J2EE platform has become a common choice for delivering Web applications. The introduction of JavaServer Pages (JSP) and servlets, which can run alongside Java Database Connectivity (JDBC) in a Web server environment, has made the three-tiered Web application model easier to deliver and portable across operating systems. What this adds up to is faster time-to-market for Web applications.

EJB and the N-tier architecture. The adoption of Enterprise JavaBeans (EJB) technology, which effectively wraps data content up into business components, was a major feature of J2EE 1.2. It enabled developers to build reusable business components managed by the EJB server, providing a better separation of concerns: presentation is handled by the Web server, business logic by the EJB server and data management by the database. This model still holds for the majority of application builders.

Latency issues. Unfortunately, the J2EE solution has inherent performance problems that have their roots in the latency between the layers. The latency between browser and Web server is essentially tied to the threading model within Web servers, which dedicates a thread to each request: the thread you are allocated remains blocked until the request returns.

Because of this latency between the Web server and the EJB application server--and given that much of it involves access to read-only information--it pays to build a cache and so (most of the time) avoid an expensive trip to the EJB server. This, in turn, ensures that browser-to-Web server latency is kept to a minimum and not compounded by the traffic out to the EJB and onward to the database.

Figure 1: Latency Issues in J2EE Architecture

In the same way, the latency between EJBs and the database can be reduced by ensuring that the EJB doesn't have to make unnecessary trips to the database to retrieve unchanged data and by ensuring that changed information is forwarded to the EJB-resident database cache.

All this has a bearing on the overall scalability of the system and affects the user experience. These factors have an impact on revenue (poor performance turns customers away) and on cost, since poor scalability drives up the amount of hardware required.

Techniques to Increase Performance

Of course, there are many ways to improve perceived performance. Common approaches are to:

  • Reduce/remove unnecessary workload
  • Spread workload across multiple resources
  • Hide workload from the user
  • Redesign the application to reduce the effective workload

The last of these is really a matter for specific application design, but we can see how to implement the first three approaches using JMS and caching.

Queuing with JMS MDBs

One of the easiest ways of hiding workload is to place it onto a queue and have it processed in the background. This pattern has been used since the dawn of the computer age (or at least since the invention of Job Control Language for prioritizing mainframe jobs) and has now made its way into J2EE 1.3 in the guise of Message-Driven Beans (MDBs), EJB components that consume messages sent through the JMS API.

Simply send a persistent message, and you can be sure that the background processing will--eventually--happen: the client waits only for the message to be recorded, and the EJB server deals with scheduling an MDB to receive the message and execute the required work. Because the client doesn't wait for the MDB to complete, the front end is unblocked.
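
To make the pattern concrete, here is a minimal sketch using the standard javax.jms and EJB 2.0 APIs. The JNDI names, the queue and the order-processing payload are illustrative assumptions rather than features of any particular product:

    // Client side: enqueue the work as a persistent JMS message and return at once.
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class OrderSubmitter {
        public void submitOrder(String orderXml) throws Exception {
            InitialContext ctx = new InitialContext();
            // JNDI names depend on how your server is configured.
            QueueConnectionFactory factory =
                (QueueConnectionFactory) ctx.lookup("jms/QueueConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

            QueueConnection connection = factory.createQueueConnection();
            try {
                QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                TextMessage message = session.createTextMessage(orderXml);
                // PERSISTENT delivery means the work survives a broker restart.
                session.createSender(queue).send(message, DeliveryMode.PERSISTENT,
                    Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE);
            } finally {
                connection.close();
            }
        }
    }

    // Server side: an EJB 2.0 message-driven bean picks the message up later.
    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    public class OrderProcessorBean implements MessageDrivenBean, MessageListener {
        private MessageDrivenContext context;

        public void setMessageDrivenContext(MessageDrivenContext ctx) { context = ctx; }
        public void ejbCreate() { }
        public void ejbRemove() { }

        public void onMessage(Message message) {
            try {
                String orderXml = ((TextMessage) message).getText();
                // ... do the slow work here, long after the client has returned ...
            } catch (Exception e) {
                // With container-managed transactions, this triggers redelivery.
                context.setRollbackOnly();
            }
        }
    }

The client's only synchronous cost is recording the message; everything after that happens on the EJB server's schedule.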

Figure 2: Asynchronous Requests with JMS

This approach has a further benefit in that the JMS software should be able to use MDBs to load-balance requests across a farm of EJB servers--delivering requests using a round-robin or least-loaded receiver algorithm--so the workload is spread across multiple resources.

Caching to Reduce the Load

However, the most effective way to reduce unnecessary workload is to use a cache. Deep down, all programmers know this. Sometimes they think they are using one (perhaps the database cache), but it is often too far away from the application to be effective. JCACHE, the Java Temporary Caching API--a specification making its way through the Java Community Process as JSR 107--can help Java developers create data caches. The JCACHE specification provides a standard way to cache Java objects in-process, making caches faster to implement and portable across JCACHE implementations.
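
To give a flavor of the API, the sketch below creates and uses a cache through the javax.cache interfaces. The specification was still evolving at the time of writing, so treat the exact class and method names as indicative; the cache name and the five-minute expiry are arbitrary choices for illustration:

    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.MutableConfiguration;
    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;

    public class ProductCacheExample {
        public static void main(String[] args) {
            CacheManager manager = Caching.getCachingProvider().getCacheManager();

            // Entries expire five minutes after creation; store-by-reference
            // avoids serialization for a purely in-process cache.
            MutableConfiguration<String, String> config =
                new MutableConfiguration<String, String>()
                    .setExpiryPolicyFactory(
                        CreatedExpiryPolicy.factoryOf(Duration.FIVE_MINUTES))
                    .setStoreByValue(false);

            Cache<String, String> cache = manager.createCache("products", config);

            cache.put("sku-42", "widget");
            String hit = cache.get("sku-42");   // served from memory, no database I/O
            System.out.println(hit);
        }
    }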

By introducing caching at the Web server and ensuring that the EJB application server publishes updates to the cache, you further enhance the generic architecture. Frequently accessed data is cached; as the cache "warms up," the frequency of access (directly or via EJBs) to the data servers declines. Remember that at least 90 percent of activity on Web-based applications is read-only, so most requests for data are satisfied immediately (without I/O), and latency is hugely reduced.
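
The update-publication half of that architecture might look something like the following sketch, in which the EJB tier announces each changed object on a JMS topic to which the Web-tier caches subscribe. The topic name and the use of ObjectMessage are assumptions made for illustration:

    // Sketch: the EJB tier announces changed objects on a JMS topic so that
    // subscribing Web-tier caches can refresh (or invalidate) their copies.
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class CacheUpdatePublisher {
        public void publishUpdate(java.io.Serializable changedObject) throws Exception {
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory factory =
                (TopicConnectionFactory) ctx.lookup("jms/TopicConnectionFactory");
            Topic topic = (Topic) ctx.lookup("jms/CacheUpdates");

            TopicConnection connection = factory.createTopicConnection();
            try {
                TopicSession session =
                    connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                TopicPublisher publisher = session.createPublisher(topic);
                // Non-persistent delivery is usually acceptable here: a missed
                // update just costs one extra trip back to the database.
                publisher.publish(session.createObjectMessage(changedObject),
                    DeliveryMode.NON_PERSISTENT,
                    Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE);
            } finally {
                connection.close();
            }
        }
    }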

Severely reducing the traffic to the application server and database substantially diminishes the size and cost of the installation--the cash savings can be considerable once you take into account the CPU-based pricing of database and application server licenses. Conversely, it increases the amount of traffic that a given installation can support while ensuring that user satisfaction is protected.

Figure 3--taken from a real application--shows how caching reduces the number of database accesses.

Figure 3: Cache Response Tests

Without a cache, the number of database reads increases linearly with the number of users. With a cache, two things happen:

  • Each user stops rereading data, producing an (application-specific) linear reduction in the number of database hits.

  • Across multiple users, the more users there are, the more they tend to be reading the same data, so the number of database hits grows more slowly than linearly. If 100 users all request the same catalog page, for instance, the cache turns 100 potential database reads into one.

Multitier Caching Example

In this example, a hardware load-balancing device--a sort of specialized router--is used to mediate between the incoming request, which arrives at one IP address, and the farm of Web servers. A cache is needed because the load balancer can cause a user session to migrate between multiple application server virtual machines (VMs) at the same IP address.

When the browser initiates a request that requires data, the Web server looks for the data in the top-tier (VM) cache. If it isn't there, it invokes its cache loader to locate the needed data. In the case of a JMS cache loader, a request message carrying a reply-to address is sent to the midtier cache, which attempts to resolve the reference and send back the information.

Figure 4: Cache Loading in a Multitier Cache

If necessary, the same thing happens down the hierarchy until a cache that can resolve the item is located. The cache loader for the "bottom" cache does a database lookup and publishes the data on the appropriate channel. All the caches subscribing for the same object receive the data; in this case, a single midtier cache gets it. That cache then republishes the data to all the VM caches on its box, which receive it because they all use a more generic topic. The VM caches on other boxes don't get it because they aren't connected to the same local JMS bus.
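
A cache loader built on the request/reply pattern just described might look roughly like the sketch below. The destination names, the five-second timeout and the blocking receive are illustrative assumptions rather than SpiritSoft's actual implementation; a production loader would also need connection pooling and error handling:

    // Sketch: on a cache miss, ask the midtier cache for the object over JMS
    // and block (briefly) for the reply.
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class JmsCacheLoader {
        private final QueueConnection connection;
        private final Queue midTierQueue;

        public JmsCacheLoader() throws Exception {
            InitialContext ctx = new InitialContext();
            QueueConnectionFactory factory =
                (QueueConnectionFactory) ctx.lookup("jms/QueueConnectionFactory");
            midTierQueue = (Queue) ctx.lookup("jms/CacheLoadRequests");
            connection = factory.createQueueConnection();
            connection.start();   // needed before any receive() will deliver
        }

        /** Ask the midtier cache to resolve a missing key; null means "not found". */
        public Object load(Object key) throws Exception {
            QueueSession session =
                connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            try {
                // A temporary queue serves as the reply-to address for this request.
                TemporaryQueue replyTo = session.createTemporaryQueue();
                TextMessage request = session.createTextMessage(key.toString());
                request.setJMSReplyTo(replyTo);
                session.createSender(midTierQueue).send(request);

                // Block until the midtier (or a lower tier) resolves the reference.
                QueueReceiver receiver = session.createReceiver(replyTo);
                ObjectMessage reply = (ObjectMessage) receiver.receive(5000);
                return (reply == null) ? null : reply.getObject();
            } finally {
                session.close();
            }
        }
    }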

At first glance, the temptation is to ask "Why do I need a midtier cache?" The answer is quite simple: the midtier makes possible a technique called "traffic shaping," which enables us to manage traffic flow based on the type of traffic and on which application or database is its intended destination.

Conclusion

The bottom line is that Web application performance depends on efficient data distribution. Since almost all server interactions involve data access, making that access fast is crucial to overall application performance.

Because of the latency that exists between the Web server and the underlying data--and given that much of that access involves read-only information--it pays to build a cache and avoid time-consuming round trips. By reducing repetitive traffic between the different layers of a Web application, you substantially diminish the size and cost of the installation and greatly enhance the system's responsiveness.

Caching, combined with open standards-based technologies such as JMS, can help get the best value out of your existing infrastructure for Web applications.

This is an excerpt from SpiritSoft's "JMS & JCACHE: Standards-Based Caching Solutions for the Enterprise and Web Services." The full paper can be downloaded at http://www.spiritsoft.com/download_files/registration.asp?downloads=sbcs.

About SpiritSoft

SpiritSoft develops open standard enterprise messaging-based technologies and tools that enable dynamic business interactions across diverse applications and devices. The company's SpiritArchitecture is a complete integration platform for building, deploying and managing distributed systems. SpiritSoft's open, standards-based approach leverages JMS and XML technologies to enable users to seamlessly integrate with legacy technologies and any proprietary middleware. SpiritSoft's technology delivers platform-independent multiplug messaging, universal caching, multichannel delivery and dynamic event management, which ensures that the right information is delivered to the right place at the right time. More information is available at www.spiritsoft.com.
