Taming the SOA Tiger With CEP (Part I of II)

Part One: Knowing the Beast



One of the most complex and dramatic characters in a circus is the animal trainer, who faces a bevy of threats when dealing with a strong and wily beast like a tiger. Think of the tiger as an example of the circus's raison d'être. In information technology, the tiger is that most promising platform for applications: Service Oriented Architecture (SOA).

The tiger is a dynamic, powerful, complex living organism comprising integrated systems, just like a SOA environment. Applications are the lifeblood of a business. You wouldn't let your toddler wander around in the cage with an unmanaged tiger; why would you risk your business by letting your applications run in an unmanaged SOA environment?

The tiger (the SOA environment) is dependent on the trainer for its care, feeding, and well-being. The trainer (the user) depends on the tiger to behave in expected, predictable, and consistent ways. If there's a hint of inconsistency in the tiger's behavior or reactions, the trainer had better be able to spot it and react fast, or lose body parts.

If the tiger has been distracted by ladies in large straw hats or sudden loud noises in the past, it's crucial for the trainer to know this. If a straw-hat lady walks in or a thunderstorm comes up, he must be prepared to deal with these events proactively, combining real-time observation with historical trend data to take preventive action before the tiger runs amok.

While information systems are the lifeblood of an enterprise, even the seemingly simplest of applications (like buying or selling shares of stock) may depend on many different streams of information. Applications must handle these streams not merely efficiently, but intelligently and within the appropriate compliance requirements.

The value of information decays with age, as measured in fractions of a second. More importantly, it is also valued according to its source. Both the timeliness and the source of information depend upon a complex stream of raw data from trading partners, RSS news feeds, financial data from news sources and the Federal Reserve, and perhaps even weather information.

What type of scenario might incorporate this variegated input of information? One that is strikingly current is commodities trading, say in petroleum products. The trading application may be operating perfectly, at 100% effectiveness: the system running well, accurately, and on time. Transactions are completed within the specified limits, keeping the trader in compliance with regulations, keeping the customer happy, and meeting service-level agreements for all parties. Traders get an order, complete it, and report it. Fait accompli, job well done.

Now imagine that a tsunami, cyclone, or rogue wave capsizes one or more supertankers. With no warning, tens of thousands of barrels of crude go off the market. Unaware of the tragedy or its extent, traders continue apace. Some hours later, rumors surface: some traders begin to act, stories spread, trading cascades, and the system is overwhelmed. Servers are overloaded; trades are delayed. Thousands of trades cannot be completed by the time the exchange closes, triggering alerts to customers and to the oversight authority (the SEC); hefty fines are imposed for being out of compliance. Besides the financial penalties, reputations are harmed.

By comparison, traders with an information dashboard drawing on other applications would fare much better. Connected to news feeds, including weather alerts from NOAA or the National Weather Service and RSS feeds from business partners, they instantly digest the news, gear up extra servers, and get a gauge on the anticipated cascade. They are then able to provide early warning to both customers and the SEC, mitigating the surprise, customer anger, and oversight ramifications.
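What might such a feed hookup look like in practice? The following is a minimal Java sketch of a watcher that fetches a weather feed and flags severe-weather items. The feed URL and keyword list are illustrative placeholders, not a real NOAA endpoint; a production system would parse the RSS XML properly and route matching items onto the enterprise message bus rather than printing to the console.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Minimal feed-watcher sketch. The URL and keywords are
    // hypothetical placeholders, not a real NOAA endpoint.
    public class WeatherFeedWatcher {
        private static final String FEED_URL =
                "https://example.com/noaa-alerts.rss"; // hypothetical
        private static final String[] KEYWORDS = { "tsunami", "hurricane", "cyclone" };

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(FEED_URL)).build();
            String body = client.send(request,
                    HttpResponse.BodyHandlers.ofString()).body();

            // Naive scan: a real system would parse the XML and publish
            // matching alerts onto the message bus for correlation.
            String lower = body.toLowerCase();
            for (String keyword : KEYWORDS) {
                if (lower.contains(keyword)) {
                    System.out.println("ALERT: feed mentions '" + keyword
                            + "' -- notify traders and pre-provision capacity.");
                }
            }
        }
    }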

The Island Scenario
If business were an isolated phenomenon, then all the applications running would be operating perfectly. But in the real world, events outside the business, or events that have nothing to do with IT directly, can change the business world in an instant.

To continue with a real example, the trading environment is set up to use all its resources, monitoring both internal systems and external events, to run ideally. The system could be running at 100% and meeting SLAs, but an external event can cause prices to skyrocket, bringing heavy trading in oil stocks that ripples through to airlines, other transportation industries, and perhaps farming, cascading all sorts of trades through other exchanges.

If the NYSE doesn't learn of this until a half-hour before the close of trading, and traders discover they have 500,000 trades still in flight, heavy penalties ensue for failing to submit everything to the Fed as completed trades. If we could monitor the event, or know about it in advance, we could use operational monitoring to fix the problem in real time and mitigate the impact to an acceptable level of exposure.

This cascade renders a system that had been operating fine, at 100%, inadequate and in regulatory peril. If the systems were provisioned well enough in advance, which SOA enables through messaging among internal and external applications, they would have hours to adjust expectations and to know whether they could complete the trades. They might be able to re-provision and complete more trades. Notification to the Fed could come earlier; penalties would be mitigated, if not eliminated. Timely, pertinent information would minimize the impact.

This is not a fantasy scenario; it is part of everyday business. Multiple sources of information are part of the decision process. This is a fairly recent phenomenon, born of the Internet age and exacerbated by web services, which so often depend on SOA.

While much incoming information is important and worthwhile, there is still too much to handle. Unlike the metaphorical tiger, however, the information glut has a solution: we turn to automation itself as the response to complexity.

Complex Event Processing (CEP) software can whip the information maelstrom into a benign enterprise resource. CEP is a framework that can reduce the time, cost, and risk of certain decisions. In a complex world, individual data points or streams are not adequate; they are point solutions in a world requiring a system view through a single gauge: a dashboard.
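To make that "single gauge" idea concrete, here is a minimal sketch in plain Java, rather than any particular CEP product, of reducing a stream of trade-latency events to one dashboard reading over a sliding one-minute window. The event shape and the numbers are assumptions for illustration only.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Many raw events in, one gauge out: a sliding one-minute window
    // over trade-latency events reduced to a single dashboard reading.
    public class LatencyGauge {
        private static final long WINDOW_MS = 60_000;
        private final Deque<long[]> window = new ArrayDeque<>(); // {timestampMs, latencyMs}

        public void onEvent(long timestampMs, long latencyMs) {
            window.addLast(new long[] { timestampMs, latencyMs });
            // Evict events that have aged out of the window.
            while (!window.isEmpty()
                    && timestampMs - window.peekFirst()[0] > WINDOW_MS) {
                window.removeFirst();
            }
        }

        public double averageLatency() {
            return window.stream().mapToLong(e -> e[1]).average().orElse(0.0);
        }

        public static void main(String[] args) {
            LatencyGauge gauge = new LatencyGauge();
            gauge.onEvent(1_000, 40);
            gauge.onEvent(30_000, 55);
            gauge.onEvent(70_000, 300); // the t=1s event ages out here
            System.out.printf("avg latency over last minute: %.1f ms%n",
                    gauge.averageLatency()); // prints 177.5
        }
    }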


There's Information, and Then There's Information

Three types of information need to be overseen, correlated, and managed (a sketch of all three follows the list):
* Internal infrastructure metrics: the state of the environment and events related to the underlying infrastructure (ESB, messaging systems, J2EE environment, databases, etc.).
* Internal application metrics: information that comes from the applications themselves and from the transactions flowing over the infrastructure.
* Internal and external business events. This last type is the most frequently ignored or overlooked, owing to technological shortcomings.
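As promised, here is a minimal Java sketch of those three categories, assuming a shared event envelope so that one correlation engine can consume all of them. The record fields are illustrative, not a prescribed schema.

    import java.time.Instant;

    // A common envelope so one engine can correlate all three categories.
    interface BusinessEvent {
        Instant timestamp();
        String source();
    }

    // 1. Infrastructure metrics: ESB, messaging, J2EE containers, databases.
    record InfrastructureEvent(Instant timestamp, String source, String component,
                               double queueDepth) implements BusinessEvent {}

    // 2. Application metrics: the transactions flowing over that infrastructure.
    record ApplicationEvent(Instant timestamp, String source, String transactionId,
                            long latencyMs) implements BusinessEvent {}

    // 3. Internal and external business events: the category most often overlooked.
    record ExternalEvent(Instant timestamp, String source, String headline)
            implements BusinessEvent {}

    public class EventTaxonomyDemo {
        public static void main(String[] args) {
            BusinessEvent[] feed = {
                new InfrastructureEvent(Instant.now(), "esb-1", "trade-queue", 12_000),
                new ApplicationEvent(Instant.now(), "trade-app", "TX-42", 850),
                new ExternalEvent(Instant.now(), "noaa-rss",
                        "Tsunami warning: Pacific basin"),
            };
            for (BusinessEvent e : feed) {
                System.out.println(e.source() + " @ " + e.timestamp() + ": " + e);
            }
        }
    }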

The internal application is familiar to everyone: it's what we do, it's the business process, the reason we use a computer in the first place. Office Accounting might be an early example. Tools are available to improve the performance of these basic, independent applications, but tools are also needed to provision the application with critical, timely information.

Higher-tier performance management requires something like Complex Event Processing: a technology enabler, the secret sauce that lets users do things they otherwise could not. For example, CRM and MRP are suites of applications that take information from several internal systems, oftentimes interfaced to supplier or business-partner systems. Such capabilities make CEP a valuable part of systems management and a key to unlocking Application Performance Management. With the advent of SOA and web services, CEP becomes more and more important as a way to manage complexity.

Then there are systems that rely heavily on information from external sources as well as internal ones. The earlier tidal-wave example shows the importance of external information management. Early insight into a pending emergency can enable a company to adjust prices, be proactive on the trading front, or even alert authorities to the advisability of suspending trading. If a system is hooked into news and weather feeds, as commodity trading systems should be, then it is better prepared.

The most important requirement for CEP is to correlate external events with internal processes. Much as we'd like to think that strategic planning, a solid vision, and good management allow us to control our own fate, no business is its own master. It is subject to the whims of external forces as wide ranging as the regulatory environment, the weather, the fancy of investors, and the agility of competitors.
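A minimal sketch of that correlation requirement follows, again in plain Java with an invented rule: if an external supply-shock event arrives and the count of in-flight trades crosses a threshold within a 30-minute window, the two data points fuse into one composite alert. The window length, threshold, and event names are assumptions, not part of any product.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.List;

    // Correlate an external event with internal process behavior
    // inside a time window, producing one composite alert.
    public class EventCorrelator {
        private static final Duration WINDOW = Duration.ofMinutes(30);
        private static final long PENDING_TRADE_THRESHOLD = 100_000;

        private final List<Instant> recentShocks = new ArrayList<>();

        public void onExternalShock(Instant when, String headline) {
            recentShocks.add(when);
            System.out.println("external event: " + headline);
        }

        public void onPendingTradesSample(Instant when, long pendingTrades) {
            // Drop shocks that have fallen out of the correlation window.
            recentShocks.removeIf(
                    shock -> Duration.between(shock, when).compareTo(WINDOW) > 0);
            if (pendingTrades > PENDING_TRADE_THRESHOLD && !recentShocks.isEmpty()) {
                System.out.println("COMPOSITE ALERT: " + pendingTrades
                        + " trades in flight within " + WINDOW.toMinutes()
                        + " minutes of an external shock -- notify customers"
                        + " and the SEC early.");
            }
        }

        public static void main(String[] args) {
            EventCorrelator correlator = new EventCorrelator();
            Instant t0 = Instant.parse("2007-06-01T14:00:00Z");
            correlator.onExternalShock(t0,
                    "Supertanker capsized; crude supply disrupted");
            correlator.onPendingTradesSample(t0.plus(Duration.ofMinutes(20)), 500_000);
        }
    }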

If you can't bring in external information and correlate it with what your applications are doing, there could be at least a partial collapse of the system. The applications may be "working" fine, but with incomplete information the systems are not delivering what is required or expected of them. Businesses need to be nimble, adjusting their processes to outside influences: reacting quickly to the straw-hat woman or the thunderstorm.

Check in next week for the second installment, in which Richard Schreiber discusses how to implement CEP successfully.

About the Author

Richard Schreiber is Vice President of International Operations, Strategic Alliances, and Corporate Marketing at Nastel Technologies, Inc. He has more than 30 years' experience conceptualizing and selling complex software and system solutions to senior management in Global 2000 companies, covering integrated infrastructure solutions such as SOA, EAI, and the emerging discipline of application performance management, as well as messaging, middleware systems, and data security. He has been an executive at leading hardware and software companies including DEC, Data General, Prime, and RAD/componentized-application pioneer Seer Technologies.
