Five Ways to Skinny Down Mainframe Costs: Part 1

It is no secret that many companies would like to leverage their mainframe to help drive new business initiatives and/or make their operations more efficient. These integration efforts often stumble, fall, or don't even get started because the tenets of mainframe integration are misunderstood. This in turn leads to sub-standard results, often accompanied by increased costs.

The mainframe integration problem can be broken down into its basic parts, which will help enterprises build up a simplified, successful strategy that will save money. We have discovered five ways that will ensure success with integration projects and -- perhaps more importantly -- reduce costs while delivering better results.

The mainframe integration battlefield primarily consists of two distinct sides -- the mainframe systems we've depended on for years and years, and the new applications, initiatives, and delivery channels.

Our tried and true mainframe applications were built in technologies like CICS, IMS, IDEAL, VSAM or even applications like Hogan. And then came the new world, with mobile and web applications, or business applications from companies like SAP or Oracle. How the two integrate or interact can be the difference between success and failure.

When one of these new systems needs to access functionality from the mainframe we can think of it as a question-and-answer session. For example, perhaps a mobile banking application needs to ask a question like, "provide me all the account balances and recent transactions for a customer." And now it's up to our existing mainframe applications to answer that question.

The problem is that the question typically cannot be easily answered, as the mainframe systems could never have anticipated the question in the first place. So, they simply aren't prepared to answer it, and therefore, we have a mismatch.

Integration is all about solving that mismatch.

How do we get the mainframe systems to receive and understand our question, and provide an appropriate answer to resolve this mismatch?

The standard answer to this question today is Web services, which unarguably provide a great, commonly accepted way for things to connect. The key word is connect. Because at their core, just saying "Web services" is only stating we have a standard way to receive the question and offer an answer. But Web services on their own say nothing about the ability to understand the question, and certainly nothing about providing an appropriate answer.

It's basically like trying to ask someone in Italy a question, and assuming that because we both have a standard telephone connection we can communicate. This is exactly where the cost of mainframe integration is buried, in both soft and hard costs. This article discusses five basic ways to solve the mismatch, deliver the right result, and save money doing it.

This opportunity can be addressed in five phases, each highly dependent on the previous.

1. Defining the service. The question we want to ask, and the answer we wish to receive. In Web service vernacular, this is simply how we define the WSDL.

2. Assembling the service. How and where do we put this work together? This is where a lot of hard costs come into play, as it relates to the cost of mainframe processing.

3. Deployment. How will we implement the Q&A session? That is, what systems do we need to interrogate, and how do we package the answer the requestor was looking for?

4. Time to deliver. How do we get the work done quickly and efficiently? This must not take a lot of time. The truism that time is money applies in software more so than in any other area of business.

5. Flexibility to change. The minute you start any project, things start changing; so, what's the consideration, implication, and cost of not effectively being able to adapt?

As with most processes, the first step is the most thorough and requires the most preparation, as it lays the foundation for the rest.

Defining the Service

It all starts with the WSDL, which essentially defines our question and answer: what the consumer expects, and the form in which the provider -- the mainframe -- will answer.

For example: what does a mobile banking app need to present meaningful account information to a customer? What does that stock trading web app need to enable trading? Or, what data does my new ERP system need to populate its internals?

Typically, we'll either build or be forced to use a WSDL that represents our specific question -- say, a web service that retrieves customer account details based on a customer number the new application provides.
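To make that concrete, here is a minimal sketch of what such a top-down WSDL might look like. All names and the namespace are hypothetical, invented purely for illustration; a real service contract would be shaped by what the consuming application actually requires:

```xml
<!-- Hypothetical top-down WSDL sketch; names and namespace are illustrative -->
<definitions name="CustomerAccountService"
    targetNamespace="http://example.com/bank"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:tns="http://example.com/bank"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <types>
    <xsd:schema targetNamespace="http://example.com/bank">
      <!-- The question: just a customer number -->
      <xsd:element name="CustomerNumber" type="xsd:string"/>
      <!-- The answer: account details shaped for the consumer -->
      <xsd:element name="AccountDetails">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="AccountId" type="xsd:string"/>
            <xsd:element name="Balance" type="xsd:decimal"/>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:schema>
  </types>
  <message name="GetAccountDetailsRequest">
    <part name="body" element="tns:CustomerNumber"/>
  </message>
  <message name="GetAccountDetailsResponse">
    <part name="body" element="tns:AccountDetails"/>
  </message>
  <portType name="CustomerAccountPortType">
    <operation name="getCustomerAccountDetails">
      <input message="tns:GetAccountDetailsRequest"/>
      <output message="tns:GetAccountDetailsResponse"/>
    </operation>
  </portType>
</definitions>
```

Note that nothing in this contract says anything about which mainframe programs or files will ultimately supply the answer -- that is precisely the point of defining the service top down.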

Mainframe systems simply couldn't have known these questions were coming. So, tools that take a single COMMAREA transaction and simplistically generate a WSDL based on its original copybook simply won't cut it. Often, more than a single transaction is required to even begin to answer the question, which is covered in step 2.

Additionally, we need to consider that new applications fully exploit the XML spec. That is, they will expect to leverage a wide array of XML data types, and maybe even industry standard schemas like IFX for banking or ACORD for insurance, as they use Web services to ask their questions.
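As an illustration of what "fully exploiting the XML spec" means in practice, consider the kind of typed, variable-length structure a modern consumer will expect. The element names below are hypothetical, loosely modeled on the transaction-history portion of the banking example:

```xml
<!-- Illustrative schema fragment; element names are hypothetical -->
<xsd:element name="Transaction">
  <xsd:complexType>
    <xsd:sequence>
      <!-- Proper XML data types, not everything-as-a-string -->
      <xsd:element name="PostingDate" type="xsd:date"/>
      <xsd:element name="Amount" type="xsd:decimal"/>
      <xsd:element name="Currency" type="xsd:string"/>
      <xsd:element name="Memo" type="xsd:string" minOccurs="0"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
<!-- Consumers expect a true variable-length list -->
<xsd:element name="TransactionHistory">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element ref="Transaction" minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```

Industry schemas such as IFX or ACORD go much further than this fragment, but the principle is the same: the structure is dictated by the consumer's domain, not by any mainframe record layout.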

Moreover, to reiterate, many new applications already have their WSDL defined; that is, they have their questions and expected answers scripted out already. The ability or inability to support these requirements and be adaptable has many implications and associated costs.

If you choose to use a technology that automatically generates Web services "bottom up" -- that is, from existing mainframe transactions -- you have decided to essentially ignore what the new system requires. In a sense you are stating: "Regardless of what you need to ask me, here's what I have to say to you."

Furthermore, many commonly used mainframe data types may not even be supported with these tools.

For example, the IBM Web service utilities for CICS will generate a WSDL that essentially mirrors 100% of the fields in the copybook, and don't support common copybook constructs like Occurs Depending On or Redefines. This means you will be recoding your transactions to even deploy the most basic services.
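To see why mirroring the copybook hurts, here is a hypothetical sketch of the kind of interface a naive bottom-up generator could emit. The element names are invented, and actual tool output varies by product and release; the point is the shape, not the specifics:

```xml
<!-- Hypothetical bottom-up output: the copybook layout leaks straight into the WSDL -->
<xsd:element name="DFHCOMMAREA">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="CUST-NO" type="xsd:string"/>
      <xsd:element name="TRAN-COUNT" type="xsd:short"/>
      <!-- An Occurs Depending On table flattened to a fixed block:
           every slot travels on the wire whether it is used or not -->
      <xsd:element name="TRAN-ENTRY" minOccurs="50" maxOccurs="50">
        <xsd:complexType>
          <xsd:sequence>
            <xsd:element name="TRAN-DATE" type="xsd:string"/>
            <xsd:element name="TRAN-AMT" type="xsd:string"/>
          </xsd:sequence>
        </xsd:complexType>
      </xsd:element>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```

A Redefines clause is even worse off: there is no clean XML equivalent for two overlapping interpretations of the same storage, so a generator typically forces you to pick one -- which is exactly why recoding the transaction so often follows.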

With a solution like this, you'll find yourself only halfway home, and will probably still have a mismatch between what the requestor wants, and what you can provide. The only difference is now instead of mismatching between a web service and a mainframe app, the mismatch will be between a web service and a web service.

Minimal progress -- at best.

This means buying more software to resolve the new mismatch, increased time, and more moving parts as you progress.

Finally, taking this approach resigns you to single-step services -- that is, hoping that questions can be answered by a single program or system. All of this speaks to the costs of not being able to answer questions directly, working in a top-down fashion.

So let's assume I want to be direct, that I want to give you what you need and am committed to using or producing the correct WSDL. That starts to move into step 2…

Assembling the Service

Let's say you have a new Oracle application that requires customer account details, provided you supply a customer number, as in the example WSDL shown in graphic #2. On the left side are the systems required to answer the question: CICS green screens for address info, balance info in VSAM files, and IMS and CA IDEAL applications for transaction history.

The question is how to deliver access that resolves the mismatch and answers the question. The implications and costs here get pretty obvious, but in real life they get buried, as these tools and architectures tend to cross different departments and skill sets. So we're adding additional layers, creating many new web service mismatches, and having to resolve them by wrapping WSDL with WSDL.

If we don't deal with this issue effectively, we will be buying too much software, building services with little to no reuse potential, adding soft costs in the form of significantly longer development and maintenance, and hard costs in terms of additional software, hardware, and processing.

Let's assume that we can build our services one way or the other. The next step is the one that's easiest to calculate in terms of cost: how and where we deploy. And deployment speaks to many options:

  • Should you deploy on the mainframe?

  • Can you leverage specialty engines to save money -- how about Linux on System z to incur zero processing costs?

  • What about off-platform altogether?

Each tool or technology selection will bring with it specific features and limitations that will dictate the options in terms of architecture.

As stated, the implications here are massive.

Editor's note: The second half of the article will be posted on Thursday, July 29th.

About the Author

Robert Morris is Chief Strategy Officer at GT Software, where he is responsible for the planning, integration, and marketing of GT Software product solutions to the global market. Prior to GT Software, Mr. Morris held a variety of sales, marketing, and product management positions at industry leaders KnowledgeWare, Forté Software, ClientSoft (now NEON systems) and Jacada. Mr. Morris also holds a Board position on the Integration Consortium. He has an extensive background in application development and integration including experience with CASE methodologies, distributed systems, as well as midrange and mainframe environments. Mr. Morris speaks frequently at customer and industry events including Gartner Symposium, Java One, IBM Common, IBM Transaction and Messaging, and IBM CICS and IMS.

More by Robert Morris