IT Directions

Keith Harrison-Broninski

Big Processes 4 - Underpinning processes

Following on from my post Big Processes 3, I will now discuss the second type of Big Process - Underpinning.

Underpinning processes help IT become more flexible.

Of course, IT is not the only facility that "underpins" modern organizational work.  Other underpinning facilities include electricity, furniture, stationery, cleaning, catering, and so on. The reason I single out IT for attention is that, for the moment at least, most organizations have a Chief Information Officer (CIO) or Head of IT.  They do not usually have a Chief Electricity Officer or Head of Cleaning - although they sometimes used to.

For example, until the 1930s large companies typically had a "VP of Electricity".  Most don't anymore, because electricity is a reliable utility that the supplier takes care of, not the consumer.  The emergence of "electricity as a service" removed the need for companies to manage their own power sources (except for those few companies who play the electricity market).  In due course, "software as a service" will do exactly the same thing for IT.

The major obstacle in the way of true utility IT is the difficulty of integration.  Unless you purchase all your software from the same source, it can be very expensive to connect up the dots, and even if you are (for example) a pure IBM or Microsoft shop, there is still a lot of work to do to make your IT backbone perform optimally.

It is necessary to cut the Gordian knot, which in this case means recognizing that not everything needs integrating, and it may even be sensible to dispense with parts of your current infrastructure.  It can be hard to quantify the true cost of IT projects, but those who try often find that the Return on Investment of an integration effort is minimal, or even negative, once Total Cost of Ownership (TCO) is taken into account.  Paul Strassmann's analysis of US Department of Defense spending in FY2006 shows that, of total spending of $30.1bn, Information Infrastructure consumes 51% (roughly $15.4bn) - an overwhelming share of the IT budget:

"The entropy of the IT structure becomes apparent through an examination of the scope and funding for 4,121 IT projects planned for FY2006. The budgets for most of these projects are small with funding of less than $5 million each. Such projects can be devoted primarily to the perpetuating and upgrading of local solutions. As result the rising entropy in the system prevails because resources become consumed for maintenance and only marginally for improvements of organizations that were designed to deal with obsolescent processes."
http://www.archive.org/details/I.t.SpendingAsAMeasureOfOrganizationalDisorder

This view goes against the grain of thinking in IT for the last 30 years, which has been firmly focused on provision of a tightly integrated enterprise IT backbone.  Out of this thinking arose the BPMS, essentially a tool for joining up dots, whose greatest strength is also its greatest weakness - automation via Web services.  Web service integration not only requires specialist (and hence expensive) expertise, but is fraught with danger - one incorrect service call, repeated enough times, can put you out of business.  Time and trouble must be taken to avoid mistakes (especially since many BPMS products do not come with engineering-quality testing tools), and when mistakes occur, they are costly.
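
To make the risk concrete, here is a minimal sketch (in Python, with a hypothetical payment_service client - the names are mine, not any BPMS API) of the defensive checks an automated integration step needs.  Without them, a single bad parameter is replayed by the process engine on every instance:

```python
import uuid

def submit_payment(order, payment_service):
    """Call an external payment web service defensively.

    'payment_service' is a hypothetical client object; the point is
    that an automated process step repeats this call for every
    instance, so one incorrect call is amplified thousands of times.
    """
    # Validate before calling out: an unchecked error here would be
    # repeated for as long as the process keeps running.
    if order["amount"] <= 0:
        raise ValueError("refusing suspicious amount: %r" % order["amount"])

    # Idempotency key: if the engine retries after a timeout, the
    # service can detect and ignore the duplicate request.
    key = order.setdefault("idempotency_key", str(uuid.uuid4()))

    return payment_service.charge(
        account=order["account"],
        amount=order["amount"],
        idempotency_key=key,
    )
```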

Web service integration is justified, and can bring huge advantage, where processes are repeated a great number of times without change (as in the newsroom platform example from my previous post, where transcoding and distribution activities are set up once then repeated ad infinitum).  However, applying BPMS tools to a flexible process is using an elephant gun to shoot a fly - you are more likely to damage the furniture than to hit the rapidly moving object of your attention.
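
The newsroom case fits automation precisely because the pipeline is fixed.  As a rough sketch (the step names are illustrative, not the actual platform), such a process is just a list of steps, configured once and then applied unchanged to every item:

```python
# A fixed pipeline, set up once and repeated ad infinitum - the kind
# of unchanging process where web-service automation pays off.
PIPELINE = [
    ("transcode", {"format": "h264"}),
    ("distribute", {"channels": ["web", "mobile"]}),
]

def run_pipeline(item, services):
    """Apply each configured step to 'item' in order.

    'services' maps step names to callables - hypothetical stand-ins
    for the real transcoding and distribution web services.
    """
    for step, params in PIPELINE:
        item = services[step](item, **params)
    return item
```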

A similar argument applies to the emerging field of Adaptive Case Management (ACM).  Paul Harmon describes ACM as concerning "rule-based or agile workflow systems" that depend "on dynamic planning, 'Tasks' ('templates') and rules":

 "The knowledge worker about to undertake the specific case considers a list of tasks and decides which he or she will use for this specific case, and in what order the tasks will be attempted. In other words, one of the first tasks in the case involved planning the tasks and tentative sequence for the specific case ...The structures of the tasks themselves are largely based on the use of rules ... Adaptive Case Management suggests how the rule-based techniques might take over and provide developers with tools that make it easier to model and automate knowledge structures and knowledge-based tasks."
http://bptrends.com/publicationfiles/07-06-10-BR-Adaptive%20Case%20Mang-%20Swenson1.pdf
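
Harmon's description amounts to a simple data model: a library of task templates with rules attached, and a case whose plan the knowledge worker assembles and reorders by hand.  The sketch below is my own illustration of that idea in Python, not any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class TaskTemplate:
    """A reusable task with attached rules (illustrative only)."""
    name: str
    rules: list = field(default_factory=list)  # (condition, consequence) pairs

@dataclass
class Case:
    data: dict
    plan: list = field(default_factory=list)  # ordered TaskTemplates

    def add_task(self, template, position=None):
        """The knowledge worker chooses which templates to use and in
        what order - planning the case is itself the first task."""
        if position is None:
            self.plan.append(template)
        else:
            self.plan.insert(position, template)

    def run_next(self):
        """Execute the next planned task by evaluating its rules."""
        task = self.plan.pop(0)
        for condition, consequence in task.rules:
            if condition(self.data):
                consequence(self.data)
        return task.name
```

Every rule added to a template is one more thing to maintain - which is exactly where, as discussed below, the approach struggles to scale.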

As with the BPMS, the strength of an ACM system is also its weakness.  In this case the information support provided to knowledge workers by a rule engine is balanced by the complexity of maintaining that engine.  Harmon points out that this "return to knowledge-based techniques that predominated in the Eighties" is a "significant step" that may well provide some much-needed support for knowledge workers, but:

"I do not believe [ACM systems] can be scaled to deal with really complex problems - for the same reason that expert systems failed - because the rule maintenance problems would be too expensive. I do believe, however, that the time is right to apply these techniques to extending BPMS tools for use with processes that include tasks that depend on knowledge. Moreover, as these applications illustrate, the tools only work if there are knowledge workers to plan each case and adjust the tasks and choose among the options offered by the ACM tools. In other words, we are always talking about a Decision Support tool here rather than a fully automated solution."

In a sense, the ACM system provides Communities of Practice in software form - sources of knowledge and experience on which the practitioner can draw when needed to solve specific problems. In fields such as aerospace, the value of Communities of Practice is well recognized - along with their limitations.  Advice from peers may well help a structural engineer pin-point a flaw in a rotor blade, but it won't help the project manager responsible for a new jet engine co-ordinate the efforts required to test and resolve issues in a system with over 32,000 variables, where each issue may have aspects whose resolution requires structural, system, materials, operational, safety, and other specialists to work together.  Similarly, a financial analyst may find it very helpful to have support in modeling the details of a merger, but the work itself - everything from market repositioning to product rebranding to organizational restructuring - is huge, messy and beyond the scope of any rule engine.

ACM and BPMS vendors agree (http://bit.ly/acm-panel) that to provide IT support for large-scale dynamic work processes - in other words, to turn on a tap that provides utility IT across the entire organization - the nature of that IT support must be radically simplified.  Flowchart diagrams with their swim lanes and decision gates are out, for one thing, since these are only suitable for programmers.  Rules with their conditions and consequences are also out, since their complexity scales exponentially with the size of the process.  Rather, it is necessary to focus on the 5 principles of Human Interaction Management (HIM), which make it possible to provide fast, cheap IT (see the sketch after the list below):

  1. Effective team building
  2. Structured communication
  3. Knowledge creation
  4. Empowered time management
  5. Collaborative, real-time planning
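
As a thought experiment only - this is my own mapping, not the HumanEdj schema - the five principles suggest a structure far simpler than flowcharts or rule bases: little more than roles, the messages they exchange, the knowledge they produce, and activities whose owners schedule and replan them:

```python
from dataclasses import dataclass, field

# Illustrative mapping of the five HIM principles onto a minimal
# structure - an assumption for exposition, not a real product schema.

@dataclass
class Role:                      # 1. Effective team building
    name: str
    user: str

@dataclass
class Message:                   # 2. Structured communication
    sender: str                  # Role names
    recipient: str
    subject: str

@dataclass
class Activity:                  # 4. Empowered time management
    description: str
    owner: str                   # owners reschedule their own work
    deadline: str

@dataclass
class Plan:
    roles: list = field(default_factory=list)
    messages: list = field(default_factory=list)
    knowledge: dict = field(default_factory=dict)   # 3. Knowledge creation
    activities: list = field(default_factory=list)  # 5. Replanned collaboratively, in real time
```

Nothing here requires a programmer to understand - which is the point.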

I recently ran a 2-day HumanEdj workshop for a public sector organization.  Before lunch on the second day, I asked the attendees to bring to the afternoon session descriptions of the most complex and troublesome processes in their organization.  Starting at 2pm, we looked over these and chose the largest and most labyrinthine: a huge process for planning customer services that spanned multiple departments.  This process had taken weeks to document, producing documents and diagrams of such complexity that few of the workshop attendees could understand at first how it was supposed to work.

By 3:40pm, we had entered the process into HumanEdj (although none of the attendees were technically oriented) and produced an executable Plan whose operation was obvious at a glance, and that everyone was keen to use in practice.  By removing complexity that was due only to the use of inappropriate tools and techniques, we had cut the Gordian knot.  In less than 2 hours, we had provided cloud-based IT support for a large-scale dynamic process without the need for any specialist expertise whatsoever.

[HumanEdj is available free]
