
BPM: Theory to Practice

Tim Huenemann

Workflow Naiveté


I came across a situation recently that reminded me of an experience from almost 20 years ago, back when BPM only meant beats per minute. Our team had just finished developing a homegrown workflow framework, built to be a platform for workflow and "incident tracking" applications. It had typical workflow features, with configurable workflows, dynamic work queues, etc.

Once the framework was complete, we moved on to building the first application. We were eager to prove out the flexible workflow framework and see what the business would want! As we worked with the business managers to design their workflow processes, the flexibility got a bit out of control. We rapidly had "follow-up" and "manager" queues all over the place and dozens of paths through the process.

Luckily, somebody concluded that things were too complex. We simplified the process design and moved on to implementation. But we still ran into problems. As the system came together, we realized we still didn't have a complete understanding of how the "cases" would flow through the system: how long would a case normally take? Where could it get stuck and sit too long? How many times might we communicate with a customer? What was missing from our plan? End-to-end thinking, with documented process goals that would guide our design.

We expanded our perspective and reworked the process design. After making some changes, we started planning for system testing. I thought that since the framework underneath was solid, we could focus testing on the discrete actions: the user tasks and system tasks that moved things through the flow. As long as we had validated the end-to-end behavior of the system at design time, we would be fine putting most of our testing effort into the discrete tasks and very little into end-to-end testing. Guess what? This was a mistake, too. We weren't prepared for all the combinations of business rules and configuration settings that could make a "medium complexity" process execute with hundreds, even thousands, of possible paths. Sure enough, we had to test the system far more thoroughly than expected before it finally worked in a way that satisfied the business.

What did we learn, luckily for me early in my career?
1. Don't create complicated processes to handle every alternative path or exception that might come up. Just because a workflow system (or BPMS) will let you rapidly create a complex network of work doesn't mean you should.
2. Simple isn't good enough if your process doesn't work end-to-end and meet all the objectives. Don't congratulate yourself on a new simplified process until you've done some thoughtful end-to-end analysis.
3. Rules-based behavior enables the rapid explosion of complexity. Just because the software technically works doesn't mean your implementation will work as predicted or desired. I've seen systems with over 1,000 rules that are very fragile and need constant regression testing. Yes, they are easy to change, but often hard to test.
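The path explosion in lesson 3 is easy to underestimate. As a rough back-of-the-envelope sketch (my own illustration, not a tool from the original project): if a process has a number of independent decision points, the count of distinct end-to-end paths grows exponentially with that number.

```python
# Hypothetical sketch: each independent decision point in a process
# multiplies the number of distinct end-to-end paths by its branch count.
def count_paths(decision_points: int, branches_per_point: int = 2) -> int:
    """Upper bound on distinct end-to-end paths through a process
    whose decision points are independent of one another."""
    return branches_per_point ** decision_points

# A "medium complexity" process with 10 independent two-way decisions
# already allows up to 1,024 distinct paths; at 12 decisions, 4,096.
print(count_paths(10))  # 1024
print(count_paths(12))  # 4096
```

In practice not every combination is reachable, but even a fraction of these paths is far more than a test plan focused on discrete tasks will cover, which is why the end-to-end testing effort surprised us.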


This blog offers a true “practitioner’s perspective,” with issues and commentary based on real-world experience across many industries.

Tim Huenemann

Tim Huenemann is the senior principal for business architecture and process management at Trexin Consulting. He has more than 20 years of experience in process management and business-focused IT. In his consulting work, he helps organizations execute business strategy by implementing effective process management and IT solutions. He regularly translates BPM theory into practice, and practice, and more practice. Contact Tim at tim.huenemann[at]trexin.com.
