Business Resilience Stream

Monday, March 13, 2006

Housekeeping: slight name change

When I originally created this weblog, I was not sure whether "Business Resilience" or "Business Resiliency" would end up being used more often. In the intervening year or so, "Business Resilience" has definitely won out. So I decided to change the name of the weblog to "Business Resilience Stream". I am debating whether it is worthwhile to change the URL as well, but for now, I will stick with this one.

Thanks for reading.

Article Review: Integrating Business Continuity Criteria into Your Supply Chain

"Integrating Business Continuity Criteria into Your Supply Chain" by Geary W. Sikich is an excellent, excellent article. Clearly the author knows what he wants to say, and has thought for a long time about how exactly he is going to say it. He clearly outlines steps he believes are necessary for integrating business continuity into the procurement product and vendor selection and management. Indeed, the summary of the article is presented as:
Developing business continuity strategies and embedding business continuity processes into an organization's procurement process can enhance the organization's ability to actively assess and monitor vendor capabilities
Going further, Mr. Sikich also talks about possible implementation strategies and approaches, proposing an implementation in five phases:
* Phase 1: Assessment & Vendor Continuity Questionnaire -- deliverable: letter report with executive summary that will include discussion and recommendations based on the results of the review of essential elements of analysis (report)

* Phase 2: Procurement Integration (vertical/horizontal) -- deliverables: procurement management system, vendor business continuity management program and plan integration criteria guide (tools); and procurement management system, vendor business continuity management program and plan integration criteria guide training program materials (knowledge transfer)

* Phase 3: Monitoring & Enforcement -- deliverable: procurement management system, vendor business continuity management program and continuity plan integration criteria guide maintenance criteria (sustainability)

* Phase 4: Sustainability -- deliverable: periodic metrics, event response reports

* Phase 5: Maturity Model Evaluation -- deliverable: metrics for maintaining the process, change management procedures
In conclusion, the author exhorts senior management:
"Using their status as “leaders,” senior management and board members can and must deliver the message that survivability depends on being able to find the opportunity within the crisis."
and makes a claim, quite credible in my opinion, that:
Market research indicates that only a small portion (5 percent) of businesses today have a viable plan, but virtually 100 percent now realize they are at risk. Seizing the initiative and getting involved in all the phases of crisis management can mitigate or prevent major losses. Just being able to identify the legal pitfalls for the organization by conducting a crisis management audit can have positive results.
The forum for the article, Supply & Demand Chain Executive magazine, is also one of the primary targets for business continuity efforts. It would seem, however, that it is a more narrowly focused audience than that of overall risk management and business resilience. Clearly the lessons and thoughts expressed here can be applied across the whole enterprise, not just the procurement process.

One interesting point that I have not seen made much is the tiered structure of tactics, grand tactics, and strategy that applies to all the business-level components (logistics, finance, etc.) of a supply chain:

At the tactical level the focus is generally on event response and mitigation. The focus at the tactical level should be on response and mitigation while the need at the tactical level is for support from the next level (grand tactical). At the grand tactical level, the focus should be on support for the tactical response.

Additionally, at the grand tactical level the focus should be on the prevention of cascade and containment of cascade effects on the organization. At the strategic level the focus should be on management oversight, coordination and facilitation of restoration of services. It is important to note that a key element in this vertical and horizontal process of detection, classification, response, management, recovery and restoration is seamless communications. Seamless communication is based on the adoption of common terminology and in the functions represented at each level.

A good diagram is also presented. It would be interesting to see organizations model the interactions between tiers not just at a formal level, similar to the ones in the article, but also at a business-process, product, or supplier-specific level. The charts could look something like this for vendor X:

[3D chart: preparedness scores for each business component plotted across the tactical, grand tactical, and strategic tiers]

One could continue by developing baselines and profiles for different types of vendors and processes. Let's say an ideal situation would result in values of 100 for each component across all tiers. Given limited resources, one could set existing values based on present resources and processes, but also set appropriate risk profiles. We could then easily overlay the two charts to see where the organization is underperforming - or over-budgeting. Once applied consistently to all vendors, the approach would also make it easy to model the impact adding a new vendor would have on operational resilience, for better or worse. The 3D chart presented here can certainly be modeled as a pivot table in 2D. I feel that in this example it does help to show whether, for a particular component on each tier, the organization is well-prepared -- and how that preparedness changes from tier to tier.
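To make the overlay idea concrete, here is a rough sketch in Python - with entirely hypothetical scores, components, and vendor data - of how the target and current profiles could be compared programmatically rather than visually:

```python
# A rough sketch of the tier/component overlay idea: compare a vendor's
# current preparedness against a target risk profile and flag where the
# organization is underperforming (or over-budgeting). All values made up.

TIERS = ["tactical", "grand_tactical", "strategic"]
COMPONENTS = ["logistics", "finance", "communications"]

# Target risk profile: 100 would be the "ideal" value for every cell,
# but limited resources mean some cells get a deliberately lower target.
target = {(t, c): 100 for t in TIERS for c in COMPONENTS}
target[("tactical", "finance")] = 70

# Current scores for hypothetical vendor X, e.g. from a continuity questionnaire.
current = {(t, c): 60 for t in TIERS for c in COMPONENTS}
current[("strategic", "communications")] = 95

for tier in TIERS:
    for comp in COMPONENTS:
        gap = target[(tier, comp)] - current[(tier, comp)]
        if gap > 0:
            print(f"{tier}/{comp}: underperforming by {gap}")
        elif gap < 0:
            print(f"{tier}/{comp}: over-budgeted by {-gap}")
```

Run consistently across vendors, the same comparison would show how adding or dropping a vendor shifts the overall resilience profile.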

Monday, March 06, 2006

Virtualizing for Resiliency

A colleague sent me this article on virtualization today. It is not the first virtualization-related piece of information to come across my desk lately - there are also calls to customers, calls from vendors, and other pleasantries. The main point of the article concerns different strategies for increasing "yield" from a cubic foot of data center space.

The comparison to agriculture is apt, I believe, since for our society information generation, storage, and retrieval mirrors the concerns of agricultural societies in years and millennia past. Data centers are our fields and granaries, and the network is the road between our towns, fields, granaries, mills, and bakeries - replaced, respectively, by online communities, data centers, SANs, database and application servers, and web servers. What data center managers are going through now is similar to Frederick Jackson Turner's "closing of the frontier" thesis.

As a result of the closing of the frontier, several significant changes occurred. As the availability of free land was basically exhausted ... At the closing of the frontier, we entered a period of concentration -- of capital, as with monopolies and trusts -- and of labor, responding with unions and cooperation.

We can theorize that, as the opportunity to add thousands of square feet of data center space becomes exhausted, people actually have to turn to concentrating - or consolidating - their resources for more productivity. Similarly, with power expenditures for running the CPUs and disks, and for cooling them, rising in proportion to the density and amount of used space, and rising again as the cost per unit of power has increased by 50 percent or more over the last two years, managers had better be getting something worthwhile from all those boxes. Suddenly, it is no longer possible to just "add a box" to a rack. Like modern agriculture, the "yield" from all these machines must be watered with power and fertilized with efficient allocation and management.
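A back-of-the-envelope sketch in Python makes the consolidation math visible. Every number here is an assumption for illustration - the wattages, electricity rate, server counts, and cooling overhead are not from the article:

```python
# Annual power cost of many lightly-used boxes vs. fewer virtualized hosts.
# All figures are illustrative assumptions, not measurements.

HOURS_PER_YEAR = 24 * 365

def annual_power_cost(servers, watts_each, dollars_per_kwh, cooling_overhead=1.0):
    """Yearly cost to power a group of servers, plus cooling at the given overhead."""
    kwh = servers * watts_each / 1000 * HOURS_PER_YEAR
    return kwh * dollars_per_kwh * (1 + cooling_overhead)

# 100 standalone boxes at low utilization vs. 15 beefier virtualized hosts.
before = annual_power_cost(servers=100, watts_each=250, dollars_per_kwh=0.12)
after = annual_power_cost(servers=15, watts_each=500, dollars_per_kwh=0.12)
print(f"before: ${before:,.0f}/yr, after: ${after:,.0f}/yr")
```

Under these assumptions, consolidation cuts the yearly power-and-cooling bill by about 70 percent, before counting the floor space reclaimed.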

What does all this have to do with resiliency? One can generalize that a funny thing is taking place. As infrastructure and servers themselves become virtualized, a specific "machine" (if that term can be applied to a virtual machine running on a multicore, multiprocessor server with OS partitions using virtual CPU allocations over virtual network and disk I/O) will be transparently managed for service levels and failover. The difficulties of setting up clustering and multi-site failover will be left far in the past - except that new issues will take their place. An individual machine, or even data center, may become non-critical - but all the nice virtualization management software and hardware will become extremely critical. As "yields" increase, all applications will be considered critical - which means a new set of policies for determining service levels will need to be created.
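What might such policies look like? Here is a minimal sketch - the tier names, applications, and thresholds are all hypothetical - of service levels expressed as explicit failover behavior rather than a binary "critical or not" flag:

```python
# Hypothetical service-level tiers that drive concrete failover behavior.
# Every application gets a tier; nothing is simply "critical" or "not".

POLICY = {
    "platinum": {"failover": "automatic, cross-site", "max_outage_min": 5},
    "gold":     {"failover": "automatic, same site",  "max_outage_min": 60},
    "bronze":   {"failover": "manual restore",        "max_outage_min": 24 * 60},
}

applications = {"order-entry": "platinum", "web-analytics": "bronze"}

def failover_plan(app):
    """Describe how a given application's virtual machine should be recovered."""
    tier = applications[app]
    p = POLICY[tier]
    return f"{app}: {p['failover']} (max outage {p['max_outage_min']} min)"

print(failover_plan("order-entry"))
print(failover_plan("web-analytics"))
```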

I see an increasingly close relationship forming between data center owners, application-level specialists, and application management and user bases. Some or all of these entities might be partners of each other or of the end-user community. Hybrid installations, with one full data-and-application set hosted by the "client" and failover and backup sets hosted by various partners, will probably become the norm. The concept of "insourcing", as described by Yossi Sheffi in his book "The Resilient Enterprise", will become more and more common.

As I mentioned in a previous post about SaaS, disaster recovery and contingency policies will increasingly have to deal with service level agreements and resiliency among partners and suppliers as a core part of recovery and resiliency planning. Virtualization holds great promise for improvements in operational efficiency and enterprise resiliency, but a thorough adjustment of policies and expectations needs to take place before these gains can be realized.

Wednesday, March 01, 2006

Housekeeping note

A little housekeeping information on the blog. If you are subscribed to the RSS via Blogger's Atom feed for this blog, please consider changing the subscription to this feed (http://feeds.feedburner.com/BusinessResiliencyStream). I really like FeedBurner, and think they have a wonderful product. If I ever decide to leave the hospitable blogger.com, I would like to be able to take my feed and its subscribers with me. Theoretically, if you use the bloglines.com browser extension or something similar, it should point you to the FeedBurner URL as well.

Which brings me to the obligatory resiliency note. FeedBurner is a great example of a service I wrote about in my previous post - Software as a Service - a resiliency look. By using the service I am extending my own resiliency - creating an integration point in my publishing process that lessens my dependency on a single supplier (google/blogger). At the same time I am introducing a dependency on another partner. What did I look at before making such a decision?

One of the important considerations, obviously, was the relative stability of my new partner, FeedBurner. Stability is of course relative, but between the size of my publishing empire and FeedBurner's impressive growth and client portfolio, I felt fairly comfortable that they are not going to disappear overnight, leaving me with no recourse. Finally, the cost of having the FeedBurner service inoperative for 1-72 hours is pretty low for me right now. Similarly, I do not think the switching cost will ever be prohibitive for a blog of this size and [non] popularity. But things could easily have been different.

For example, some of the issues discussed in the "software as a service" post would remain for a larger enterprise. So if I represented one of them, the ability to host feeds on my own servers, or at least with URLs pointing to my own servers, would have been a priority. This way, in case of an outage my feeds would still be available, if not updated, and more importantly, I would be able to switch to a FeedBurner-like provider fairly easily - strongly discounting the risk of FeedBurner going out of business. Good technical architectures are often about creating explicit integration points based on future business requirements, even if no technical necessity for that point exists.
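That integration point can be as small as a redirect you control. A minimal sketch, assuming a hypothetical /feed path on your own domain (the FeedBurner URL is the real one from above):

```python
# Serve the subscription URL from your own domain and redirect to the
# current feed provider; switching providers later means changing one line.
from http.server import BaseHTTPRequestHandler, HTTPServer

FEED_PROVIDER = "http://feeds.feedburner.com/BusinessResiliencyStream"

class FeedRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/feed":        # the URL readers actually subscribe to
            self.send_response(302)     # temporary: the target is expected to change
            self.send_header("Location", FEED_PROVIDER)
            self.end_headers()
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("", 8080), FeedRedirect).serve_forever()
```

Readers subscribe to your URL; the provider behind it stays swappable.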

Monday, February 27, 2006

Software as a Service - a resiliency look

The starting point for these ruminations was this - "IBM Recruiting ISVs, Partners to SAAS":
Viewing the software-as-a-service market as a major new-growth industry, IBM is offering a package of services and incentives to help software companies and channel partners deploy their products as hosted applications.


IBM is looking for a wave to catch to vault it over SunGard and other, smaller companies specializing in hosting corporate backup servers and data. It is worthwhile, I think, to look at the general principle of software-as-a-service (SAAS). What are its implications from a resiliency and continuity perspective?


For starters, SAAS goes beyond the now well-understood Application Service Provider (ASP) model. ASP implies that an application, usually one which covers at least one complete business process, is hosted by a service company rather than an internal IT department. From a computing perspective there is often little difference. After all, most large and medium-sized companies today have widely distributed IT deployments, and most users do not know whether the web application they are using is coming to them from a data center 3 floors above or 3 thousand miles away. So what does it matter whether someone else is running a web server instead of your organization? Better-organized resiliency programs certainly take this outsourcing into account when creating plans, treating ASPs as critical vendors, just as they would a supplier of financial data or iron ore.

SAAS is a slightly different beast. One can think of an ASP provider as an implementation of SAAS, providing the "service" in SAAS in fairly large and monolithic chunks. But it need not remain that way. What if a SAAS provider is someone like the former Hitbox, providing a very specialized web analytics service, or Qualys, continuously searching your network for vulnerabilities? In both cases, data might be downloaded and analyzed by a tool hosted by some other third party, or internally. This software service is now provided as a small part of an overall business process, and may not even be known to the business unit as a component of the process that is provided by an outside vendor. To re-use the examples of services in this paragraph, we can consider the following scenario for web analytics:

The IT department provides traffic reports and analysis to all departments in the enterprise. Most likely, 90% of the departments could not care less about the accuracy and granularity of the results. Marketing, however, is an exception. While it carefully tracks website usage all the time, a day-long outage of analytics would not be a major problem unless it coincided with a test run of a new marketing campaign. At what point, and to which internal customers, should IT direct an awareness campaign about the outside vendors it is using at the moment? Once a service becomes part of the enterprise's services, its origin becomes largely transparent to the business-level consumers of that service. It is worth noting that for most services only a small number of users will have a critical need for it. How should vendors now be evaluated for reliability, and how should contracts be structured?

Previously, when a department wanted to use an ASP, both that business unit and IT would be involved in the evaluation process. However, SAAS will now allow both IT and business units to go it alone. That's where things can start falling through the planning cracks, since many of the services may not be part of the primary impact analysis process. In our scenario, Marketing may not be aware that web analytics is separate from web server maintenance, and IT may not know that its outsourced analytics service is critical to some group - in this case Marketing - 3 weeks out of the year.
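One way to keep such dependencies from falling through the cracks is to record them explicitly, including when each consumer's need becomes critical. A minimal sketch, with all service, provider, and department names hypothetical:

```python
# A small service-dependency registry: which external provider sits behind
# each internal service, and when each consumer's need for it is critical.
from datetime import date

dependencies = [
    {
        "internal_service": "web-analytics",
        "external_provider": "AnalyticsCo",   # invisible to business consumers
        "consumers": {
            # Marketing is critical only during a campaign test window.
            "marketing": {"critical_windows": [(date(2006, 4, 3), date(2006, 4, 24))]},
            "everyone_else": {"critical_windows": []},  # best-effort is fine
        },
    },
]

def critical_today(today):
    """List (consumer, provider) pairs whose outage would hurt on a given day."""
    hits = []
    for dep in dependencies:
        for consumer, info in dep["consumers"].items():
            if any(start <= today <= end for start, end in info["critical_windows"]):
                hits.append((consumer, dep["external_provider"]))
    return hits

print(critical_today(date(2006, 4, 10)))   # [('marketing', 'AnalyticsCo')]
```

Even a registry this crude would tell IT whom to warn before scheduling maintenance, and tell the impact analysis process which outside vendors matter when.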

Just as cheap Windows and Linux servers proliferated in workgroups a few years ago, cheap and transparent services will have a huge impact on how applications and business processes are assembled and executed in the future. Different providers may even be used for similar process steps in various locations or processes across the enterprise. How should customers reconcile their needs for efficient and cost-effective services with an increasingly flexible software services environment? One way, of course, is for an organization to forbid the casual use of outside software services and require that any allowed use go through a rigorous evaluation process for each service, with clearly identified IT and business-level integration points and a fully performed cost-benefit analysis. That would work to keep smaller service vendors out, but they are also the most innovative ones.


Another way is for someone like IBM to step in. Salesforce is already doing something similar with its AppExchange, and I think other players are gearing up. IBM has an advantage over Salesforce and others, such as SAP or Oracle, in that it has a much more independent platform. IBM can become, effectively, a guarantor of a service, whether it was developed by them or not. By providing the infrastructure, IBM can make sure the basic hosting things go well - service uptime, bandwidth, power, etc. Furthermore, IBM can host the same service in different configurations - critical for Marketing and delayed for other departments, for example. Its market power would allow it to require service vendors to certify their products for stability and scalability, removing for customers the uncertainty of dealing with a small and unknown entity. Organizations could then provide business rules allowing departments to use, or at least test, services, provided they comply with certain requirements - certified by IBM - and are hosted by a reliable vendor - such as IBM.

At some point the need for both a certified host and a certifying authority will become too strong not to produce a whole sub-industry. Currently, the Accentures and Deloittes of the world have the lead on certifying implementations (information security, for example). However, IBM already has a host of certification programs for its WebSphere Catalogue, as well as Ready for Virtualization and others relevant to organizational resiliency. Moreover, IBM has an ability that Accenture and its ilk lack: becoming insourced not only at the customer level, but at the vendor level as well. What that means is that vendors could develop services and solutions concentrating on their core competencies, rather than on peripheral requirements such as hosting an on-demand software service.


As someone who works for a small vendor, I know that I would not be very excited about having to build up a tremendous amount of infrastructure and support capabilities instead of further developing our product. We did what we needed to do for our customers, but the less we have to do of things we have no competitive advantage in, the more value-added activities we can engage in. I am sure many other vendors feel the same way, and I think a lot of customers would be much happier if they could both easily use innovative services and have world-class hosting support to guarantee the robustness of those services.

Saturday, February 25, 2006

A new beginning

I originally started this weblog hoping to have an informal outlet for thoughts and news regarding an industry I am involved in. That did not work out too well. I simply did not have the time to work on this weblog in addition to my direct work-related duties. Recently, I have gotten a mandate to write up more of my thoughts, and perhaps this is a forum where they can be expressed and refined through feedback and comments.