Wednesday, December 16, 2009


Enterprise integration is one of the most difficult IT problems to crack.

A while back I was lucky enough to be invited to a reception at The Gherkin by a leading vendor of integration software. At the reception the CTO of this company made a speech; though I expected this to be fairly positive and sales-oriented, I was stunned when he announced that “the integration problem is 90% solved”. Having worked continuously on large-scale integration projects for the last 8 years or so, I was filled with a sense of dread that after all there had been an obvious answer that I and the talented people I had been working with had somehow overlooked. Intrigued, I grabbed the CTO as soon as he had finished his speech and challenged him about this. To my opening “You’re wrong, we’re not even at 10% yet” he managed to muster that he was just talking about the technology plumbing underlying integration. That of course is a totally different issue, but this brief interlude highlights everything that is wrong with integration projects.

First and foremost, integration is a business problem, which needs to be owned and led by the business not the IT shop. Integration boils down to users attempting to conduct business while traversing system boundaries. Before you can understand how to plumb the systems together you need to understand the business processes involved. The problem with this is that understanding integrated business processes is very difficult. Moreover integration usually goes hand in hand with business transformation, so in parallel new business processes need to be agreed and integration aspects analysed and documented.

Integration projects go wrong when they are technology driven. A bunch of well-meaning techies (or even worse, mercenary consultants) tell senior managers that they can save £(name your amount) by providing “joined-up solutions”. The explanation is simple in management speak - “by joining up silos of data we make the business more efficient and agile, requiring fewer people to achieve greater productivity”. Who isn’t going to believe that? (Tony Blair did.) Joining up these silos means building business processes that can span the silos (otherwise you won’t get any improvement in efficiency). The net result is techies telling business people how to do their jobs - not renowned as a recipe for success.

Wednesday, November 25, 2009

Systems Integration

I have recently noticed with a number of clients an interesting phenomenon. A project starts life as an application roll out or an upgrade but morphs over time into a systems integration project. First things first, what do we mean when we talk about systems integration?

Systems integration projects have a number of characteristic properties:
  • There is a single holistic desired business outcome with an identified business sponsor accountable for achievement of this outcome;
  • The outcome cannot be achieved by delivery of a single isolated application but requires multiple applications integrated in some way, or a single application integrated with other back-end systems in some way;
  • There are multiple organisational entities involved in delivery of the overall system. These could be external entities such as multiple suppliers delivering to the client, or could involve multiple internal entities, particularly in larger organisations.
What is striking about these evolved systems integration projects is that by accident or design some of the basics of systems integration get ignored, leading to fundamental (and in some cases irresolvable) problems with systems delivery. Examples of these neglected basics include:
  • Having a clearly defined and baselined set of requirements under configuration and change management;
  • Having a prime who is accountable for end-to-end systems integration. The National Programme for IT is a classic example of how things can go wrong when there isn’t a prime - you can’t federate systems integration;
  • Putting in place mechanisms which allow individual suppliers to perform partial integration testing of their components. This could be for example provision of a sandpit environment or provision of test harnesses that suppliers can use in their own environments.
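To make the last point concrete, here is a minimal sketch in Python of the test-harness idea: a local stub that stands in for a shared back-end system, so a supplier can exercise their integration code in their own environment before entering shared integration testing. The endpoint and payload are invented for illustration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class BackEndStub(BaseHTTPRequestHandler):
    """A stub standing in for a real back-end system.

    A supplier runs this locally and points their integration code at it,
    allowing partial integration testing without access to the shared
    environment. Paths and payloads are hypothetical examples.
    """

    # Canned responses keyed by request path.
    CANNED = {
        "/patients/123": {"id": "123", "status": "ACTIVE"},
    }

    def do_GET(self):
        body = self.CANNED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep test runs quiet
```

Running `HTTPServer(("localhost", 8080), BackEndStub).serve_forever()` gives each supplier a predictable back-end to integrate against, at a fraction of the cost of provisioning a full shared sandpit.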
In future blogs I will return to some of these themes and explore them in more detail.


Someone recently introduced himself to me as an “expert” in his particular area. I was somewhat taken aback by this as in Britain at least this is not a term we use lightly. As I reflected on this I reached two diametrically opposite interpretations of this word:

  • Someone calling themself an expert is one of a handful of people in the world who knows all there is to know about this particular subject; if they aren’t able to answer a question (perhaps after going away to think about it on their own) no-one can.
  • Someone calling themself an expert has such a weak understanding of the subject in question that they aren’t even aware of all of their areas of ignorance. In all likelihood such experts will answer questions inaccurately if they can answer them at all.
Note that experts in the former category don’t need to describe themselves in this way - it is obvious.

Friday, May 08, 2009

An Audience with Michael Dell

I recently had the pleasure of hearing Michael Dell speak in a briefing session organised by BCS ELITE.

Michael started off by giving his views about industry trends and the current economic situation. In particular he spent quite a lot of time talking about cloud computing and how Dell are working with large technology organisations such as Amazon and Yahoo. He talked about the emergence of private clouds - cloud based solutions with limited shared infrastructure to provide some of the benefits of cloud computing while at the same time assuaging concerns about security and integrity of data.

After Michael’s presentation there was a Q&A session. The questions varied from “are you getting too close to Microsoft” to “what would you do if you weren’t running Dell”. Michael answered all of these questions head-on, without any kind of hesitation. What really came across very strongly was his deep passion for technology - he very clearly would rather spend time with his engineers than with his bean counters. In this respect his attitude was similar to Bill Gates’s, when I heard him speak last year (albeit in a more constrained forum). It was great to hear Michael speak - my congratulations and thanks go to BCS ELITE for organising this event.

Friday, March 13, 2009

The Enterprise in EA

One of the areas of contention in EA, in anything but the smallest organisation, is how to define the enterprise, i.e. the ‘E’ in ‘EA’. This may seem obvious - if you are in the business of manufacturing and selling widgets, then the enterprise is the business of manufacturing and selling widgets. However in these days of outsourcing, off-shoring and reconfiguration of value chains things are not so straightforward. Consider the following examples.

The first is Tesco. As a large and successful retailer its enterprise consists of taking products from suppliers and selling them to consumers via multiple channels (various store formats, online and catalogue). It provides back office functions in support of these activities. So what is the enterprise here? Are suppliers part of the enterprise? Are the different haulage companies used by Tesco part of the enterprise? Is the internal email system part of the enterprise?

As a second example consider a public sector organisation such as an NHS trust. It delivers care to patients, funded by the Department of Health. If it is a larger trust it may also undertake teaching and/or research. So does the enterprise in this case include the research systems? The teaching systems?

Finally, my own pet favourite example: a systems integration programme. Consider a prime contractor for the NHS National Programme for IT. A prime delivers the programme against the customer’s specification to the customer’s stakeholders. So in this case we have a confluence of the NHS enterprise, an individual trust’s enterprise and the prime contractor’s enterprise.

I think the first thing that becomes clear from these examples is the link between the enterprise and governance. In principle I could choose the enterprise to be anything I want, but it is meaningless if I have no ability to measure and influence conformance against my target architecture. This potentially means that the enterprise can be broader than a legal entity. For example Tesco could include elements of EA in the contracts that they agree with suppliers e.g. use of standardised interfaces for communications, common business processes etc. Conversely it also explains the common situation of organisations creating EA teams but not providing any governance mechanism, which leads to a team that produces lots of good ideas which are then largely ignored by the rest of the organisation.

My second observation is more contentious: if the enterprise picture is highly complicated with multiple powerful stakeholders who have divergent interests, there is no point trying to have anything other than a trivial enterprise architecture, because parochial stakeholder interests will always defeat a federated, consensus-based governance model. In short, if it is a complex stakeholder environment, see what emerges rather than trying to impose a centralised EA.

Friday, February 27, 2009

Requirements and IT programmes

Requirements in any kind of IT-based engineering programme are difficult. The bigger the programme, the more difficult they are. This may seem paradoxical in a world in which Google, Amazon and eBay are delivering high performance IT-driven businesses to massive audiences. Why is that?

I think there are three major reasons for this.

The first is the predilection for big-bang approaches in large programmes. This seems to be particularly prevalent in public sector programmes where IT is driving business transformation, such as Aspire, NPfIT and DII. A big-bang approach leaves no room for the agile idea of getting something working and then improving on it in stages. Google et al, conversely, typically introduce new products and services to limited audiences as betas (e.g. Gmail) and only gradually increase functionality until the service is stable enough for full release.

The next reason may be peculiarly British. The UK Office of Government Commerce has mandated the use of output-based specifications for procurement of large programmes. This is fine in itself, since the approach ensures focus is maintained on end-user business benefits. However output-based specifications are not engineering requirements, so delivery of a service cannot be measured against an OBS. Let’s consider an example to make this clear. The following requirement is from the OBS for the NPfIT, available on the Department of Health’s public web site.

The service shall be secure and confidential, and only accessible on a need-to-know basis to authorised users.

In the absence of precise definitions of “secure”, “confidential” and “need-to-know” this is a vacuous statement!

(Note that it may appear that I have selected this requirement as an extreme example in order to demonstrate the point, but in fact I chose this at random.)

I’m not suggesting that the requirements against which programmes are procured should go into engineering levels of detail, but conversely these requirements are inadequate as a basis for engineering delivery. The approach that I have seen at first hand, and which worked successfully, is to use such OBS-style requirements as the starting point for requirements elaboration. For a particular release of a service, the requirement should be elaborated to the degree that it is testable and implementable, and that elaboration should be agreed between customer and supplier as the basis for determining whether the requirement has been met. In long-term programmes the elaboration may alter over time (for example an encryption algorithm which was secure 10 years ago may not be secure today). The essence is that the OBS expresses the spirit of the requirements rather than the details; procurement requirements are not the same thing as engineering requirements.
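To make “testable and implementable” concrete, here is a hypothetical sketch in Python of one possible elaboration of the “need-to-know” clause: an explicit role-to-resource access rule. The roles and resource types are invented for illustration; the point is only that, unlike the OBS sentence, every case has an unambiguous expected outcome that can be tested.

```python
# Hypothetical elaboration of "accessible on a need-to-know basis to
# authorised users": an explicit, testable role-to-resource access rule.
# Roles and resource types are invented for illustration.
ACCESS_RULES = {
    "clinician":     {"clinical_record", "appointment"},
    "receptionist":  {"appointment"},
    "administrator": {"audit_log"},
}

def may_access(role: str, resource_type: str) -> bool:
    """Return True only if the role is explicitly granted the resource type.

    Anything not explicitly granted is denied, so for every
    (role, resource) pair the expected outcome is unambiguous.
    """
    return resource_type in ACCESS_RULES.get(role, set())
```

Once the requirement is in a form like this, customer and supplier can agree a table of (role, resource, expected outcome) triples and test delivery against it; the OBS sentence alone admits no such test.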

The final reason is the use of COTS packages for delivery of such programmes. The challenge is that the delivered requirements depend totally on the capabilities of the COTS package. This is most acute for business processes, since these tend to be intrinsic to specific packages. It is another reason why ideas from agile development cannot be reused. Also, picking up on the previous point, the vagueness of an OBS means that the customer and COTS vendor can have very different (but equally valid) interpretations of what a requirement means.

Is this a real problem? Well, judge for yourself the success of these large programmes...

Wednesday, February 18, 2009

Building Architecture vs Aeronautical Engineering

Building architecture is often used as an example of what IT architecture should aspire to. There are a number of reasons for this: for a start, the term “architecture” is normally associated with buildings, and has really been adopted by IT in parallel with the emergence of the new discipline of abstracting large-scale systems in order to be able to understand them. This close relationship with building architecture has been cemented by the work on architecture patterns, which takes as its starting point Christopher Alexander’s work on patterns in building architecture.

There are certainly some similarities between building and IT architecture. Both use tools of abstraction to manage complexity; for example building architecture uses plan and elevation drawings to understand the structure of buildings, and mathematics to understand how to construct buildings. IT uses architecture views (such as those of the TOGAF framework) to understand what needs to be built, and a variety of tools and processes in order to build these systems.

But what about after construction? How well does the metaphor hold then? I think at this point it falls apart; buildings are by and large static, and maintenance is typically restricted to superficial changes such as painting and decorating. It is unusual for buildings to go through fundamental reconstruction. On the other hand, IT systems are living beasts, subject to constant change of varying degrees. Sometimes it is addition of a minor feature or a new interface; other times it can be fundamental re-architecting of the entire system in order to accommodate new requirements, or in order to accommodate original requirements which were not properly understood.

In practice this means that in the case of building architecture, design documents and blueprints will gather dust post-construction, whereas for IT systems it is critical to have an up-to-date ‘as-maintained’ documentation set. (That said, in my experience organisations that do maintain such documentation are the exception rather than the rule.)

I think a better metaphor is aeronautical engineering, where the discipline involved in maintaining up-to-date documentation for in-life aircraft, associated systems and components is quite incredible. I was struck by this years ago when I worked on a project with Boeing re-engineering a tool they use - Wiring Illuminator - which helped maintainers to understand the individual wires and connected end-points in aircraft. Subsequently I worked on JSF where the life history of every single component was being maintained. Note that I am following well-trodden ground here: over 10 years ago Ross Anderson was pointing out that IT could learn much from the way that the aircraft industry deals with safety-related issues.

As the discipline of IT architecture develops I fully expect that the need to capture high quality ‘as-maintained’ documentation will be critical. Tim O’Reilly shrewdly observed that a key industry differentiator in the future will be service management organisations; I would add to that: it will be service management organisations who excel at ‘as-maintained’ documentation and baseline management.

Wednesday, February 11, 2009

What Should Go In The Cloud?

Cloud computing is all the rage. Vendors are falling over themselves to offer services from the cloud. Analysts are proclaiming that the cloud is the next big thing. So given that cloud based services provide economies of scale that most businesses can’t dream of, we should be pushing all of our services into the cloud rather than provisioning them ourselves, right?

Let’s consider an example from the last programme that I worked on. In that programme a national single sign on (SSO) solution was provided as an external service (in the cloud in fashionable parlance). This ensured a single identity across NHS organisations, allowing users to cross organisational boundaries with a single identity. Great idea. One minor problem: if for any reason this external service was unavailable, users were not able to log in to any of their applications, and users already logged in had their sessions terminated. Unavailability of that single service impacted all other business applications.
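The coupling can be sketched in a few lines of Python. Every application’s login path funnels through one remote check, so the availability of every application is bounded by the availability of the shared service. The class, token and application names here are hypothetical.

```python
class SsoUnavailable(Exception):
    """Raised when the shared sign-on service cannot be reached."""

class RemoteSso:
    """Stand-in for an external single sign-on service (hypothetical API)."""

    def __init__(self, available: bool = True):
        self.available = available

    def validate(self, token: str) -> bool:
        if not self.available:
            raise SsoUnavailable("SSO service unreachable")
        return token == "valid-token"  # stand-in for a real credential check

def open_application(app_name: str, token: str, sso: RemoteSso) -> str:
    # Every application funnels through the one shared service: if it is
    # down, no application can authenticate anyone.
    if not sso.validate(token):
        raise PermissionError(f"{app_name}: not authorised")
    return f"{app_name}: session opened"
```

With `sso.available` set to False, every `open_application` call raises `SsoUnavailable`, whichever application is being opened - exactly the behaviour described above.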

What seemed like a great idea at the time did not really stand up to scrutiny in practice. Of course hindsight is a great tool so I am not criticising the original design choice, but trying to learn from it. Using this example it is obvious that not everything should be in the cloud - operational considerations need to be traded off against financial benefits. So how do we decide what should go in the cloud and what we should deliver ourselves?

There are a number of dimensions to this. The first consideration is the business’s value chain. Any secondary activity in the value chain is a candidate for delivery via the cloud - HR systems, intranets and so on. What about primary activities? Instinctively these should be delivered internally. But if that is the case, how is it that hosted CRM services have been so successful? I think the answer is deeper: primary activities should be delivered from the cloud if the cloud can provide greater levels of quality and reliability than would be possible by delivering them internally. So for a large, mature organisation with a sophisticated IT operation, delivering CRM internally might make sense. For other organisations, CRM from the cloud might make sense even though this is a primary activity for the organisation.

Returning to my SSO example then, for those NHS organisations for whom SSO is too complicated a task it makes sense to deliver this from the cloud. For larger, more sophisticated NHS organisations, internal delivery of SSO might be appropriate. That just leaves the problem of interoperability...for a later blog!

Thursday, February 05, 2009

Massive IT Programmes

I have just recently changed jobs, joining Cognizant’s Advanced Solutions Practice, having spent the last three and a half years working for BT as Chief Architect on the NHS National Programme for IT. Moving on from that role has given me the chance to reflect a little on some of the challenges that I faced in that role.

The programme is frequently in the press, and has been labelled as a classic example of a failing IT programme. Though the press coverage has in general been ill-informed and inaccurate there have undoubtedly been problems with delivering the programme, for many reasons, which I will not get in to here. However some general observations can be made about massive IT programmes.

One of the greatest challenges in programmes such as this one is the sheer size of change involved in terms of both business and technology. The traditional programme and project management approach to dealing with the complexity that this scale brings is to follow a reductionist strategy, breaking the overall programme into smaller manageable parts. The difficulty with this is choosing how to slice and dice the large problem. Executed correctly this approach allows application of traditional programme management and systems engineering techniques to ensure delivery within acceptable parameters of cost, schedule and risk. The downside is that if the overall problem is divided incorrectly the small parts so obtained are as difficult to deliver as the overall programme. Moreover this approach assumes that such a division is possible.

What alternatives are there then? That is a difficult question to answer since this is really an embryonic and immature field. Historically the approach taken was to execute a small-scale pilot programme then scale this up to the size of the large programme, but that takes time and can cause loss of momentum. An alternative would be to take an evolutionary approach, similar to some agile approaches to software development: execute a solution with acknowledged flaws, and evolve this via a series of small iterations in to a solution that is ‘good enough’ to satisfy the key stakeholders of the programme.

Tuesday, January 20, 2009

Enterprise vs Solution Architecture

Following on from a previous blog, one of the things that I often see confused is the difference between enterprise and solution architecture. In particular I often see people confuse solution architecture with technical architecture, and enterprise architecture with solution architecture.

Wikipedia provides the following definition:
Enterprise architecture is a comprehensive framework used to manage and align an organisation's business processes, Information Technology (IT) software and hardware, local and wide area networks, people, operations and projects with the organisation's overall strategy.
The key elements of this definition are worth drawing out. EA is about providing a framework that helps to align business processes, IT and people with the overall strategy of the organisation. This is typically captured by considering business process, applications, information and technology as independent views of the enterprise, and then mapping out how they will evolve over time (e.g. via the use of roadmaps, technical strategies and reference architectures). This can be depicted graphically:

Solution architecture is about delivering a project at a particular point in time. In the happy days scenario governance structures are in place to ensure that solution architectures are perfectly aligned with the enterprise architecture, so we can think of each solution architecture as being a ‘snapshot’ of the enterprise architecture at a particular point in time. Again we can model this graphically:

The reality is rarely as clean as the happy days scenario. Governance is more typically a mechanism for identifying divergence from the enterprise architecture, rather than a means of enforcing it, since delivery projects are normally under massive pressure to do the bare minimum to ensure delivery, rather than think of the longer-term implications of the choices made. Solution architectures are thus often misaligned with the EA at best; in some cases they are totally at odds with the EA. This is shown below:

More on governance in the future...