Tuesday, December 21, 2010

The Assumption of Independence in the Financial Systems Failure

I have been spending a lot of time recently reading some of the plethora of books that have been published which either provide inside accounts of the 2008 failures in the banking sector (Lehman Brothers and Bear Stearns) or have tried to analyse the causes of this failure. Though I’m not an economist by any stretch of the imagination, having studied financial strategy I have developed something of a morbid fascination for this topic, a little along the lines of an episode of Columbo where you know the outcome but the interest comes from finding out how Columbo will prove the perpetrator’s guilt. In the case of the 2008 crash it is common knowledge that it was caused by bankers taking risks purely to maximise their own bonuses, isn’t it?

Going beyond the superficial mass-media level reveals something slightly more interesting. There was certainly unjustifiable risk taking, but this in itself ought not to have caused the systemic failure that occurred. One of the early failures which toppled the first dominoes was the collapse of two hedge funds dealing in derivatives run by Bear Stearns. Right up until the point at which these funds were liquidated, the fund managers were maintaining that the funds were diversified and that their exposure to subprime mortgages was therefore limited. However, subsequent investigation showed that in fact over 70% of the cash invested in these funds had been spent on mortgage-backed derivatives. This matters because it is a key tenet of investment strategy that funds should be diversified, so that losses in one area are compensated by gains in others. Diversification fails as a strategy when losses in one area trigger losses in another, i.e. even though a portfolio may appear diversified, there may be dependencies between its holdings. This is what happened in 2008: losses in subprime triggered losses in other areas, leading to large-scale failures in supposedly resilient diversified funds.
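The effect of such hidden dependencies is easy to demonstrate. The sketch below is my own illustration (with made-up probabilities, not market data): it compares a portfolio of ten assets whose failures are genuinely independent with one whose failures are all driven by a single shared shock. Each asset has the same marginal risk in both cases.

```python
import random

# Illustrative simulation of diversification under independent vs
# correlated failures. All probabilities are invented for the example.
random.seed(42)
TRIALS = 100_000
ASSETS = 10
P_FAIL = 0.1  # marginal probability that any one asset suffers a loss

def big_loss_rate(correlated: bool) -> float:
    """Fraction of trials in which more than half the portfolio fails."""
    big_losses = 0
    for _ in range(TRIALS):
        if correlated:
            # One shared shock (think subprime) hits every asset at once.
            failures = ASSETS if random.random() < P_FAIL else 0
        else:
            # Each asset fails independently of the others.
            failures = sum(random.random() < P_FAIL for _ in range(ASSETS))
        if failures > ASSETS / 2:
            big_losses += 1
    return big_losses / TRIALS

print(f"Independent failures: {big_loss_rate(False):.5f}")  # ~0.0001
print(f"Correlated failures:  {big_loss_rate(True):.5f}")   # ~0.1
```

With independent failures the chance of losing more than half the portfolio is vanishingly small; with a shared shock it is simply the probability of the shock itself. Diversification only ever protected against the first case.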

So why bring this up in a blog supposedly devoted to technology? Having the memory of an elephant, I was reminded of a couple of papers that were published in the 80s. The first, The N-Version Approach to Fault-Tolerant Software, looked at how software risk could be massively reduced by copying the idea of hardware redundancy in software, a technique known as n-version programming. Basically the idea was that for a high-integrity system the software should be independently written several times, and control logic would then execute all the versions in parallel, following the majority vote at each decision point. This was followed up by another paper, An Experimental Evaluation of the Assumption of Independence in Multi-version Programming, which challenged the hypothesis at the heart of n-version programming – that a failure in one program would be independent of a failure in another program. This is a reasonable assumption in hardware, since failures are typically caused by physical characteristics rather than common design flaws. However, this latter paper demonstrated empirically that it was not a safe assumption for software, and therefore n-version programming was dead.
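To make the n-version idea concrete, here is a minimal sketch of my own (not code from either paper): three independently written implementations of the same specification are run on the same input, and the majority answer wins.

```python
from collections import Counter

# Minimal n-version majority voting (illustrative only). The three
# "versions" stand in for independently developed implementations
# of the same specification, here absolute value.

def version_a(x: float) -> float:
    return abs(x)

def version_b(x: float) -> float:
    return x if x >= 0 else -x

def version_c(x: float) -> float:
    return (x * x) ** 0.5  # a deliberately different approach

def majority_vote(x: float) -> float:
    """Run every version and return the result the majority agrees on."""
    results = [version_a(x), version_b(x), version_c(x)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority - the versions all disagree")
    return value

print(majority_vote(-3.0))  # 3.0, even if any one version were faulty
```

The scheme only adds safety if the versions fail independently: the second paper’s experiment showed that separately written programs tend to make correlated mistakes on the same difficult inputs, so a majority can confidently vote for the wrong answer.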

Fast forward 25 years and what do we see? The 2008 crash was effectively the result of dependent failures in a system which assumed failures were independent. Spooky, eh? Normally software mimics life, but in this instance software technology seems to have got there first!

Friday, July 30, 2010

Open Source Confusion

I have dabbled with open source for many years, both as a user and briefly as a developer. I personally like the idea that if there is a problem with the software I can fix it myself rather than having to wait for the vendor to fix it. I have therefore been following with interest some recent thoughts about open source, in particular in two forums.

The first is the Department of Health’s decision not to continue its enterprise-wide agreement with Microsoft, which has triggered some discussion about open source. Coincidentally, the latest issue of IT Now also concentrates on open source.

What I have found interesting reading these articles and posts is the amount of confusion and misinformation about open source. Here I add my own thoughts to these discussions.

  1. Open Source is cheaper than closed source

    This is a classic misconception. While it is often the case that open source does not carry the same initial licence cost as closed source solutions, a proper comparison requires analysis of the respective total cost of ownership (TCO). For example, in a typical corporate situation key infrastructure components require support in line with the organisation’s business needs. In a closed source situation the software vendor typically provides this as part of its maintenance agreement; in an open source situation, since there often isn’t a software vendor as such, a third-party organisation must provide this support. The organisation procuring such an open source solution must satisfy itself that any such support vendor has sufficient competence and expertise in the software to be able to support it. A good example of such an organisation is Red Hat, who provide support for Red Hat Linux (amongst many other open source products). The key point is that lifetime costs including training, support, upgrades etc. must be included in the TCO calculation (a rough illustration of such a comparison follows this list).

  2. Open Source is less secure than closed source

    This is somewhat more contentious. I have previously heard it used as an argument (by non-technical people) for not using open source. I would argue that open source solutions are in fact more secure than closed source ones, since the opportunity for unlimited peer review of open source code significantly reduces the risk of security vulnerabilities persisting, whereas closed source solutions effectively rely on security through obscurity. The open source approach is similar to the cryptographic community’s practice of peer-reviewing crypto algorithms.

  3. Open Source is easier to modify than closed source

    It is self-evidently true that in principle anyone can modify open source code. However, in practice modifying open source code is not for the faint-hearted – these are often complex and sophisticated enterprise applications. It’s fine for Yahoo and Google engineers to modify open source software, since their businesses are based on software. However, for organisations for whom software is an enabler rather than a core business asset, such sophisticated software development will not typically be a core competence. For such organisations self-modification of code is not really an option unless there is a desire to diversify the business into software development! This means, for example, that most adopters of OpenOffice are unlikely to modify the code themselves.

  4. Open Source is supported by dedicated individuals who freely give up their time

    There are undoubtedly many dedicated developers who give up their own time to write or modify code for open source applications. However, there are also many open source products for which major chunks are developed by large organisations with salaried employees. Red Hat is an example of this; similarly, Yahoo contributes to many open source projects through the work of its salaried engineers. Daniel Pink’s idealised view of open source as the output of individuals motivated by something other than normal corporate rewards isn’t totally accurate.
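As promised under point 1, here is a rough sketch of a lifetime cost comparison. Every figure below is invented purely for illustration – the point is the shape of the calculation, not the numbers.

```python
# Illustrative five-year TCO comparison (all figures are invented).
YEARS = 5

def tco(licence: float, support_per_year: float,
        training: float, upgrades: float) -> float:
    """Total cost of ownership: up-front costs plus recurring support."""
    return licence + training + upgrades + support_per_year * YEARS

closed = tco(licence=100_000, support_per_year=20_000,
             training=10_000, upgrades=15_000)
open_src = tco(licence=0, support_per_year=35_000,
               training=25_000, upgrades=5_000)

print(f"Closed source 5-year TCO: {closed:>9,.0f}")   # 225,000
print(f"Open source 5-year TCO:  {open_src:>9,.0f}")  # 205,000
```

Depending on the support and training assumptions, either option can come out cheaper – which is exactly why the initial licence fee alone tells you very little.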

Thursday, March 04, 2010

Software Patents Gone Mad?

Is it just me, or is Apple trying to claim a patent for pub/sub? See this article. According to this, Apple is claiming a patent over

“A system in which a software module called an event consumer can indicate an interest in receiving notifications about a specific set of events, and it provides an architecture for efficiently providing notifications to the [event] consumer”
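For anyone unfamiliar with the pattern, that description is essentially textbook publish/subscribe, which long predates the filing (think CORBA event services or JMS topics). Here is a minimal sketch of my own – the names and structure are illustrative and have nothing to do with Apple’s implementation:

```python
from collections import defaultdict
from typing import Any, Callable

# Minimal publish/subscribe sketch (illustrative only): consumers
# register interest in named events and a broker notifies them.

class EventBroker:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, event: str, consumer: Callable[[Any], None]) -> None:
        """An event consumer indicates an interest in a set of events."""
        self._subscribers[event].append(consumer)

    def publish(self, event: str, payload: Any) -> None:
        """Provide notifications to every consumer registered for this event."""
        for consumer in self._subscribers[event]:
            consumer(payload)

broker = EventBroker()
broker.subscribe("file.changed", lambda p: print(f"consumer saw: {p}"))
broker.publish("file.changed", "/tmp/example.txt")  # consumer saw: /tmp/example.txt
```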

What is interesting is that the pretenders to Microsoft’s crown are now exhibiting the same kind of behaviour for which Microsoft used to be criticised.

Friday, February 26, 2010

Are Standards Good for Consumers?

Last week’s Mobile World Congress produced the interesting announcement that a number of industry members are joining together to form an industry association (the Wholesale Applications Community, or WAC) dedicated to providing a common application platform for mobile phones. This, combined with Bruno’s thoughtful blog on Apple and Flash/Java, got me thinking...

According to the announcement “The alliance's stated goal is to create a wholesale applications ecosystem that – from day one – will establish a simple route to market for developers to deliver the latest innovative applications and services to the widest possible base of customers around the world.”

It is interesting that this is an operator-led initiative; none of the major platform vendors (Apple, Microsoft, Google or Nokia) is currently involved. A simple interpretation is that this is an attempt by the operators to reclaim the initiative: the services they provide are being commoditised, while industry differentiation now comes from the mobile device platforms provided by Apple et al. These platforms provide the features that enable the rich ecosystem of applications which has created a whole new sub-industry. If the operators don’t get a piece of this, they will be consigned to building masts and sending bills until they go the way of the dinosaurs.

However it also raises an interesting broader technology issue: does the consumer benefit from this standardisation? Discussions about technology standards almost always invoke the example of VHS vs Betamax as the rationale for the consumer benefits of standardisation. However doesn’t standardising the application platform take away the ability of the device manufacturers to differentiate themselves by providing distinctive features? If the argument works for mobile devices, why not for laptops? Desktops? Servers? If it’s such a great idea, why did MSX fail?

To my mind the key difference is whether we are talking about functionality or content. VHS vs Betamax was important to consumers because they wanted a standard content delivery mechanism - as long as the machine could play the content, essentially the functionality of the machine was irrelevant. So standards for defining and delivering content are good for the consumer. (HTML is another good example of this.)

Standards for functionality restrict the features available to consumers, creating monopolies and stifling innovation. This is bad for consumers. Having a diverse market for mobile device platforms is therefore very much to the benefit of consumers. Standardising this platform would be bad for consumers.

By the way, in case no-one told the members of WAC, they are reinventing the wheel - they should check out Java.

Thursday, February 18, 2010

Custom Configuration?

One of the trends that I have noticed recently is the number of firms who are challenging the elements of their IT estate that are custom developed or in some other way non-standard. The reasoning goes that anything non-standard costs more to develop and maintain compared to a vanilla out-of-the-box configuration. This is of course quite true but I think there is a more subtle point here.

Against the cost of any element of the IT estate we need to balance the value generated. In general the standard configuration of a packaged application is merely an aggregation of the most common needs of its existing user base. In many sectors most firms will be using the same set of applications, so by adopting a standard configuration a firm is saying that it is happy to execute its business processes in the same way as most of its competitors.

If a business process constitutes a source of differentiated competitive advantage for the firm, then adopting the standard configuration sacrifices that source of differentiation, and the value it delivers is lost. In this case firms should think very carefully about the cost of maintaining the custom element versus the value the differentiation generates.

If there is no differentiated competitive advantage associated with this business process, it is either a business overhead or a source of cost advantage. Either way the value is not reduced by standardising the business process. So in this latter case it is quite safe to standardise on an out-of-the-box configuration.

Thursday, February 11, 2010

Enterprise vs Solution Architecture Reprised

I currently work as an architect with Cognizant’s Architecture Practice within the Advanced Solutions Group. Just before Christmas we had a working session where we got on to the subject of Enterprise vs Solution Architecture. Coming out of this discussion I reached a couple of conclusions.

Firstly, is Enterprise Architecture just Solution Architecture on a larger scale? The answer, somewhat unhelpfully, is yes and no. Recall from my previous blog the time element of Enterprise Architecture. Solution Architecture doesn’t become Enterprise Architecture just because it gets bigger and/or more complex. However, humans typically deal with scale and complexity by breaking problems down. In this context, if a Solution Architecture becomes sufficiently large and complex it will most likely not be implemented in one go but will instead become the target of a programme of change. The target, together with the roadmap to achieve it, then becomes the Enterprise Architecture for this programme of change.

This led me to a second conclusion, which I have found incredibly helpful when explaining the distinction between Enterprise and Solution Architects to people who are neither. Most IT people have no problem understanding the distinction between a project manager, responsible for delivering a tightly scoped piece of work, and a programme manager, responsible for delivering change over a period of time executed as a series of parallel and/or sequential projects. The appropriate analogy is that an Enterprise Architect is to a Solution Architect as a programme manager is to a project manager.