Paul Mukherjee's Blog
Tuesday, July 09, 2013
The intersection between technology and business strategy
It may seem obvious that these questions are important, but I have worked in multiple organisations where technology is not considered a means to achieve competitive advantage, but rather is thought of as a high-risk cost centre which needs to be managed in the style of an unexploded bomb. Such organisations typically have limited understanding of the role of technology amongst top management, and this lack of understanding then pervades the corporate culture. High-profile IT programme failures are used as ammunition to justify this approach. What these managers fail to understand is that, unless they happen to be operating in an environment which is totally immune to market forces, if they don't work out how to use technology to enable competitive advantage, a competitor will. Even worse, a new market entrant without the legacy cost base that existing market participants are saddled with could totally disrupt the market.
Conversely, organisations that have a deep understanding of how technology strategy is a key enabler of business success are often at the forefront of either delivering market-leading products and services, or leading the way in reconfiguring the economics of the market by using technology to transform ways of working. Most industries have examples of such organisations. Following this approach is not a one-off programme; it needs to be embedded in the corporate culture if the competitive advantage is to be retained.
The upshot is that technology leaders need to challenge their companies to ensure that the critical questions identified in the McKinsey article have answers that top management agree on and are executing against.
Friday, April 19, 2013
Intellect Stream at HC2013
I had the pleasure of participating in Intellect's stream at HC2013 last Wednesday. The overall event was very well organised, and the Intellect stream in particular had a great set of speakers. Kicking off the stream were several patients, who gave their perspective on the NHS based on their own experiences of it as service users. A couple of them had previously worked within the NHS, so their experiences as patients had been enlightening for them. They all showed immense bravery in sharing their individual experiences around their own conditions; too often the individual is lost in discussions about tariff, technology, resource management, commissioning and so on. The lessons they fed back to us were sobering in the modesty of their needs; specifically:
- access to their own records (the degree of access and control desired varied somewhat across the speakers);
- treat patients as individuals, looking at their overall healthcare status and needs, not as one participant in a series of discrete, disconnected transactions;
- why do patients have to keep providing the same information?
- in the case of a type 1 diabetic patient, why can't all of the devices he uses to monitor and control his condition talk to each other, and/or to his smartphone?
- how can we achieve "no decision about me without me" without a sea change in culture and organisation?
- as a patient, I see one NHS and expect it to interact with me on that basis, not as a dysfunctional collection (my words, not theirs) of organisations grudgingly aware of each other.
I participated in the session on technology platform and architecture. This was really an opportunity to stimulate some discussion around the ideas that Paul Cooper, Jon Lindberg and I pulled together in the Intellect paper "The NHS Information Evolution", launched this week. We were joined in our panel discussion by Phil Birchall from InterSystems and Mark Treleaven from FDB, and the discussion was chaired by Andrew Hartshorn, chair of the Intellect Health and Social Care Council. The discussion was engaging and interesting, ranging from the question of how to create the environment that allows beautifully crafted apps to flourish, to the challenge of how to incentivise a view of healthcare ecosystems that goes beyond the narrow, parochial boundaries of individual healthcare provider organisations. A recurring theme was the need to support mobility; another observation was that clinicians have already started using tools such as Facebook and Google Docs to collaborate with other professionals in treating their patients, so we need to move rapidly to provide equivalent functionality which also protects the integrity and security of patient data.
These themes recurred in the later sessions; the next session involved a number of service developers who talked about some of the challenges they are facing in designing and deploying different kinds of technology services. Gary Shuckford from EMIS talked about his experience, from patient.co.uk, of the kind of information that patients are interested in accessing. He also talked about some of the practical difficulties in ensuring take-up of the service. The final session, chaired by Julian David, Director General of Intellect, aimed to pull together various themes that had come up during the day. What was interesting in this session was the degree of consensus that had been evident throughout the day. There is a clear acceptance that we need to improve how we leverage technology within UK healthcare, and that the things holding us back are far more people-related than technology-related.
Monday, August 22, 2011
The difficult thing about back-ups
My back-up drive failed recently.
It was a lucky coincidence that I noticed this; despite grand intentions I'm lazy when it comes to backing things up and only think about it when I am panicking about having typed 'rm -rf' in the wrong directory.
The drive failed when I was moving it across my study; I powered it down, unplugged it, moved it, plugged it back in and then fired it up again. Except it declined to fire up. It made a few pathetic wheezing sounds and gave up. After a few days of online searching I admitted defeat and contacted LaCie, who eventually acknowledged it was faulty and issued an RMA. Four weeks later I had a repaired, working drive in place (albeit having lost all of the original data). I am now able to return to my previous state of blissful ignorance.
The point of this little story? In a nutshell, I think it summarises many enterprises' attitude to back-ups. I know from personal experience of two organisations which notionally had a standard back-up policy, with regular full and incremental back-ups, where the back-ups could not be retrieved when they were needed because the back-up hardware had failed.
Then there is the recent case of Amazon. Running infrastructure as complicated and sophisticated as theirs is fraught with risk, so if I were one of their customers I would probably be thinking about having an iron-clad business continuity plan in place; but apparently even here back-up complacency reigns.
Why is this a big deal? Normal production systems are tested thoroughly prior to go-live and then tested on an ongoing basis through production use. Any problem will be detected automatically or signalled by a user fairly quickly. However, since back-up systems are only invoked by exception, the first time you know there is a problem is when you need them. The answer? Regular testing of a full restore from back-up seems like the obvious solution. There are some products on the market which claim to help, but I remain somewhat sceptical of their efficacy.
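To make that concrete, here is a minimal sketch in Python of the kind of scheduled restore test I have in mind. It assumes a simple file-based back-up; the paths are hypothetical and the copy-based "restore" step is a stand-in for driving whatever back-up tool you actually use. A production version would also compare checksums rather than relying on file metadata.

```python
#!/usr/bin/env python3
"""Sketch of a periodic restore test for a simple file-based back-up."""
import filecmp
import shutil
import tempfile
from pathlib import Path

SOURCE_ROOT = Path("/data/important")        # hypothetical live data
BACKUP_ROOT = Path("/mnt/backup/important")  # hypothetical back-up copy


def dirs_match(cmp: filecmp.dircmp) -> bool:
    """Recursively check that two directory trees contain the same entries and files."""
    if cmp.left_only or cmp.right_only or cmp.diff_files or cmp.funny_files:
        return False
    return all(dirs_match(sub) for sub in cmp.subdirs.values())


def test_restore(source: Path, backup: Path) -> bool:
    """Restore the back-up into a scratch area and compare it with the live data."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / "restored"
        shutil.copytree(backup, restored)  # stand-in for the real restore command
        return dirs_match(filecmp.dircmp(source, restored))


if __name__ == "__main__":
    ok = test_restore(SOURCE_ROOT, BACKUP_ROOT)
    print("restore test passed" if ok else "restore test FAILED")
    raise SystemExit(0 if ok else 1)
```

Run something like this on a weekly schedule and the failure mode described above, discovering the back-up is broken only at the moment you need it, largely disappears.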
Am I eating my own dog food? Alas, I have regressed to my pre-failure days and adopted the macho "my hardware never fails and I never type rm -rf by mistake" attitude. Some people never learn.
Wednesday, July 06, 2011
Government ICT Strategy
The strategy states that "the adoption of compulsory open standards will help government to avoid lengthy vendor lock-in, allowing the transfer of services or suppliers without excessive transition costs, loss of data or significant functionality." In practice, vendor lock-in arises whenever one or more of the following hold:
- The software offers must-have features which competitors do not have.
- The organisation using the software has adapted its business processes to fit with how the software works.
- The organisation using the software has made a considerable investment in training its staff in how to use the software.
- The software is integrated with other systems that the organisation uses.
- The data held in the software would be difficult or costly to migrate to a replacement system.
Tuesday, December 21, 2010
The Assumption of Independence in the Financial Systems Failure
I have been spending a lot of time recently reading some of the plethora of books that have been published which either provide inside accounts of the 2008 failures in the banking sector (Lehmans and Bear Stearns) or try to analyse the causes of those failures. Though I'm not an economist by any stretch of the imagination, having studied financial strategy I have developed something of a morbid fascination for this topic, a little like an episode of Columbo, where you know the outcome but the interest comes from finding out how Columbo will prove the perpetrator's guilt. In the case of the 2008 crash it is common knowledge that it was caused by bankers taking risks purely to maximise their own bonuses, isn't it?
Going beyond the superficial mass-media level reveals something slightly more interesting. There was certainly unjustifiable risk-taking, but this in itself ought not to have caused the systemic failure that occurred. One of the early events which started the dominoes falling was the collapse of two hedge funds dealing in derivatives run by Bear Stearns. Right up until the point at which these funds were liquidated, the fund managers were maintaining that the funds were diversified, so their exposure to subprime mortgages was limited. However, subsequent investigation showed that in fact over 70% of the cash invested in these funds had been put into mortgage-backed derivatives. This matters because it is a key tenet of investment strategy that funds should be diversified, so that losses in one area are compensated by gains in other areas. Diversification fails as a strategy when losses in one area trigger losses in another, i.e. even though a portfolio may be diversified, there may be dependencies between its holdings. This is what happened in 2008: losses in subprime mortgages triggered losses in other areas, leading to large-scale failures of supposedly resilient, diversified funds.
So why bring this up in a blog supposedly devoted to technology? Having the memory of an elephant, I was reminded of a couple of papers published in the 1980s. The first, "The N-Version Approach to Fault-Tolerant Software", looked at how software risk could be massively reduced by copying the idea of hardware redundancy in software, a technique known as n-version programming. Basically, the idea was that for a high-integrity system the software should be independently written several times (typically three), with control logic executing all of the versions in parallel and following the majority vote at each decision point. This was followed up by another paper, "An Experimental Evaluation of the Assumption of Independence in Multi-version Programming", which challenged the hypothesis at the heart of n-version programming: that a failure in one version would be independent of a failure in another. This is a reasonable assumption in hardware, since failures are typically caused by physical characteristics rather than common design flaws. However, the latter paper demonstrated empirically that this was not a safe assumption for software, and with that n-version programming was effectively dead.
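To give a flavour of the idea, here is a minimal sketch of n-version voting in Python. The three "versions" are hypothetical stand-ins for independently written implementations of the same specification, and for simplicity the vote here is taken on the final result rather than at each internal decision point.

```python
from collections import Counter
from typing import Callable, Sequence


def n_version_vote(versions: Sequence[Callable[[float], float]], x: float) -> float:
    """Run every independently written version and return the majority result.

    If no result commands a majority, the voter cannot mask the fault and raises.
    """
    results = [version(x) for version in versions]
    value, count = Counter(results).most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError(f"no majority among results: {results}")
    return value


# Hypothetical stand-ins for three independently developed implementations
# of the same specification (compute x squared); version_c contains a bug.
def version_a(x: float) -> float:
    return x * x


def version_b(x: float) -> float:
    return x ** 2


def version_c(x: float) -> float:
    return x * x + 1  # the deliberate fault, outvoted by the other two versions


if __name__ == "__main__":
    print(n_version_vote([version_a, version_b, version_c], 3.0))  # prints 9.0
```

The scheme only masks faults while the versions fail independently; if two of the three share a common design flaw, the voter happily returns the wrong answer, which is precisely the assumption the second paper demolished.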
Fast forward 25 years and what do we see? The 2008 crash was effectively the result of dependent failures in a system which assumed failures were independent. Spooky, eh? Normally software mimics life, but in this instance software technology seems to have got there first!
Friday, July 30, 2010
Open Source Confusion
I have dabbled with open source for many years, both as a user and briefly as a developer. I personally like the idea that if there is a problem with the software I can fix it myself rather than having to wait for the vendor to fix it. I have therefore followed with interest some recent thoughts about open source, in particular in two forums.
The first is the Department of Health's decision not to continue its enterprise-wide agreement with Microsoft, which has triggered some discussion about open source. At the same time, and coincidentally, the latest issue of IT Now concentrates on open source.
What I have found interesting reading these articles and posts is the amount of confusion and misinformation about open source. Here I add my own thoughts to these discussions.
- Open Source is cheaper than closed source
This is a classic misconception. While it is often the case that open source does not have the same initial licence cost as closed source solutions, a proper comparison of the two requires analysis of the respective total cost of ownership. For example, in a typical corporate situation key infrastructure components require support in line with the organisation's business needs. In a closed source situation the software vendor typically provides this as part of their maintenance agreement; in an open source situation, since there often isn't a software vendor as such, a third-party organisation must provide this support. The organisation procuring such an open source solution must satisfy itself that any such support vendor has sufficient competence and expertise in the software to be able to support it. A good example of such an organisation is Red Hat, who provide support for Red Hat Linux (amongst many other open source products). The key point here is that lifetime costs, including training, support, upgrades and so on, must be included in the TCO calculation.
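As a trivial illustration of what that comparison involves, here is a sketch in Python; the cost categories follow the discussion above, and all of the figures are hypothetical placeholders rather than real prices.

```python
def total_cost_of_ownership(licence: float, training: float,
                            support_per_year: float, upgrade_per_year: float,
                            years: int) -> float:
    """Lifetime cost = up-front costs plus recurring costs over the planning horizon."""
    return licence + training + (support_per_year + upgrade_per_year) * years


# Hypothetical placeholder figures for a five-year comparison.
closed = total_cost_of_ownership(licence=100_000, training=10_000,
                                 support_per_year=20_000, upgrade_per_year=5_000,
                                 years=5)
open_src = total_cost_of_ownership(licence=0, training=25_000,
                                   support_per_year=35_000, upgrade_per_year=0,
                                   years=5)
print(f"closed source TCO: {closed:,.0f}  open source TCO: {open_src:,.0f}")
```

The point is not which number comes out lower (that depends entirely on the real figures), but that the zero in the licence column tells you very little on its own.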
- Open Source is less secure than closed source
This is somewhat more contentious. I have previously heard it used as an argument (by non-technical people) for not using open source. I would argue that open source solutions are more secure than closed source ones, since the opportunity for unlimited peer review of open source code significantly reduces the risk of security vulnerabilities persisting, compared to closed source solutions which effectively rely on security through obscurity. The open source approach is similar to the practice in the cryptographic community of peer review of crypto algorithms.
- Open Source is easier to modify than closed source
It is self-evidently true that in principle anyone can modify open source code. However, in practice modifying open source code is not for the faint-hearted: these are often complex and sophisticated enterprise applications. It is fine for Yahoo and Google engineers to modify open source software, since their businesses are based on software. However, for organisations for whom software is an enabler rather than a core business asset, such sophisticated software development will not typically be a core competence. For such organisations, self-modification of the code is not really an option unless there is a desire to diversify the business into software development! This means, for example, that most adopters of OpenOffice are unlikely to modify the code themselves.
- Open Source is supported by dedicated individuals who freely give up their time
There are undoubtedly many dedicated developers who give up their own time to write or modify code for open source applications. However, there are also many open source products for which major chunks are developed by large organisations with salaried employees. Red Hat is an example of this; similarly, Yahoo contributes to many open source projects based on the work of its salaried engineers. Daniel Pink's idealised view of open source as the output of individuals motivated by something other than normal corporate rewards isn't totally accurate.