Monday, January 12, 2015

The New Legacy

As we enter 2015 there have been lots of media articles about emerging technologies in many areas. Technology-enabled businesses are disrupting many established industries. CES, which started this week, is adding to this excitement. Working with clients it’s great to play around with node.js, Angular, wearable tech and so on. Indeed I read a fascinating article about the next step after wearables being ingestibles!

Against this backdrop I had some contrasting tech experiences over the festive period. First, the positive: I needed to buy a new car for my wife as her existing car is getting a bit old. I took a web 1.0 approach and searched on Autotrader, amongst other sites, to find the best price for the car we wanted. My wife, however, found a site called carwow.co.uk which combines beautiful presentation with a great business model: we identified the car we wanted and dealers then offered us their best deals directly. We were able to secure a great deal without any of the traditional pain or hassle of negotiating with a car salesperson.

At the other end of the scale I had to sell a few things on eBay at around the same time. This isn’t something I have done for a few years, so I was taken aback by how little the UI and UX have changed since I last used it. The entire process was painful, with frequent redirection to the wrong URL, multiple losses of data I had entered (leading to re-entry), and a still-unresolved problem with uploading pictures, which seems to go into an infinite loop.

So what? One website is better than another; nothing new there. My point is that eBay was at the leading edge of online trading sites. I can remember pioneering work originated by eBay on optimal J2EE design patterns, in the days when J2EE (as it was then) really was leading edge. There is certainly no shortage of smart people working at eBay, even if it isn’t necessarily the most fashionable employer any more. My underlying fear is that this is an example of something I am seeing more and more frequently: the new legacy. Normally when we talk about legacy systems we are thinking of mainframes or client/server apps. But given the volume of technology being deployed and the pace of change, the new legacy is about systems which might be only five years old. They are characterised by an inability to evolve to serve the changing needs of the business. Sometimes they are the result of poor design; other times they reflect the dreaded practice of deploying prototypes into production; in other cases the business needs have changed in ways that weren’t envisaged when the system was developed.

For normal businesses the new legacy is problematic; for technology-enabled businesses it can be fatal. Businesses evolve as customer needs change, so the underlying technology needs to be flexible enough to adapt quickly to this evolution. I attended a talk that Pete Marsden, CIO of asos.com, gave last year in which he talked, amongst other things, about this need for flexibility in your architecture. However, in the same way that introducing agile is a business-wide consideration, having a flexible architecture is only useful if that flexibility is actually exercised on a regular basis. Otherwise the fear of change creates an inflexible technology platform irrespective of the underlying architectural genius!

Tuesday, July 09, 2013

The intersection between technology and business strategy


    McKinsey recently published an article, "The do-or-die questions boards should ask about technology", which highlights the key questions that need to be answered about the role of IT and technology strategy in the strategy of the overall business. Although this is pitched as something that boards should ask executives, it strikes me that it is just as important for technology leaders within the organisation to know, understand and apply the answers to these questions.

    It may seem obvious that these questions are important, but I have worked in multiple organisations where technology is not considered a means to achieve competitive advantage, but rather is thought of as a high-risk cost centre which needs to be managed in the style of an unexploded bomb. Such organisations typically have limited understanding of the role of technology amongst top management, and this lack of understanding then pervades the corporate culture. High-profile IT programme failures are used as ammunition to justify this approach. What these managers fail to understand is that, unless they happen to be operating in an environment which is totally immune to market forces, if they don't work out how to use technology to enable competitive advantage, a competitor will. Even worse, a new market entrant without the legacy cost base that existing market participants are saddled with could totally disrupt the market.

    Conversely organisations that have a deep understanding of how technology strategy is a key enabler for business success are often at the forefront of either delivering market-leading products and services, or leading the way in reconfiguring the economics of the market by using technology to transform ways of working. Most industries have examples of such organisations. Following this approach is not a one-off programme - it needs to be embedded in the corporate culture if the competitive advantage is to be retained.

    The upshot is that technology leaders need to challenge their companies to ensure that the critical questions identified in the McKinsey article have answers that top management agree on and are executing against.

    Friday, April 19, 2013

    Intellect Stream at HC2013


    I had the pleasure of participating in Intellect's stream at HC2013 last Wednesday. The overall event was very well organised, and the Intellect stream in particular had a great set of speakers. Kicking off the stream were several patients, who gave their perspective on the NHS based on their own experiences of it as service users. A couple of them had previously worked within the NHS, so their experiences had been enlightening for them. They all showed immense bravery in sharing their individual experiences around their own conditions; too often the individual is lost in discussions about tariff, technology, resource management, commissioning and so on. The lessons they fed back to us were sobering in the modesty of their needs; specifically:
    • access to their own records (the degree of access and control sought varied somewhat across the speakers);
    • treat patients as individuals looking at their overall healthcare status and needs, not as one participant in a series of discrete, disconnected transactions;
    • why do patients have to keep providing the same information?
    • in the case of a type 1 diabetic patient, why can't all of the devices he uses to monitor and control his condition talk to each other, and/or to his smartphone?
    • how can we achieve "no decision about me without me" without a sea change in culture and organisation?
    • as a patient, I see one NHS and expect it to interact with me on that basis, not as a dysfunctional collection (my words, not theirs) of organisations grudgingly aware of each other.
    During the subsequent discussion Ewan Davies made the excellent point that objections often made to change (e.g. information governance, less than 100% access to the internet, etc.), while reasonable, should not in themselves be reasons not to change, as is currently the case. In the last session Joe McDonald characterised this as dictatorship by "unanimocracy", i.e. we are unable to do anything unless everyone agrees 100%.

    I participated in the session on technology platform and architecture. This was really an opportunity to stimulate some discussion around the ideas that Paul Cooper, Jon Lindberg and I pulled together in the Intellect paper "The NHS Information Evolution", launched this week. We were joined in our panel discussion by Phil Birchall from InterSystems and Mark Treleaven from FDB, and the discussion was chaired by Andrew Hartshorn, chair of the Intellect Health and Social Care Council. The discussion was engaging and interesting, ranging from the question of how to create the environment that allows beautifully crafted apps to flourish, to the challenge of how to incentivise a view of healthcare ecosystems that goes beyond the narrow, parochial boundaries of individual healthcare provider organisations. A recurring theme was the need to support mobility; another observation was that clinicians have already started using tools such as Facebook and Google Docs to collaborate with other professionals in treating their patients, so we need to move rapidly to provide equivalent functionality which also protects the integrity and security of patient data.

    These themes recurred in the later sessions; the next session involved a number of service developers who talked about some of the challenges they are facing in designing and deploying different kinds of technology services. Gary Shuckford from EMIS talked about his experience, from patient.co.uk, of the kind of information patients are interested in accessing. He also talked about some of the practical difficulties in ensuring take-up of the service. The final session, chaired by Julian David, Director General of Intellect, aimed to pull together the various themes that had come up during the day. What was interesting in this session was the degree of consensus that had been evident throughout the day. There is a clear acceptance that we need to improve how we leverage technology within UK healthcare, and that the things holding us back are far more people-related than they are technology-related.

    Monday, August 22, 2011

    The difficult thing about back-ups

     

    My back-up drive failed recently.

    It was a lucky coincidence that I noticed this; despite grand intentions I’m lazy when it comes to backing things up and only think about it when I am panicking about typing ‘rm -rf’ in the wrong directory.

    The drive failed when I was moving it across my study; I powered it down, unplugged it, moved it, plugged it back in and then fired it up again. Except it declined to fire up. It made a few pathetic wheezing sounds and gave up. After a few days of online searching I admitted defeat and contacted LaCie, who eventually acknowledged it was faulty and issued an RMA. Four weeks later I had a repaired, working drive in place (albeit having lost all of the original data). I am now able to return to my previous state of blissful ignorance.

    The point of this little story? In a nutshell, I think it summarises many enterprises’ attitude to back-ups. I know from personal experience of two organisations which notionally had a standard back-up policy with regular full and incremental back-ups, yet when the back-ups were needed they could not be retrieved because the back-up hardware had failed.

    Then there is the recent case of Amazon. Running infrastructure as complicated and sophisticated as theirs is fraught with risk, so if I were one of their customers I would probably be thinking about having an iron-clad business continuity plan in place, but apparently even here back-up complacency reigns.

    Why is this a big deal? Normal production systems are tested thoroughly prior to go-live and then tested on an ongoing basis through production use. Any problem will be detected automatically or signalled by a user fairly quickly. However, since back-up systems are only invoked by exception, the first time you know there is a problem is when you need them. The answer? Well, regular testing of a full restore from back-up seems like the obvious solution. There are some products on the market which claim to help, but I remain somewhat sceptical of their efficacy.
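    To make that "obvious solution" a bit more concrete, here is a minimal sketch (in Python, with entirely hypothetical paths and a made-up restore command, not any particular product's CLI) of the kind of scheduled job I have in mind: restore a random sample of files into a temporary directory and compare checksums against the live copies, so the restore path gets exercised routinely rather than only in a crisis.

```python
import hashlib
import random
import subprocess
import tempfile
from pathlib import Path

LIVE_ROOT = Path("/data")                    # hypothetical: the directory being backed up
RESTORE_CMD = ["restore-tool", "--target"]   # hypothetical: your back-up product's restore command
SAMPLE_SIZE = 20                             # number of files to spot-check per run


def checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_backup() -> bool:
    """Restore a random sample of files and compare them with the live copies."""
    candidates = [p for p in LIVE_ROOT.rglob("*") if p.is_file()]
    sample = random.sample(candidates, min(SAMPLE_SIZE, len(candidates)))
    with tempfile.TemporaryDirectory() as tmp:
        # Ask the back-up system to restore into the temporary directory.
        subprocess.run(RESTORE_CMD + [tmp], check=True)
        for original in sample:
            restored = Path(tmp) / original.relative_to(LIVE_ROOT)
            if not restored.exists() or checksum(restored) != checksum(original):
                print(f"Verification failed for {original}")
                return False
    return True


if __name__ == "__main__":
    print("Back-up verified" if verify_backup() else "Back-up FAILED verification")
```

    In practice you would compare against the back-up catalogue rather than the live files (which may legitimately have changed since the last back-up), but the principle is the same: the test fails loudly, on a schedule, long before you actually need the restore.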

    Am I eating my own dog food? Alas, I have regressed to my pre-failure days and have adopted the macho “my hardware never fails and I never type rm -rf by mistake” attitude. Some people never learn.

    Wednesday, July 06, 2011

    Government ICT Strategy

    The recently published government ICT strategy makes interesting reading. The government is clearly trying to learn some of the lessons of previous failures, albeit without really getting to grips with the underlying reasons for these failures.

    One of the government's explicit strategy statements is particularly striking:
    The adoption of compulsory open standards will help government to avoid lengthy vendor lock-in, allowing the transfer of services or suppliers without excessive transition costs, loss of data or significant functionality.
    Will open standards really do this?

    Let's turn this on its head; what are the typical reasons for vendor lock-in? Here is my starter for 10 in no particular order:
    1. The software offers must-have features which competitors do not have.
    2. The organisation using the software has adapted its business processes to fit with how the software works.
    3. The organisation using the software has made a considerable investment in training its staff in how to use the software.
    4. The software is integrated with other systems that the organisation uses.
    5. Migrating the data held within the software to an alternative product is difficult and costly.

    Which of these will open standards help with? As far as I can tell, only number 4; the use of open standards in interface specifications should in principle allow substitutability of standards-compliant components on either side of the interface. I say in principle because, for any reasonably sophisticated enterprise system, an interface will be a key part of a business process, which will one way or another be organisation-specific. It is typically a non-trivial task (in some cases impossible) to substitute another component into this business process without impacting the business process itself, leading back to item 2 in the above list.
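    To illustrate the point about item 4, here is a minimal, hypothetical sketch (in Python; the interface and vendor names are invented) of what coding to an open interface rather than to a vendor-specific API looks like. The business process depends only on the standard interface, so in principle any compliant implementation can be dropped in behind it; the catch, as argued above, is that this only holds if the behaviour behind the interface really is equivalent.

```python
from typing import Protocol


class DocumentStore(Protocol):
    """A hypothetical open, standards-defined interface for storing documents."""

    def save(self, doc_id: str, content: bytes) -> None: ...
    def fetch(self, doc_id: str) -> bytes: ...


class VendorAStore:
    """One standards-compliant implementation (an imaginary vendor, in-memory for brevity)."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def save(self, doc_id: str, content: bytes) -> None:
        self._data[doc_id] = content

    def fetch(self, doc_id: str) -> bytes:
        return self._data[doc_id]


def archive_invoice(store: DocumentStore, invoice_id: str, pdf: bytes) -> None:
    """The business process codes against the interface, not against VendorAStore."""
    store.save(f"invoice/{invoice_id}", pdf)


# Swapping VendorAStore for any other compliant implementation requires no change here.
archive_invoice(VendorAStore(), "2011-0042", b"%PDF-...")
```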

    Don't get me wrong: open standards are great and to be applauded - I glory in my ability to choose my browser according to my mood. However, let's not kid ourselves that by adopting them we are going to see a public sector IT world free of Oracle and MS Office any time soon.

    Tuesday, December 21, 2010

    The Assumption of Independence in the Financial Systems Failure

    I have been spending a lot of time recently reading some of the plethora of books that have been published which either provide inside accounts of the 2008 failures in the banking sector (Lehmans and Bear Stearns) or try to analyse the causes of this failure. Though I’m not an economist by any stretch of the imagination, having studied financial strategy I have developed something of a morbid fascination for this topic, a little along the lines of an episode of Columbo where you know the outcome but the interest comes from finding out how Columbo will prove the perpetrator’s guilt. In the case of the 2008 crash it is common knowledge that it was caused by bankers taking risks purely to maximise their own bonuses, isn’t it?

    Going beyond the superficial mass-media level reveals something slightly more interesting. There was certainly unjustifiable risk taking, but this in itself ought not to have caused the systemic failure that occurred. One of the early failures which triggered the collapse of the dominoes was the collapse of two hedge funds dealing in derivatives run by Bear Stearns. Right up until the point at which these funds were liquidated, the fund managers were maintaining that the funds were diversified, so their exposure to subprime mortgages was limited. However, subsequent investigation showed that in fact over 70% of the cash invested in these funds had been spent on mortgage-backed derivatives. This matters because it is a key tenet of investment strategy that funds should be diversified so that losses in one area are compensated by gains in other areas. Diversification fails as a strategy when losses in one area trigger losses in another, i.e. even though a portfolio may be diversified there may be dependencies between its holdings. This is what happened in 2008: losses in sub-prime triggered losses in other areas, leading to large-scale failures in supposedly resilient, diversified funds.

    So why bring this up in a blog supposedly devoted to technology? Having the memory of an elephant, I was reminded of a couple of papers that were published in the 80s. The first paper, "The N-Version Approach to Fault-Tolerant Software", looked at how software risk could be massively reduced by copying the idea of hardware redundancy in software, a technique known as n-version programming. Basically the idea was that for a high-integrity system the software should be independently written several times, and then control logic would execute all of the versions in parallel, following the majority vote at each decision point. This was followed up by another paper, "An Experimental Evaluation of the Assumption of Independence in Multi-version Programming", which challenged the hypothesis at the heart of n-version programming: that a failure in one programme would be independent of a failure in another programme. This is a reasonable assumption in hardware, since failures are typically caused by physical characteristics rather than common design flaws. However, the latter paper demonstrated empirically that this was not a safe assumption for software, and therefore n-version programming was dead.
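    For anyone who hasn't come across it, here is a minimal sketch (in Python, nothing to do with either paper's actual experiments) of the n-version idea: the control logic runs each independently written version and follows the majority. The scheme only buys you anything if the versions fail independently, which is precisely the assumption the second paper demolished.

```python
from collections import Counter
from typing import Callable, Sequence

# Three "independently written" implementations of the same specification.
# In a real n-version system these would come from separate teams working
# from the spec alone; here they are trivial stand-ins.
def version_a(x: float) -> float:
    return x * x

def version_b(x: float) -> float:
    return pow(x, 2)

def version_c(x: float) -> float:
    return x ** 2


def majority_vote(versions: Sequence[Callable[[float], float]], x: float) -> float:
    """Run every version and return the result the majority agrees on."""
    results = [f(x) for f in versions]
    value, count = Counter(results).most_common(1)[0]
    if count <= len(versions) // 2:
        # A dependent (common-mode) failure shows up either as no usable
        # majority, or worse, as a confident majority that is wrong.
        raise RuntimeError("No majority: versions disagree")
    return value


print(majority_vote([version_a, version_b, version_c], 3.0))  # 9.0
```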

    Fast forward 25 years and what do we see? The 2008 crash was effectively the result of dependent failure in a system which assumed failures were independent. Spooky, eh? Normally software mimics life, but in this instance software technology seems to have got there first!

    Friday, July 30, 2010

    Open Source Confusion

     

    I have dabbled with open source for many years, both as a user and briefly as a developer. I personally like the idea that if there is a problem with the software I can fix it myself rather than having to wait for the vendor to fix it. I have therefore followed with interest some recent thoughts about open source, in particular in two recent forums.

    The first is the Department of Health’s decision not to continue its enterprise-wide agreement with Microsoft, which has triggered some discussion about open source. At the same time, but coincidentally, the latest issue of IT Now concentrates on open source.

    What I have found interesting reading these articles and posts is the amount of confusion and misinformation about open source. Here I add my own thoughts to these discussions.

    1. Open Source is cheaper than closed source

      This is a classic misconception. While it is often the case that open source does not have the same initial license cost as closed source solutions, proper comparison of the two requires analysis of their respective total cost of ownership. For example, in a typical corporate situation key infrastructure components require support in line with the organisation’s business needs. In a closed source situation the software vendor typically provides this as part of its maintenance agreement; in an open source situation, since there often isn’t a software vendor as such, a third-party organisation must provide this support. The organisation procuring such an open source solution must satisfy itself that any such support vendor has sufficient competence and expertise in the software to be able to support it. A good example of such an organisation is Red Hat, who provide support for Red Hat Linux (amongst many other open source products). The key point here is that lifetime costs, including training, support and upgrades, must be included in the TCO calculation, as the sketch below illustrates.
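      As a sketch of what that comparison looks like, here is a trivial Python illustration with entirely hypothetical figures (they are not real costs for any product): the option with no licence cost can still come out more expensive over the system's lifetime once support, training and upgrade effort are counted, and of course the arithmetic can just as easily go the other way.

```python
# All figures are hypothetical, purely to illustrate the shape of a TCO comparison.
YEARS = 5

closed_source = {
    "licences": 120_000,        # one-off licence purchase
    "annual_support": 25_000,   # vendor maintenance agreement
    "annual_training": 5_000,
    "annual_upgrades": 5_000,
}

open_source = {
    "licences": 0,              # no initial licence cost
    "annual_support": 40_000,   # third-party support contract
    "annual_training": 12_000,  # scarcer skills, more training needed
    "annual_upgrades": 8_000,   # in-house effort to track releases
}


def tco(costs: dict, years: int = YEARS) -> int:
    """Initial licence cost plus recurring annual costs over the comparison period."""
    recurring = sum(v for k, v in costs.items() if k.startswith("annual_"))
    return costs["licences"] + recurring * years


print("Closed source TCO:", tco(closed_source))  # 295000
print("Open source TCO:  ", tco(open_source))    # 300000
```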

    2. Open Source is less secure than closed source

      This is somewhat more contentious. I have previously heard it used as an argument (by non-technical people) for not using open source. I would argue that open source solutions are more secure than closed source, since the opportunity for unlimited peer review of open source code significantly reduces the risk of security vulnerabilities persisting, compared with closed source solutions which effectively rely on security by obfuscation. The open source approach is similar to the practice in the cryptographic community of peer reviewing crypto algorithms.

    3. Open Source is easier to modify than closed source

      It is self-evidently true that in principle anyone can modify open source code. However, in practice modifying open source code is not for the faint-hearted – these are often complex and sophisticated enterprise applications. It’s fine for Yahoo and Google engineers to modify open source software, since their businesses are based on software. However, for organisations for whom software is an enabler rather than a core business asset, such sophisticated software development will not typically be a core competence. For such organisations, self-modification of code is not really an option unless there is a desire to diversify the business into software development! This means, for example, that most adopters of OpenOffice are unlikely to modify the code themselves.

    4. Open Source is supported by dedicated individuals who freely give up their time

      There are undoubtedly many dedicated developers who give up their own time to write or modify code for open source applications. However, there are also many open source products for which major chunks are developed by large organisations with salaried employees; Red Hat is an example of this. Similarly, Yahoo contributes to many open source projects based on work that its salaried engineers perform. Daniel Pink’s idealised view of open source as the output of individuals motivated by something other than normal corporate rewards isn’t totally accurate.