Sunday, June 15, 2008

The Resurrection of the Power Mac

For the last four years I have used a Power Mac G5 with dual 2.5GHz processors as my main machine at home. During this time the machine has been pretty much bullet-proof; it stays turned on most of the time (though in sleep mode when I am at work); the only time it has been switched off is when I have moved house! The one alteration I have made to it was to add an additional hard disk last year.

Last Tuesday, without warning, the machine stopped - it was not even a graceful shutdown. When I tried to restart it, I could hear the fans spinning but no video signal was generated. I tried the various boot options (single user mode, booting from the optical drive, and so on), none of which worked. Stumped, I went to bed to think about it overnight. In the morning when I tried again, it booted first time. I put it into sleep mode and went to work. When I returned in the evening I started using it, and after 20 minutes or so it put itself into sleep mode. It repeated this several times until it just stopped again.

I decided to go back to basics so I disconnected all devices except the screen, keyboard and mouse, in case there was a hardware conflict causing the problem. That didn't help. At this point I was starting to get desperate, and concluded that there was probably a hardware fault on the machine. I therefore contacted a local company who are authorised for Apple service, and they suggested I bring the machine in and they could run some diagnostics for me. I couldn't do it until the weekend so I had a couple of days to try to resolve the issue myself.

In a vain attempt to do something useful I opened up the case and was greeted by several thick dust bunnies. There was nothing visibly wrong internally, but, embarrassed at the thought of the service engineer seeing the amount of dust in the case, I put a brush nozzle on the end of our vacuum cleaner and sucked out all of the dust. When I looked more carefully I could see that the air ducts onto the CPU cooler (it is liquid cooled) were totally clogged, so I used the vacuum cleaner to remove most of that dust as well. Feeling satisfied that I would not be delivering a dust-filled machine to be repaired, I put the case back together and tried one last time to start it up.

Hey presto, it worked! That was four days ago, and I have not had a problem with it since. The conclusion I have reached is that the CPUs were getting too hot, which was causing the machine to shut down and then refuse to start up until they had cooled down. I am guessing this is a feature built into the chipset to prevent permanent damage. I have therefore installed Temperature Monitor so that I can keep an eye on the temperatures and avoid the problem in the future.

It gives a whole new meaning to the phrase "clean down the machine"!

Saturday, May 24, 2008

Architects According to The Architecture Journal

I have been subscribing to Microsoft's Architecture Journal for over a year now. It drops into my letter box every two months and gives me something interesting to read on my train journey in to work. Normally (as one would expect) it is quite Microsoft-centric, so it needs to be understood in that context, but nevertheless it is interesting to see how Microsoft technologies can be applied to problems, what the best practices are, and what solution patterns are emerging around these technologies.

The latest issue (Journal 15) has the theme of "The Role of an Architect". Under this umbrella there are several articles that explore the nature of the architect role, and highlight some of the challenges involved in being an architect.

The first article is "We Don't Need No Architects" by Joseph Hofstader. It is an interesting piece identifying the role to be played by architects in projects and distinguishing it from the role of developers. It makes a number of critical points about the skills an architect needs, such as the ability to think abstractly and conceptually and the ability to understand and leverage patterns. However, in my opinion it answers only a small part of the question: everything the article states is quite true, but it covers only a fraction of the architect's role.

The article does not distinguish between solution architects and enterprise architects, but implicitly it refers to solution architects. Within the solution architect role, soft skills are critical to success. For example:
  • People - in anything other than the smallest organisation, a solution architect needs to work in a cooperative, consensual manner, ensuring buy-in for the solution architecture from all key stakeholders. Without this buy-in there can be no confidence that the solution as implemented will match the solution architecture.
  • Politics - related to the previous point, there is an element of organisational politics that solution architects need to be aware of; I'm not suggesting that solution architects need to be Machiavellian and manipulative, but all organisations have politics and if a solution architecture is to be accepted by key stakeholders these political pressures and drivers need to be understood.
  • Commercial - in principle commercial drivers should be captured as solution requirements that drive the solution architecture. The reality in my experience is that commercial drivers are never documented in that sense (and an organisation may not want to document such drivers), so a solution architect needs to be aware of these drivers when making architectural decisions. For example, a difficult commercial relationship with a particular supplier might mean that an abstraction layer should be placed around that supplier's API to insulate the solution from a future change of supplier (a minimal sketch of this idea follows after this list).
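
To make that last point concrete, here is a minimal sketch of the kind of abstraction layer I mean. The PaymentGateway interface, the "Acme" supplier and its AcmePaymentsClient SDK class are all invented for illustration:

// PaymentGateway.java - the interface the rest of the solution codes against
public interface PaymentGateway {
    String authorise(String account, long amountInPence);
}

// AcmePaymentGateway.java - a thin adapter over the (made-up) supplier SDK
public class AcmePaymentGateway implements PaymentGateway {

    private final AcmePaymentsClient client = new AcmePaymentsClient(); // hypothetical supplier class

    public String authorise(String account, long amountInPence) {
        // translate between our interface and the supplier-specific API; changing
        // supplier means writing a new adapter, not reworking the whole solution
        return client.charge(account, amountInPence);
    }
}
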
So how much of the role is the IT architect role that Joe describes, and how much is soft skills? The answer depends very much on the size of the organisation and the size of the project; my own experience is that 80-90% is soft skills, but a very experienced enterprise architect I spoke to recently estimated 95%.

Tuesday, May 13, 2008

The Software Value Chain

I have read some interesting articles recently: "How Google Works" gives a fascinating overview of Google's pioneering approach to providing a high-performance service using commodity hardware. Similarly Amazon's Dynamo storage system provides an innovative approach to the problem of storing resilient persistent data without using an ACID database. What is striking in both cases is how industry norms are not just being ignored, but are being turned on their head. For many organisations the cost and risk of developing and maintaining a proprietary persistence layer would outweigh the performance benefits gained.

At first I thought that this was an issue of businesses whose core competence is not software choosing to effectively outsource these elements of their value chain, e.g. by using a J2EE app server and an Oracle database. However, thinking more about this, there are lots of businesses with a highly sophisticated approach to software that do not dare go against these industry norms. Why then do Google and Amazon innovate where others fear to tread? I see two reasons for this:
  • By developing a custom infrastructure they maximise their competitive advantage since the infrastructure is tuned to precisely what the business needs, no more and no less. Amazon's Dynamo has been designed to exactly fit in with their business processes (e.g. a shopping cart) giving a performant and lean solution.
  • The custom infrastructure is an enabler for business services through which they gain competitive advantage - how long would it take to serve up a standard Google query if the data was stored in an Oracle database?
Is this the way forward then? The reality is that most IT engineers are not as smart as those that designed Dynamo or worked out how to provide Google with a resilient infrastructure using commodity hardware. Standard platforms such as .Net and J2EE enable average engineers to deliver reliable solutions. Don't rush out to design a custom persistence layer just yet.

Thursday, May 08, 2008

Airport Security and Scalability

Travelling through Heathrow Terminal 4 in January, I was interested to see that the shoe check had been decoupled from the rest of the X-ray security, i.e. there was an initial X-ray check followed, in a separate area, by a security examination of shoes.

This was an excellent example of a tiered architecture: presumably someone had done the analysis showing that the shoe check was creating throughput bottlenecks. By separating out the shoe check it could be scaled independently (i.e. additional resources could be applied to that check alone), allowing greater overall throughput.
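
As a toy illustration of the same idea in software terms, imagine the two checks as separately sized worker pools, so that the bottleneck stage can be given more capacity without touching the other. All the names and pool sizes below are invented:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SecurityPipeline {

    public static void main(String[] args) {
        final ExecutorService xrayLanes = Executors.newFixedThreadPool(2);  // fast stage
        final ExecutorService shoeLanes = Executors.newFixedThreadPool(6);  // bottleneck stage, scaled up

        for (int i = 0; i < 20; i++) {
            final int passenger = i;
            xrayLanes.submit(new Runnable() {
                public void run() {
                    System.out.println("passenger " + passenger + " through X-ray");
                    // hand off to the independently sized shoe-check stage
                    shoeLanes.submit(new Runnable() {
                        public void run() {
                            System.out.println("passenger " + passenger + " shoes checked");
                        }
                    });
                }
            });
        }
        xrayLanes.shutdown();
        // shutting down shoeLanes cleanly is left out to keep the sketch short
    }
}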

I was therefore bemused, travelling through Heathrow's shiny new Terminal 5 last week, to see that the design had reverted to a monolithic single layer with integrated X-ray and shoe check. Yet another move forward for T5?

Wednesday, May 07, 2008

Build vs Buy Part 2

Following on from my previous post about build vs buy, I have thought about this a bit more. I have developed my own application to manage my bank account and credit cards over many years. It sucks data from my online banking service and online credit card statements, allowing me to reconcile transactions and plan our finances based on the bills we are expecting in the future. Recently I thought I would try a commercial package for this purpose, in order to save myself the effort of having to maintain my own code. However the package I bought required fundamental changes to my workflow which I wasn't prepared to make, so in the best traditions of IT projects I have abandoned the package and reverted to my home brewed solution!

Monday, May 05, 2008

Eee PC

I was lucky enough to be given an Eee PC last Christmas. It has taken me a while to get used to it, since its form factor is such that it isn't practical as a daily work tool. However it is so compact and lightweight that it is perfect for when I travel and don't want to take a full-size laptop with me. For example, I take it to the tutorials for the MBA programme I am attending and use it to make notes which I can then quickly upload to my Power Mac when I get home. Similarly, I am typing this entry on said Eee PC from a hotel room in California, where I am currently on holiday.

The Eee PC runs Xandros Linux, which is easy to use (that said I still prefer Ubuntu on my home server), though the shipped distro needs a bit of tweaking to get it to make the most of the hardware (I had to recompile the kernel with some additional settings). One drawback is that there are quite a few apps and config applets that aren't small form factor aware - this means that only part of the window is displayed and there are no scroll bars, so there is a certain amount of guessing about what some of the fields may say!

Apparently it is also possible to install Windows XP on the Eee PC, but the question is: why would you want to? :-)

Wednesday, September 12, 2007

Appliances

Appliances seem to be a hot topic at the moment; manufacturers are falling over themselves to come out with an appliance that sits in the data centre with minimal management overhead and unsurpassed levels of performance. These range from specialised XML accelerators to Google's appliance.

There is a natural trade off between the flexibility of a conventional server and the utility of an appliance; as a natural sceptic I remain unconvinced that in most cases it makes sense to use an appliance, but I am open to persuasion :-)

Sunday, August 05, 2007

Enterprise Architecture?

Architecture seems to be a heavily overloaded term at the moment; as Martin Fowler notes, the title "architect" can cover a wide spectrum of roles. In my own experience I have seen the term architecture used to mean everything from an abstract arrangement of requirements to a rack diagram showing the configuration and cabling of a number of servers. Accordingly, one of my stock interview questions when hiring people is "what is architecture?". I won't embarrass anyone by repeating some of the answers I have had, but in general I am surprised by the number of people claiming to be architects who struggle with this question.

Drilling down, the current buzz term is 'enterprise architecture'. However, even this term attracts a variety of interpretations, despite there being a de facto definition; the following is taken from Wikipedia:

Enterprise Architecture is the practice of applying a comprehensive and rigorous method for describing a current and/or future structure and behavior for an organization's processes, information systems, personnel and organizational sub-units, so that they align with the organization's core goals and strategic direction. Although often associated strictly with information technology, it relates more broadly to the practice of business optimization in that it addresses business architecture, performance management, organizational structure and process architecture as well.


My own experience is that organisations are putting together enterprise architectures today because that is what everyone in the industry says is required. However, in most cases, in the absence of a business strategy, enterprise architecture defaults to technical architecture - a description of an IT solution, perhaps phased over some period of time.

The situation is confused all the more by software architecture for enterprise applications. This is the architecture as captured by Sun's Certified Enterprise Architect qualification, and also as described in books such as Martin Fowler's Patterns of Enterprise Application Architecture.

Common to most of these ideas about architecture is an abstract representation of a target IT solution to some business problem. Therein lies the key difference from the rack diagram: it has no abstraction; it is a wiring diagram rather than a building plan. This is why I don't think of the rack-diagram end of the scale as architecture.

Wednesday, June 27, 2007

Build vs Buy

I was asked recently about build vs buy. Specifically what was my opinion? Without thinking about it I betrayed my software background by saying that the pendulum had swung too far towards buy, whereas with modern approaches to software engineering the risk involved in build is much less than it once was provided the requirements are well understood.

However since then I have had a chance to reflect on this and I think to answer this question requires a little more nuance. Broadly speaking there are three categories of problem to consider:
  • Problems that can be solved by a commodity off-the-shelf product.
  • Problems that can be solved by configuring an off-the-shelf product.
  • Problems for which no off-the-shelf product exists.
The first category is a no brainer: there is no sense in writing your own word processor when there are several mature products on the market. Typically any unique business requirements for such problems will be sacrificed at the altar of the cost savings of commodity software.

The other two categories are more interesting. In my experience the biggest problem in IT projects is identifying and agreeing the business requirements. The first category sidesteps this by letting the product determine the requirements, but neither of the other two helps with it. Modern products such as SAP and Siebel have such flexible data models that they can be configured and adapted to do pretty much whatever you want.

I'm not saying that organizations should go out and write their own ERP systems from scratch, but on the other hand I have worked on several projects where only a small part of the capability of such a large package is being used, so there is minimal value in buying rather than building in these circumstances.

The bottom line: if you don't understand your business requirements, build or buy makes no difference!

Sunday, October 16, 2005

Zachman Lecture

I attended a two-day seminar on the Zachman framework for Enterprise Architecture last week. I have read about Zachman before, and have even bought a book on the subject (which, as an aside, must be the worst book ever written). However, the seminar was presented by John Zachman himself, so I thought it would be a good opportunity to get the message from the horse's mouth.

I was not disappointed; I have always thought of the Zachman framework as just being a means of classifying the various artefacts created during the development and maintenance of an enterprise's architecture. However John's compelling vision is that the Zachman framework is a schema for defining the primitive elements of enterprise architecture. He describes the framework as being the basis for enterprise physics, and draws an analogy with the periodic table. This analogy also allows him to justify the fact that not all of the models that comprise the cells in the framework have been articulated yet. Continuing the analogy, he argues that any architectural artefact needed will either be one of the primitive models in the framework, or a composition of these primitive models. By separating out the independent variables in the enterprise (represented by the columns, defined by interrogatives) the enterprise can support the flexibility needed in the information age.

One of the points made repeatedly during the two days was that in the absence of architecture models there are only three ways to support change:
  1. Change by trial and error
  2. Reverse engineer the models
  3. Scrap the legacy solution and start again
Another interesting observation concerned the use of COTS; here the advice was to change the organization and/or business processes to fit the COTS package, and not vice-versa.

John's presentational style was very interesting; it was not a seminar in the sense of dialogue and discussion. It was more a high-powered intensive lecture, with a huge quantity of facts, knowledge and anecdotes delivered at breakneck speed.

All in all, it was an excellent lecture to attend. I left with a feeling that I need to change the way that I think about architecture, which is all I could really ask for.

Saturday, August 13, 2005

Back in Britain

Having spent nearly six years abroad I returned to Britain in March of this year. Things have been a bit hectic but I have a few observations:
  • Brits are obsessed with buying and selling houses; unlike anything else that you buy, house price growth is measured on a month by month basis. The press seems to think that anything other than rampant house price inflation is a sign of a weak economy.
  • Being able to walk to a nice pub that sells decent beer is one of life's simplest pleasures.
  • Bread is better in Denmark.
  • Steak is better in Texas.
I will add other observations in the fullness of time...

Saturday, February 26, 2005

Reflection: type madness or ultimate flexibility

On a number of recent occasions I have found myself looking at Java code that makes extensive use of reflection. Being somewhat conservative in my ways, I was initially suspicious of this technique, though over time I have come to appreciate the flexibility it provides. To my mind, however, it is one of the Java features most open to abuse.

Let's consider the positive side of reflection first. Using reflection I can work with classes at runtime that I don't know about at compile time. A typical application of this idea is a piece of software that supports plugins such as Eclipse. The following diagram shows the basic idea.

[Diagram: a plugin manager loading plugins, provided by a plugin supplier, via the IPlugin interface]

The plugin manager is responsible for loading plugins according to some policy (at startup, by polling during runtime, etc.). The plugin supplier creates the plugin and some kind of descriptor identifying the class in the plugin that implements IPlugin. This identifier is a string, so the plugin manager uses something along the lines of the following to load the plugin:


Class pluginClass = Class.forName(pluginClassName);
IPlugin plugin = (IPlugin) pluginClass.newInstance(); // requires a public zero-argument constructor
... // use plugin


Note that because constructors cannot be specified by interfaces, such an approach requires an agreement about the constructors provided by the plugin class that cannot be checked by the compiler. Typically this is resolved by requiring a zero-argument constructor and then defining an init method in IPlugin which is called immediately after instantiation. Alternatively, the agreement could be for a constructor with a list of parameters of specified types; pluginClass.getDeclaredConstructor() would then be invoked with the list of parameter types as its argument. This returns an instance of java.lang.reflect.Constructor, whose newInstance method can be invoked with the actual arguments to create the plugin.
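
As a minimal sketch of that constructor-based agreement, suppose the contract is a public constructor taking a single String (say, a path to the plugin's configuration). The class name, method name and argument here are made up for illustration, and a real implementation would handle each checked exception individually rather than declaring throws Exception:

import java.lang.reflect.Constructor;

public class PluginLoader {

    public static IPlugin load(String pluginClassName, String configPath) throws Exception {
        Class pluginClass = Class.forName(pluginClassName);
        // look up the agreed (String) constructor at runtime
        Constructor ctor = pluginClass.getDeclaredConstructor(new Class[] { String.class });
        return (IPlugin) ctor.newInstance(new Object[] { configPath });
    }
}
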

This use of reflection, where at application compile-time not all of the runtime classes are available, is how I think it ought to be used. So far so good. Where's the problem then?

Consider the following example, based on a little application I developed for fun recently (more of that in a later blog). Suppose you wish to use a wizard dialog to gather information in a number of stages. There is a certain amount of repetition in the structure of the dialogs (buttons for next and previous, handlers for these buttons etc) which means it makes sense to capture the commonality in an abstract superclass and the specifics in concrete subclasses - a straightforward implementation of the template method pattern. Stripping away all detail irrelevant to the discussion at hand we might have something like


public abstract class WizardDialog implements ActionListener {

    public abstract void display();

    public void actionPerformed(ActionEvent ev) {
        if (...) { // source of event ev is "next" button
            String className = nextDialog();
            if (className != null) { // null indicates the wizard has completed
                try {
                    Class nextClass = Class.forName(className);
                    WizardDialog nextDialog = (WizardDialog) nextClass.newInstance();
                    nextDialog.display();
                } catch (Exception e) {
                    // the class name is only checked at runtime, so failure must be handled here
                }
            }
        } else if (...) { // handle other events, including the previous button
            ...
        }
    }

    // return class name of previous dialog
    public abstract String previousDialog();

    // return class name of next dialog
    public abstract String nextDialog();

}


The first step in the wizard might then be implemented as follows:

public class View1 extends WizardDialog {

    public void display() {
        // populate dialog
    }

    public String previousDialog() {
        return null;
    }

    public String nextDialog() {
        return "view.View2";
    }
}


This example summarizes a number of uses of reflection I have seen over the years, from relative novices to high-profile open-source projects that really ought to know better.

There are two drawbacks to using reflection in this way; both revolve around the fact that it isn't actually necessary.

The first drawback is that, in general, using reflection carries a price in performance. It is difficult to quantify, but I have seen studies where a factor of 10 is quoted as the lower bound. In practice, for this particular example I wouldn't worry too much, as performance is not really an issue here, but in general it seems foolish to waste performance in this way.
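
If you want to get a feel for the cost yourself, a very rough micro-benchmark along the following lines will do; this is only a sketch (no JIT warm-up, no statistical rigour), and the actual ratio will vary between JVMs:

public class ReflectionCost {

    public static void main(String[] args) throws Exception {
        final int iterations = 1000000;

        long start = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) {
            Object o = new StringBuffer();                 // direct instantiation
        }
        long direct = System.currentTimeMillis() - start;

        Class cls = Class.forName("java.lang.StringBuffer");
        start = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) {
            Object o = cls.newInstance();                  // reflective instantiation
        }
        long reflective = System.currentTimeMillis() - start;

        System.out.println("direct: " + direct + " ms, reflective: " + reflective + " ms");
    }
}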

The second drawback is the more important one: by using reflection in this way, type information is being voluntarily discarded. We know that the next dialog is an instance of WizardDialog, yet we willingly throw away that information and instead pass back a string. Coming from a background in strongly typed functional programming languages such as Haskell and Miranda, I find this troubling: we are relinquishing the automatic type analysis that the compiler can perform for us. This might not seem like a big deal in a toy example, but in real applications that I have worked on this is exactly the kind of thing that leads to runtime ClassCastExceptions.

What is the alternative then? Well the point is that we know about all the classes we are working with at compile time, so we can just use them directly:


public abstract class WizardDialog implements ActionListener {

    public abstract void display();

    public void actionPerformed(ActionEvent ev) {
        if (...) { // source of event ev is "next" button
            WizardDialog nextDialog = nextDialog();
            if (nextDialog != null) { // null indicates the wizard has completed
                nextDialog.display();
            }
        } else if (...) { // handle other events, including the previous button
            ...
        }
    }

    // return instance of previous dialog
    public abstract WizardDialog previousDialog();

    // return instance of next dialog
    public abstract WizardDialog nextDialog();

}

public class View1 extends WizardDialog {

    public void display() {
        // populate dialog
    }

    public WizardDialog previousDialog() {
        return null;
    }

    public WizardDialog nextDialog() {
        return new view.View2();
    }
}


Given the simplicity of this solution, how is it that I ended up messing around with reflection in the first place? Well, laziness, normally the friend of the developer, was in this case the villain. I am using the value returned by nextDialog and previousDialog for two purposes: to identify the dialog, and to indicate (by returning null or an object) whether there is a next (or previous) dialog. This latter use is exploited by the code that displays the buttons: if there is no next dialog, then the button is labelled "Finish" otherwise it is labelled "Next". Similarly the previous button is labelled "Cancel" if there is no previous dialog. I wanted to avoid creating unnecessary objects each time I invoked nextDialog() and previousDialog(). However in this case my laziness led to a poor solution; if that was my concern I should have used a singleton pattern or cached an object reference, rather than discarding type information and resorting to strings and reflection. Still, at least my laziness gave me something to blog about!
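
For what it's worth, a minimal sketch of the caching alternative might look like the following; the lazy-initialisation style and the field name are just one way to do it:

public class View1 extends WizardDialog {

    private WizardDialog next;   // created once on first use, then reused

    public void display() {
        // populate dialog
    }

    public WizardDialog previousDialog() {
        return null;   // no previous dialog, so the button is labelled "Cancel"
    }

    public WizardDialog nextDialog() {
        if (next == null) {
            next = new view.View2();
        }
        return next;
    }
}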

Sunday, February 06, 2005

Sun Certified Enterprise Architect

I recently completed Sun's "Sun Certified Enterprise Architect for Java 2 platform, Enterprise Edition Technology" certification. There are lots of online resources for this certification, so I am not going to provide an in-depth description of what is involved. However I did learn a few things along the way which might be of interest.

The certification is in three parts: Part I is a multiple-choice exam, Part II is a project and Part III is an essay exam. Part III is essentially just a confirmation that you are the person who did the project. Part I is a bit of a pick-and-mix across various enterprise architecture topics. By and large it takes a wide-and-shallow approach rather than a narrow-but-deep one, so you really need broad, superficial knowledge of the topics in the syllabus rather than in-depth hands-on knowledge.

The project in Part II requires you to create an architecture and high-level design for an enterprise application. Some requirements are provided for the application, and obviously the architecture needs to be J2EE-based (more on this below), but otherwise you have pretty much free rein over how the application should be structured.

One of the good things about the project is that the requirements are deliberately vague, inconsistent and misleading. I say this is good, because in my experience it is an accurate reflection of reality. Of course normally such issues would be resolved by talking to the requirements team etc but for the project you just need to provide an overall interpretation of the requirements that makes sense.

Even though I have worked as an architect for several years now, the thing that really dawned on me while doing the project, was something that has also become apparent for me in my current day job: an architecture is a function of the underlying requirements and the design decisions made along the way. Without either of these it becomes very difficult to assess the quality of the architecture or even maintain the architecture. While it is usual to have the requirements documented in some way (either as a formal requirements document or in the form of use cases) it is less usual to include a complete list of the design decisions made. And yet, without them, any architecture expressed as say a UML model has very limited value. Architectural models are intended for communication (the "what" of the system) but without the "why" how can you say that the "what" makes any sense at all?

Since this is a Sun certification, the architecture has to be J2EE based. But what does this actually mean? I took a fairly conservative approach using the model 2 design pattern with various kinds of EJB. Would I have done that for a real application? Probably not; I would most likely use Spring and Hibernate, or even a JDO persistence layer. I could of course have tried that in my project, but I was more interested in getting the certificate than having an argument with an assessor about the validity of my solution. Does that observation devalue the certification? I don't think so; there are situations where the heavyweight canonical Java blueprints approach is appropriate so there is certainly value to be gained in understanding how to apply this approach.

Now that I have completed this, I am going to go over to the dark side and start looking at .Net...

Tuesday, December 14, 2004

Problem Hunting with Identify Software's AppSight

I was recently lucky enough to be invited to a demonstration of Identify Software's AppSight J2EE Application Support System. I was first made aware of their product at JavaOne where I stumbled across their stand and was intrigued by what they had to offer.

The problem that AppSight addresses is how to track down, diagnose and resolve problems in production environments. In my experience when a customer reports a problem it can be very difficult to reproduce the problem in the development environment. Log files rarely contain the detail necessary to paint the picture, due to the runtime overhead that such detail entails. Even very basic diagnosis (is the problem user error? a configuration problem? a software defect?) can be difficult for anything but the most trivial system. Throw into the mix various in-licensed components provided by multiple vendors and the picture gets very murky indeed.

AppSight's solution is based on the software equivalent of an aircraft's black box recorder. This is a software module deployed on the application server that records the behaviour of the application at multiple levels. Using this information the actual steps performed by the user can be reproduced. Identify's own web site provides lots more information, which I don't plan to reproduce, but there are a couple of points which I think are of particular interest:

  • I was immediately suspicious of the run-time cost of deploying such a module. However according to Identify themselves, there is a 2-4% overhead on the server and 1-2% overhead on the client.
  • You can define alerts based on particular events (say database connection pool usage reaching 90%). That in itself isn't so interesting. But what I think is really neat is that you can specify that the level of information gathering be increased based on a particular alert. That is, if there is a problem, automatically start gathering more information rather than waiting for it to become a real issue (a toy sketch of this idea follows below).
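
As a toy illustration of that idea - this is generic java.util.logging code, nothing to do with AppSight's actual API, and the logger name and threshold are invented - the principle is simply to raise the logging detail when a monitored metric crosses an alert threshold:

import java.util.logging.Level;
import java.util.logging.Logger;

public class AlertDrivenLogging {

    private static final Logger LOG = Logger.getLogger("com.example.connectionpool");

    static void onPoolUsage(double usage) {
        if (usage >= 0.9) {
            LOG.setLevel(Level.FINEST);   // alert fired: start gathering much more detail
            LOG.warning("pool at " + (int) (usage * 100) + "%, verbose logging enabled");
        } else {
            LOG.setLevel(Level.INFO);     // normal operation: low-overhead logging
        }
    }

    public static void main(String[] args) {
        onPoolUsage(0.5);
        onPoolUsage(0.93);
    }
}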

There are a couple of things that occurred to me after the demo that are worth considering:
  • Normally when I get a problem report, I like to use it as the basis for one or more test cases to add to my test suite. Is there any way this could be automated by AppSight?
  • On a more abstract level, is there a risk of some kind of Schrödinger's cat phenomenon? That is, could the presence of the AppSight module affect the behaviour under observation?

After the demo I was left with the feeling that I had seen a product that I would like to use in anger. Just need to find the opportunity now...

[I should point out that I have no connection with Identify Software nor have I actually worked with AppSight in a production environment.]

Thursday, December 09, 2004

Book Review: Beyond Software Architecture by Luke Hohmann

Beyond Software Architecture is a book about some of the business issues that need to be considered when developing a software system. It provides practical advice and insight into how software needs to be considered part of an overall business solution if it is to be successful. The emphasis of the book is on software products (mainly enterprise level) but there are many useful lessons to be learnt for project-based systems.

One of the key concepts introduced at an early stage in the book is the distinction between "marketecture" (marketing architecture) and "tarchitecture" (technical architecture). Under this categorization the traditional software architect (aka tarchitect) is responsible for tarchitecture and the business manager or program manager (aka marketect) is responsible for marketecture. This isn't a dry academic exercise in defining terminology: these two roles pervade the book. The underlying message is that the marketect and tarchitect need to be acutely aware of their own and each other's roles; the tarchitect needs to understand the business forces driving the product, and the marketect needs to appreciate the technical basis for the solution. Having the marketect and tarchitect roles defined is a necessary but not sufficient condition for creating winning solutions.

The book goes on to consider various issues which have implications for both marketecture and tarchitecture: for instance, which kind of product licensing model is suitable for the intended business model; what the implications are of using in-licensed technology in the solution; how the product should be branded; and so on. The point is that these are issues that the tarchitect needs to understand, be aware of, and in some cases be involved in deciding. However, these aren't the kind of thing one normally reads about in books on software architecture.

For software architects working with software products, this book is essential reading; certainly in my own case, when I previously worked as architect on a product I would have found a lot of the information in the book directly relevant to my daily work. For those working on software projects, there are still some interesting practical issues such as installation, configuration and release management. However some of the other areas covered are purely related to products, so these can be skipped.

My only minor criticism of the book is that I would have enjoyed more real-world anecdotes; there are a number of sidebars in the book presenting the author's experience of a specific case supporting the more general message in the text. These were very interesting and I would have liked to see more of these.

Thursday, December 02, 2004

Don't mix software and The Economist

I started reading The Economist recently and have found it to be an interesting weekly look at what is happening in the world, providing a slightly broader view than Time or Newsweek. As a rule it tends to stick to politics and economics, though occasionally subjects on the fringe slip through. For instance a few weeks back there was a special article about outsourcing that was very interesting.

When I got my issue this week I noticed there was a special article about the software development industry entitled "Managing Complexity". I went straight to it, intrigued to see what they had to say about my "back yard". Alas the normal erudition and clarity that I have come to expect from The Economist were notable by their absence in this article.

The article began by making the point that a large proportion of software projects go seriously wrong and according to a recent study 60% are considered failures by the organizations that initiated them. So far so good. In the light of the recent Child Support Agency fiasco it is also something that I have been thinking about.

Beyond this point things start to go wrong. According to the article the problem can be summed up as follows
"The Culprit: poor design"
That's it. The article proceeds to present a peculiarly US-centric survey of trends in software development such as open-source, agile development etc. The quotes provided are from US software tool vendors, so it is unsurprising that they are in agreement that the solution to the problem is to buy the newest tools they have to offer.

I could point out the inaccuracies and fallacies in the article in great detail (and I'm sure that this would provide some catharsis for me - if you hadn't already guessed, the article touched a nerve). However, I think it would be more interesting to speculate on some of the issues that the article ought to have delved into.

If I had a magic answer I would of course have become rich from this knowledge; however I think there are some clear areas that cause problems:
  • Poor management on the procurement side: When an organization procures a software solution, there is some onus on it to know what it wants. This may sound obvious, but I have worked on projects where a high-level management decision has been made that a group of users should use a new system that is to be developed, but when it comes down to it no one actually knows what the system is supposed to do. Large organizations often have a budget=power culture, so once the budget has been approved the procuring manager might not hang around very long, instead climbing further up the greasy pole.
  • Poor management on the vendor side: Software project management is an area strewn with literature (good and bad), and I don't intend to repeat any of it here. However, one area often overlooked is the need to manage the expectations of the procuring organization. The vendor's management needs to be intimately aware of the maturity level of the procuring organization and factor this into planning. Related to this, a less mature procurer will often come up with new requirements as its understanding of what technology can offer improves during the course of the project. Vendor management has the responsibility to explain to the procurer that expanding the scope of the project by introducing the new requirements has consequences for deadlines and budgets. Note also that increasing the budget to accommodate such an expansion will inevitably be reported by the press as a budget overrun.
  • Overhyped technology: The one thing that unites otherwise warring software vendors is the claim that their new product will solve all of your development problems, despite all previous experience to the contrary. Sometimes this hype gets totally out of control and leads to inappropriate technology choices for a project - witness the recent backlash against the use of EJBs.
These are the ones that occur to me straight off the top of my head. As I think of others I will add them to this list.

Wednesday, November 10, 2004

Teaching Design Patterns

I have a long standing lurking suspicion that design patterns are inherently difficult to teach; my own experience both teaching and using design patterns indicates that they are only really appreciated by students who are able to relate the design pattern in question directly to a program they have developed. Otherwise not only do students have difficulty understanding the design pattern, but it becomes very difficult for them to apply it as they have difficulty recognizing the appropriate situation.

Wednesday, October 20, 2004

Houdini - The Most Stupid Dog in The World

Years ago Yvonne and I lived next door to a dog called Houdini. He was a beautiful looking boxer, just a puppy when we first made his acquaintance. Unfortunately despite his charm and good looks, Houdini was undoubtedly the most stupid dog in the world.

The reason for this statement (and let's face it, dogs aren't renowned for their intelligence so it isn't a title bestowed lightly) is an incident that occurred about a year after we moved into the house next to his. It was his habit to lean his front paws on the shared fence in our back garden and peer over (he was very nosey). One day while in this position, he noticed something flapping around from the corner of his eye. Eventually he couldn't control himself, had a nip at it and then emitted a huge yelp - it was his own tail that had been wagging away. His owner Pam took him to the vet where he was bandaged up and sent home suitably chastened (or so we thought).

The next day, in the same position in the back garden he once more noticed something flapping round, so he again took a nip at it and emitted an even louder yelp; once again his own tail was the victim of his exuberance. Pam took him to the vet again, who as well as patching him up, mounted a conical collar on him so that he wouldn't be able to see his tail.

The following day, pleased as punch Houdini was in the garden again. This time he was not troubled by objects flapping in the corner of his eye. Unfortunately while he was in the garden there was a sudden downpour. Houdini, being a uniquely gifted dog decided to stare straight up into the sky during this downpour. The result of this was that his collar rapidly filled up with water and would undoubtedly have led to his drowning if an alert Pam had not charged out of her house at high speed and knocked Houdini's head forward causing the collar to empty.

After that Houdini was not allowed out in the rain on his own until his tail healed.

I challenge anyone to come up with a more stupid dog than Houdini.

Thursday, September 30, 2004

Policy-Based Design

I recently had the pleasure of reading an article written by one of my Systematic colleagues, Jan Reher, entitled "Policy-Based Design in the Real World" published in the C++ Users Journal. The article shows an example from a real application of the use of policy-based design, as described in Alexandrescu's book Modern C++ Design.

Policy-based design allows functionality to be parametrized in a number of independent dimensions. The textbook solution is an elegant application of C++ templates and multiple inheritance. The point of Jan's article was to demonstrate that the textbook solution can actually be used in a real application.

I found the article very interesting for a number of reasons. One of Jan's concerns was the real-world utility of policy-based design. I see this as a broader issue: many of us are familiar with the situation where an elegant textbook solution evolves into an unmanageable mess when applied in practice, so it is always good to get real validation of such a solution. On a practical note, since I have worked mainly with Java for the last few years, it was interesting to try to recast Jan's solution in Java (pre J2SE 5.0's generics). What became rapidly apparent was that the solution would be significantly more clunky in Java. Actually, even with generics, the Java solution would be less concise than the C++ one due to the use of multiple inheritance in the C++ solution.
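
For what it's worth, the rough shape of what I ended up with when approximating policies in Java is sketched below. The policy interfaces and the Repository host class are invented for illustration; the policies become ordinary objects passed to the constructor rather than template parameters composed at compile time, which is exactly where the clunkiness (and the loss of conciseness) comes from:

// Two independent "policy" dimensions, expressed as interfaces
interface StoragePolicy {
    void store(String key, byte[] data);
}

interface CheckingPolicy {
    void check(byte[] data);
}

// The host class is configured along each dimension independently
class Repository {

    private final StoragePolicy storage;
    private final CheckingPolicy checking;

    Repository(StoragePolicy storage, CheckingPolicy checking) {
        this.storage = storage;
        this.checking = checking;
    }

    void save(String key, byte[] data) {
        checking.check(data);      // behaviour varies along one dimension...
        storage.store(key, data);  // ...independently of the other
    }
}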

Sunday, September 19, 2004

JXTA at FWJUG

Attended the monthly meeting of the Fort Worth Java Users Group last Thursday, where we learnt about JXTA and P2P solutions from Daniel Brookshier, who is part of the JXTA development team and has written a book on the subject. The talk was very good, and there were a couple of things that I really took away from the meeting:

- JXTA is not a Java specific technology. It is a protocol for P2P communication, and there are already implementations in several languages.

- P2P is really a paradigm shift in distributed computing, compared to a traditional client-server architecture. This is really blowing my mind a bit, and I still don't think I have totally got my head round it.

One of the examples that Daniel referred to several times was 312inc.com's product lean on me that performs backup using a P2P approach. This was a good example of where a bit more thought up front yields a massive saving in initial investment since a traditional approach would involve investing in backup servers etc.