Monday, 4 June 2012

Score your applications

One of the tasks in an Enterprise Architecture exercise is to get an overview of all applications in the IT landscape. Every application can be described with some basic details like name, description, purpose, product owner, technical contact, number of users, vendor, etc. 
Next to these basic details we can gather some indicators like:

  • Application Health: Indicates whether the application has a good architecture, is well known, is well documented, and is under control in its development life cycle
  • User efficiency: How well does the application support its users in the tasks (processes) at hand.
Knowing which applications are good, and which ones to phase out, is important to keep your IT landscape free of weeds. Note that changing the IT landscape is never a goal in its own right; however, when business initiatives arrive you need to know the state of the application layer. 

1. Application health indicator

The application health is not one particular aspect but rather a combination of different factors. 
Originally I started out with a few factors like technical complexity, knowledge, etc. I figured more people must have done this before me, and indeed I found a number of approaches. 

I believe there are two major factors:
  1. Application design
  2. Development readiness

1.1 Application design

How good is the design of an application? This is something the industry has focussed on before and we recognize the following design factors:
  1. Rigidity: Rigidity is the tendency for software to be difficult to change, even in simple ways. A design is rigid if a single change causes a cascade of subsequent changes in dependent modules. The more modules that must be changed, the more rigid the design.
  2. Fragility: Fragility is the tendency of a program to break in many places when a single change is made.
  3. Immobility: A design is immobile when it contains parts that could be useful in other systems, but the effort and risk involved with separating those parts from the original system are too great.
  4. Viscosity: a viscous project is one in which the design of the software is difficult to preserve. We want to create systems and project environments that make it easy to preserve and improve the design.
  5. Needless complexity: A design smells of needless complexity when it contains elements that aren't currently useful. 
  6. Needless repetition: Cut and paste may be useful text-editing operations, but they can be disastrous code-editing operations. (DRY)
  7. Opacity: Opacity is the tendency of a module to be difficult to understand.

1.2 Development readiness

This factor indicates how easy (or hard) it is to pick up or continue development. We can divide this into a number of sub-factors:
  1. Knowledge: How much knowledge do we have readily available? Both in terms of documentation and in people's heads.
  2. Resource Readiness: How fast and smoothly can we get a development team up and running with the proper development tools, (source code) artefacts, etc.
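To make the combination concrete, the health indicator can be sketched as a plain average of sub-factor scores. The factor names come from the lists above; the 1–6 scale, the example scores, and the equal weighting of the two major factors are assumptions for illustration, not part of any standard.

```python
# Sketch: combine sub-factor scores (1 = really bad, 6 = good) into an
# application health indicator. Scores and equal weights are illustrative.

def average(scores):
    """Plain average of a dict of factor scores."""
    return sum(scores.values()) / len(scores)

design_scores = {          # the seven design factors
    "rigidity": 4, "fragility": 3, "immobility": 5,
    "viscosity": 4, "needless_complexity": 5,
    "needless_repetition": 2, "opacity": 3,
}
readiness_scores = {       # the two development-readiness sub-factors
    "knowledge": 4, "resource_readiness": 5,
}

# Health as the mean of the two major factors (assumed 50/50 weighting)
application_health = (average(design_scores) + average(readiness_scores)) / 2
print(round(application_health, 2))
```

In practice you would tune the weights per organisation; the point is only that the indicator is a composition of sub-factor scores, not a single measurement.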

2. User efficiency indicator

We can use the following four factors to determine this indicator:
  1. Conceptual Complexity: How complex are the application's concepts?
  2. Knowledge: How much knowledge of the application do we have readily available?
  3. User friendliness: How easy is it to use the application?
  4. Learning curve: How long does it take to get people working well with the application?
Note that it is somewhat odd to speak of how efficient an application is; usually one would ask "for what purpose?". Applications do different things for different people running different processes. However, imho, it is possible to step back from the specific people and processes and assess these factors from a high-level point of view. 
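Taking that high-level point of view, the same averaging idea gives a user efficiency score per application, which is what you need for a landscape overview. The applications, the factor scores, and the 1–6 scale below are invented for illustration.

```python
# Sketch: user efficiency per application as the average of the four
# factors (1 = really bad, 6 = good). Applications and scores are made up.

apps = {
    "CRM":     {"conceptual_complexity": 3, "knowledge": 4,
                "user_friendliness": 5, "learning_curve": 4},
    "Billing": {"conceptual_complexity": 2, "knowledge": 5,
                "user_friendliness": 3, "learning_curve": 2},
}

efficiency = {}
for name, factors in apps.items():
    efficiency[name] = sum(factors.values()) / len(factors)
    print(f"{name}: {efficiency[name]:.2f}")
```

Lining these scores up side by side is the "overview of all applications" the exercise is after: you can immediately see which applications support their users well and which ones do not.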

3. Scoring

How does one score these indicators? It depends on whether you have an established Enterprise Architecture body in your organisation or not. Suppose you are at the start of an ambitious project and you want to assess the application landscape in a pragmatic manner. In that case I would use a relative scoring mechanism, where every application gets a score relative to the others. 
For example: take one of the factors mentioned above and lay it down on the table. Gather the people who know about that subject ("subject matter experts", to use an expensive term). Create 6 boxes or places on the table and ask the people to take every application and place it in one of these boxes, where 6 is a good score and 1 is a really bad score. You'll see that people move an application now and then as they work through the list; that's what makes it a relative score. 
Is this a correct scoring mechanism? Well, yes, as long as it stays within the boundaries of that company. It tells you that application X scores better on fragility than application Y. 
As far as I know there is no industry reference, so outside the company's borders the scoring would be incorrect.
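The table exercise can be mimicked in code: per factor, each application is placed in a box from 1 to 6, and moving an application to a different box is just an update. The applications and placements below are invented for illustration.

```python
# Sketch of the six-box relative scoring session: per factor, each
# application sits in a box from 1 (really bad) to 6 (good).
# Applications and placements are invented.

boxes = {factor: {} for factor in ["fragility", "rigidity"]}

def place(factor, app, box):
    """Place (or move) an application into a box for one factor."""
    assert 1 <= box <= 6
    boxes[factor][app] = box

place("fragility", "CRM", 5)
place("fragility", "Billing", 2)
place("fragility", "Billing", 3)   # the experts reconsider and move it up

# Relative ranking for one factor, best first
ranking = sorted(boxes["fragility"], key=boxes["fragility"].get, reverse=True)
print(ranking)
```

Note that the output only ranks applications against each other within this company, which is exactly the point (and the limitation) of a relative score.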