Nando's Weblog

delirios e ilusiones

Archive for February, 2008

Underestimating concurrency issues

More often than you can imagine, I have seen systems where concurrency problems were severely underestimated. Even people I considered quite intelligent, when I pointed out the lack of concurrency control, argued: "But the chances are quite low."

Or even worse: a kind of optimistic locking was implemented, but the timestamp check was done through a SELECT statement prior to the UPDATE, without locking the row. Again, the argument was: "It is really improbable to get an update in between."
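To make the difference concrete, here is a minimal sketch (using SQLite and an invented account table, not any system mentioned above) of optimistic locking done safely: the version check lives inside the UPDATE's WHERE clause, so check and write are one atomic statement instead of an unlocked SELECT followed by a blind UPDATE.

```python
import sqlite3

# Hypothetical schema for illustration: one row with a version counter.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE account (id INTEGER PRIMARY KEY, "
    "balance INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO account VALUES (1, 100, 0)")

def update_balance(conn, account_id, new_balance, expected_version):
    """Succeed only if nobody has updated the row since we read it."""
    cur = conn.execute(
        "UPDATE account SET balance = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_balance, account_id, expected_version),
    )
    # 0 rows touched means our read was stale: the caller must re-read
    # and retry, instead of silently overwriting someone else's update.
    return cur.rowcount == 1

# A writer that read version 0 succeeds and bumps the version to 1.
assert update_balance(conn, 1, 150, expected_version=0)
# A second writer that also read version 0 is now rejected.
assert not update_balance(conn, 1, 999, expected_version=0)
```

With the separate-SELECT variant, both writers would see version 0 and both UPDATEs would succeed, and the second would silently destroy the first, which is exactly the lost update the timestamp was supposed to prevent.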

The problem is simple: low probability does not mean impossible, and when it does happen, it is especially tricky to find the cause of an error in a system that has been running without any problem for a long time.

Fixing a bug has two parts: detection and correction. Of the two, detection is the harder, and concurrency problems are (precisely because of their low probability) very hard to reproduce.

Thinking that it isn't worth the effort to solve them, given the low probability, is like playing Russian roulette with a single bullet.

Hibernate now makes everything easier. Be aware, however, that EntityManager is not thread-safe.



Highly performant (but useless) systems

Many times I have heard people talk about the performance problems of defining constraints at the DB level. Anyone can understand that no matter how fast a response is generated, it is totally useless if it is wrong. Yet some people think that "it depends", that it is just a matter of finding the right balance between performance and data integrity.

For those who are still not sure about this point, this example might be helpful.

The company was an electricity company.

Just a few years before, it had paid several million for a huge consulting project that implied a broad reengineering of its processes and information systems.

Whilst relational databases were already an industry standard, the solution was implemented on a non-relational database (Adabas). In order to improve performance, one-to-many relationships were implemented as arrays, with no FK constraints. The result was a very performant system, and everyone was proud of it.
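For readers who have never watched a declared constraint earn its keep, here is a small sketch (the office/meter schema is invented for illustration, and SQLite stands in for a real production database): with an FK constraint in place, the database itself refuses to assign a child row to a parent that does not exist.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per connection
conn.execute("CREATE TABLE office (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE meter (id INTEGER PRIMARY KEY, "
    "office_id INTEGER REFERENCES office(id))"
)
conn.execute("INSERT INTO office VALUES (1)")
conn.execute("INSERT INTO meter VALUES (10, 1)")  # valid office: accepted

try:
    conn.execute("INSERT INTO meter VALUES (11, 99)")  # office 99 missing
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the inconsistency never enters the database
```

The check costs an index lookup per write; skipping it buys that lookup back at the price of silently accumulating rows like meter 11.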

But after having seen many inconsistencies, I made a small program in Natural. The input was a screen where the user declared the FK constraints; from that, the source of a Cobol program was automatically generated, compiled and run in order to perform the checks and produce a report of the inconsistencies.
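The idea translates to any stack. Here is a rough Python analogue (not the original Natural program, and with a hypothetical schema): from a declared parent/child relationship, generate and run a query that reports child rows pointing at non-existent parents.

```python
import sqlite3

def orphan_report(conn, child, fk_col, parent, pk_col):
    """Generate and run a check for one declared FK relationship:
    return child rows whose foreign key matches no parent row."""
    sql = (
        f"SELECT c.* FROM {child} c "
        f"LEFT JOIN {parent} p ON c.{fk_col} = p.{pk_col} "
        f"WHERE p.{pk_col} IS NULL AND c.{fk_col} IS NOT NULL"
    )
    return conn.execute(sql).fetchall()

# Demo data with one deliberately broken reference (office 99 does not exist).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE office (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE meter (id INTEGER PRIMARY KEY, office_id INTEGER)")
conn.executemany("INSERT INTO office VALUES (?)", [(1,), (2,)])
conn.executemany("INSERT INTO meter VALUES (?, ?)", [(10, 1), (11, 99)])

inconsistencies = orphan_report(conn, "meter", "office_id", "office", "id")
# Only meter 11 is reported: it points at the missing office 99.
```

In the original, the generated program was Cobol rather than SQL, but the check was the same anti-join over the declared relationships.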

We all know how bad inconsistencies are and how they tend to reproduce themselves. What we might not always be aware of is the business and economic consequences they may have.

As said before, the reengineering improved many processes, not only internally but also the communication among them. An electricity company has many offices distributed across the service area, and each of these offices performs maintenance and other kinds of tasks. So the Operations and CRM systems fed each office the service orders it needed to carry out on a given day.

One of these types of orders was to read the meters in order to generate the invoices.

The problem is that some meters were assigned to invalid offices.

This means that these meters were never included in any service order to read them.

Yes, you are right. No reading means no invoice was being generated. Those customers were getting electricity for free, and there were many of them. Happy customers, of course.

I hope inconsistencies will no longer be seen as a merely technical problem. They can mean huge amounts of money.