Monday, February 27, 2006

System Integration for Dummies


Working on large-scale projects, you learn that “putting things together” is one of the riskiest areas in the whole project lifecycle, yet system integration is somewhat underestimated both as an activity and as a discipline. Anyway, after participating in a large project where barbarian hordes of developers needed to integrate their own components a week before the official release date, nobody wants to repeat the same mistake twice. The current landscape in system integration consists of two leading approaches.

Prescriptive approach
The traditional approach focuses on an early definition of stable APIs at the boundaries between the different project components (especially for those that need to be used by different teams). The idea here is to allow development of the different components to proceed separately for as long as possible. Overall project planning proceeds accordingly: development and integration have well defined, scheduled activities.

This approach has some known drawbacks: early defined APIs can’t be completely stable, due to evolving scenarios or requirements, better knowledge acquired at implementation time, and so on. Moreover, APIs are never completely clear (or I never had a chance to see such a beast), so you could end up having two perfectly legal implementations of an agreed API which are not compatible in practice. Discovering such differences at integration time might end up in a big mess: deadlines are closer by then, and patching the APIs or the depending implementations under schedule pressure normally harms code quality and overall system stability.
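To make the point concrete, here is a hypothetical (and deliberately simplified) sketch in Java; the interface and class names are invented, not taken from any real project. Both implementations honour the agreed signature, yet a client coded against one of them will blow up when paired with the other.

    // Hypothetical example: an API agreed on paper that still allows
    // two incompatible interpretations.
    class User { }

    interface UserRepository {
        // The signed-off blueprint says nothing about missing users.
        User findById(String id);
    }

    // Team A: returns null when the user is not found.
    class CachingUserRepository implements UserRepository {
        private final java.util.Map<String, User> cache = new java.util.HashMap<String, User>();

        public User findById(String id) {
            return cache.get(id); // may be null
        }
    }

    // Team B: assumes the user always exists and throws otherwise.
    class DatabaseUserRepository implements UserRepository {
        public User findById(String id) {
            User user = load(id);
            if (user == null) {
                throw new IllegalStateException("No user with id " + id);
            }
            return user;
        }

        private User load(String id) {
            return null; // real JDBC lookup omitted
        }
    }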

On the other hand, the attention and ceremony put into API definition (normally documentation and blueprints are agreed upon and signed off by the different parties involved) make APIs difficult to change before integration time. One team might discover an extra need, but the change request on the API must then be agreed with several parties (which normally don’t love changing their ongoing design, or might find it a great chance to mask underlying schedule slips…), making this change process quite costly, even if it doesn’t affect code and system behaviour yet. I am tempted to call this approach “meeting room integration” ‘cause in large organizations most of the effort is spent around a table.

More generally, the weak point of the prescriptive approach is the same as that of the waterfall development process: if everything is done properly at the right time, integration might proceed smoothly; but if a change is required along the way, costs might rise out of control.

Continuous Integration
The opposite approach was introduced by Martin Fowler’s article and became a building block of the XP and TDD approaches. Continuous Integration focuses on integrating systems as early as possible: the leading principle is that integration is a high risk area and integration costs have to be paid anyway, so integrating early helps mitigate the associated risk. Different components are then integrated while still in development, in an update-build-deploy-test cycle which can be tuned to include the latest version, the latest stable version, or the last officially released version from the SCM system. The whole system is built and deployed in a “neutral” test environment and reports of the current state of the system are sent to all the team members involved.

Tests (automated tests) play a great role in ensuring system stability: having a comprehensive integration test suite helps spot potential conflicts between components as early as possible, giving the project management a better grip on the overall status of the system. Comparing this approach with the previous one, we might notice that a significant part of the integration tests could be developed earlier, without waiting for the overall system to be ready.
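Just to give a flavour of what such a test can look like, here is a minimal JUnit sketch; the component names (PricingService, TaxCalculator) are invented for illustration, and in a real setup the two classes would come from different teams and be exercised against the system deployed in the neutral environment rather than instantiated inline.

    import junit.framework.TestCase;

    // A minimal sketch of an integration test run by the
    // update-build-deploy-test cycle. All names are invented.
    public class PricingIntegrationTest extends TestCase {

        public void testNetAndTaxComponentsAgreeOnTotals() {
            // Wire the real components together, not mocks: the whole
            // point is to catch conflicts between teams' code early.
            PricingService pricing = new PricingService(new TaxCalculator(0.20));
            assertEquals(120.0, pricing.grossPrice(100.0), 0.001);
        }

        // Stand-ins for two independently developed components.
        static class TaxCalculator {
            private final double rate;
            TaxCalculator(double rate) { this.rate = rate; }
            double taxOn(double net) { return net * rate; }
        }

        static class PricingService {
            private final TaxCalculator tax;
            PricingService(TaxCalculator tax) { this.tax = tax; }
            double grossPrice(double net) { return net + tax.taxOn(net); }
        }
    }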

Of course there are some drawbacks with this approach too: relaxing stability constraints doesn’t necessarily mean “don’t design the APIs”; the cost of change is paid differently, but it’s still a cost, even if the recent refactoring additions to common IDEs have introduced a significant improvement in this area. Starting integration tests early also increases the need for physical resources. If the hardware is a big cost on the overall project (or you have to rent it somehow), there might be constraints on test hardware availability. The same problem may arise from organizational constraints: if the hardware is provided by another department you might have less control over hardware planning.

Another subtle consequence of this approach is that it imposes iterative development: you need to have some working code to integrate as early as possible, so if some team is doing waterfall development inside its boundaries you’ll discover problems at the very last moment anyway. This doesn’t sound like a drawback (and in fact it’s normally a positive side effect), but it requires the project management to put a little control into the process definitions of the different teams too.

Back to my (sad) real world
I bet most of the things I wrote here sound pretty obvious to most readers: whether you favour one approach or the other, or a combination of the two, you know the underlying principles of both, or at least one. If we have to agree on an integration strategy, these are the fundamentals. Then we can agree on a trade-off that best suits the current project scenario.

To my surprise, I realized that this is quite an optimistic assumption. On a large scale project, involving several separate organizations, I had the pleasure of being taught that “Believe me, you do not want an integrated test environment: if something goes wrong you’ll never understand where the problem is…” and that “running a couple of tests here, and a couple of tests there” was the best approach. After this, options a) prescriptive, b) continuous and c) a combination of the two were all against the law. Option d) was to put the sentence in the dumbest sentences heard, and it turned out to be the compelling reason for the whole post.


Saturday, February 25, 2006

Dumbest sentences heard

I’ll maintain a list of the dumbest phrases I have encountered in my job experience. If you recognize yourself in the author… I owe you a beer!

“Hi, I sent you an e-mail with three different versions of the xxx.java source enclosed. One of them actually works. Can you please deploy the right one in production, please?”

- a developer -

“Believe me, you do not want an integrated test environment: if something goes wrong you’ll never understand where the problem is…”

- a guy trying to deny the importance of the test environment they “forgot” to provide -

“We just discovered that we are 60 use cases late…”

- a project manager at a milestone meeting -

“Today’s standards are tomorrow’s obsolescences”

- a guy trying to justify the choice of a custom protocol instead of a standard one, on a large scale integration project -

Thursday, February 16, 2006

Type Checking and Code Smells


I’ve been caught for a while in the endless debate between strongly typed languages, such as Java, and loosely typed languages, such as Python. The main question could roughly be reduced to “is strong type checking worth the price?”. For Python supporters the answer is clearly “No!”, ‘cause the Java syntax burden doesn’t bring that much value. I usually am a strong supporter of strong type checking (even if I never used Smalltalk, for me everything is an object), but in most Java projects I’ve seen, strong type checking was never enforced, so developers were just dealing with a heavier syntax without enjoying its benefits.

Testing can introduce some degree of safety at deployment and release time, ensuring some more confidence in the produced code, but to get there you’re supposed to introduce a full-featured test suite, to be really confident in software stability in the long term. Type checking, though, was really meant for another goal: bullet-proof (or fool-proof) design. The best design example I remember was the 3.5” floppy: even if it approximately looked like a square, it was not, and there was only one way it could be inserted in the slot (unless you were really creative, of course…). I used this metaphor a lot of times when teaching Java and OOP, but I am starting to wonder whether it’s still valid.

I usually get extremely suspicious (and worried, if it’s a project I am working on) when I see two things in code: empty constructors (and a lot of setters) and a huge list of String attributes. These two smells often come along together, so the resulting aroma is a perfect blend. Ok, one thing has changed from some years ago: a good IDE now shows more and more information about attributes and parameters, so – sometimes – the pessimistic control offered by strong type checking can be surrogated by an optimistic naming convention (after all, developers are supposed not to be that stupid). Still, this is not an argument that can save empty constructors (and the resulting partially instantiated objects) from being intrinsically evil, and I can’t believe that none of those String attributes is a potential class.
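Here is a deliberately smelly sketch of the blend I have in mind; the class and its fields are invented for illustration only.

    // An invented example of the two smells together: an empty
    // constructor, a pile of setters, and everything typed as String.
    public class Customer {

        private String name;
        private String birthDate;    // "1970-01-01"? "01/01/1970"? who knows
        private String creditLimit;  // a number hiding inside a String
        private String countryCode;

        public Customer() {
            // a partially instantiated object waiting to happen
        }

        public void setName(String name) { this.name = name; }
        public void setBirthDate(String birthDate) { this.birthDate = birthDate; }
        public void setCreditLimit(String creditLimit) { this.creditLimit = creditLimit; }
        public void setCountryCode(String countryCode) { this.countryCode = countryCode; }

        // Nothing stops a caller from doing:
        //   Customer c = new Customer();
        //   c.setCreditLimit("01/01/1970"); // perfectly legal, utterly wrong
    }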

But to really get some benefit from a more OO refactoring of the code you need behaviour, and if you’re squeezed between a presentation framework which doesn’t enforce OO data, such as Struts, and a lazily coded DAO layer, there’s not much behaviour left to add to your classes. So – like an old man who feels out of fashion – I would look at all those String attributes in the code and feel intimately disgusted, but then, instead of invoking a refactoring session, I’d rather have a glass of wine, thinking that the old times were better…
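For the record, this is roughly where such a refactoring could go, assuming the surrounding frameworks let you get there; again, the class name is invented and the example is only a sketch.

    // One of the "potential classes" hiding behind a String attribute,
    // with its own validation and a bit of behaviour of its own.
    public class CreditLimit {

        private final java.math.BigDecimal amount;

        public CreditLimit(java.math.BigDecimal amount) {
            // No empty constructor: an instance is valid or it doesn't exist.
            if (amount == null || amount.signum() < 0) {
                throw new IllegalArgumentException("A credit limit must be a non-negative amount");
            }
            this.amount = amount;
        }

        // Behaviour lives with the data instead of being scattered around.
        public boolean allows(java.math.BigDecimal orderTotal) {
            return amount.compareTo(orderTotal) >= 0;
        }
    }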

An article on the Python vs Java debate by Bruce Eckel
Martin Fowler's code smell "Data Clump"
and ConstructorInitialization



Thursday, February 09, 2006

MDA Skeptic - Part 5: Maintenance cycle


Here is one of the trickiest areas, but I’d better clarify my point of view first. I work for a consulting company: we normally deal with enterprise software; sometimes we craft it, sometimes we test it, sometimes we coach/mentor an internal development team, sometimes we just audit the development process. In all of these situations the final owner of the produced code is the customer; there is an obvious maintenance tail at the end of the project, but our team is intended to be a “task force”, meaning that it’s not there to stay.

This raises a couple of possible problems, involving methodology and tools. In a “pure” software company, MDA is much more of a permanent asset, so some of these drawbacks will have a minor effect.

Methodology legacy
Adopting MDA is a big mindset shift for a development team: all of the usual operations for all the roles involved are different. You can be radical and forget about all the MDA stuff done by an external team: in the end the PIM produces the PSM, which produces code, and you just move the focus back to the code. It’s like moving back from industry to artisanship, but if you’re just “decorating” the legacy MDA-engineered application, it can make sense (though you’ll pay more if you want to move back to MDA, for example to add some new functionality). In this case, the model and the architecture are just assets of the development task force, which can deliver proven yet custom applications in a shorter time.

Finding the hot spot
If you decide instead to adopt the full MDA round-trip for the maintenance phase as well, you’ll have to be aware that the whole maintenance methodology gets scrambled. For any reported bug, the problem determination cycle must determine whether the error resides in the PIM, in the PSM, in a (custom) transformation, or in a protected area. Each case leads to a different way of resolving the bug while maintaining the overall system integrity. You’re probably going to pay back part of the savings of the early stages of the project during the maintenance phase, ‘cause you need more skilled developers to get the job done properly. On the other hand you’ll probably have fewer bugs, in this project or in the following ones, if you correctly adjust the models and the mappings.
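To give an idea of why the place where a fix lands matters, here is a sketch of what generated code with a protected area typically looks like; the marker syntax varies from tool to tool, and the one below (along with the class itself) is purely invented.

    // Sketch of PSM-generated code. A fix hand-coded in the generated
    // part is lost at the next regeneration; a fix inside the protected
    // area survives it; a fix in the PIM/PSM/transformation changes the
    // generated part itself.
    public class InvoiceService {

        // GENERATED from the PSM - do not edit
        public void approve(Invoice invoice) {
            validate(invoice);
            // PROTECTED AREA BEGIN [InvoiceService.approve.body]
            // hand-written logic goes here and survives regeneration
            // PROTECTED AREA END
        }

        // GENERATED from the PSM - do not edit
        private void validate(Invoice invoice) {
            if (invoice == null) {
                throw new IllegalArgumentException("invoice is required");
            }
        }
    }

    // Another generated class, reduced to a stub for this example.
    class Invoice { }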

Tools legacy
Regardless of the chosen approach (partial or full round-trip), one of the greatest boosts to development speed is given by the toolset used. Unfortunately, there’s still a big gap between the official academic MDA approach and the real-world offering of MDA tools on the market. Despite the OMG efforts towards interoperability, it looks like most of the tools have still chosen a “proprietary approach”, somehow forcing the development process to be tailored around the specific tool (which sounds a bit ironic, ‘cause MDA aimed to achieve platform independence). Moreover, most MDA tools look pretty expensive, so the vendor lock-in might even be enforced by your boss: “You asked all of this money for this tool? Now you’re going to use it till the end of your days to have it repaid!” (Lightning & thunder).

Diamonds are forever
I haven’t really found any information about somebody who switched from one MDA tool to another one while still inside an MDA process; I suppose it might be interesting to know. I haven’t seen such mobility in UML tools either, probably for price reasons as well, so we might expect a similar scenario in the MDA playground: a few big competitors defining de-facto standards, as long as open source doesn’t show up as a real, full-featured, viable option.
