Monday, December 10, 2007

Back from the Rome Java Day 2007

… So I am back from the Java Day in Rome as well. It was my first time ever there. And I was also a speaker (and obviously I was late with my slides…), so I feel like I missed quite a big part of it.

I followed Luca Garulli's presentation about the Roma Framework, which in some ways is a competitor of Grails (the subject of my speech). Roma has some interesting features, like the built-in user profiling module and a fancy Echo2-based web interface.

Even if I was dead hungry when I gave my speech, I managed not to think about the food and to keep the talk reasonably enjoyable. Anyway, Grails is such an effective tool, and so full of good ideas, that even in a 10-minute demo the audience caught the tool's potential, and getting to the end was a piece of cake.

This was one of the closing events; after that we went out to eat with all the staff and the speakers in a typical restaurant nearby, and had a good time together. This made the perfect ending for a nice day: despite being a free event, basically run by volunteers, the result was absolutely good (thanks to Mara and all the staff, who did a hell of a job), with more than 1000 attendees this year.

By the way, the presentation slides are already online, and mine can be downloaded here

Thursday, December 06, 2007

Doing Agile In Italy: Paperworks

At the last Agile Day in Bologna we had quite a few demonstrations of how to make lo-fi tools (such as Post-its, story cards, etc.) work to master an Agile development process. Fascinated by Tim McKinnon's presentation, I went to a local office supply shop to find some tools to play with. Unfortunately, when I started describing Tim's nylon attachable pocket board, the girl at the shop started looking at me quite …oddly (ok… if I describe it this way, nobody will understand, except the few who attended Tim's speech, but I assure you I did my best. I also looked for pictures of "the thing" on the net, but I couldn't find one – …maybe because I don't know its name).

Anyway, I went to the shop. And what I asked for seemed odd. There was something similar, like cork boards – in this case you need some pins to attach story cards to the wall – or something else that could contain, but not show, the story cards. I then shifted to Post-it notes. Which are cool, as long as you have the right board to stick them on. They also have some drawbacks: since they're sticky, you can't write anything on the B-side (while you do that on story cards or CRC cards). I also bought some Super Sticky Post-its. They stick everywhere. But there's a smaller choice of colors, so it feels like you're in Coldplay's "Yellow". I felt a bit frustrated, but also eager to squeeze the best from what I had.

Within a couple of days I had a chance to play with my Post-its. My task was to define a development process suitable for a SOA environment: agile enough to be productive, formal enough to be used in a banking institution. We started playing with our Post-its and … it worked. We were able to spot contradictions and missing pieces in a very short time. Writing the document will be the long and tedious part, but I guess in this case it's unavoidable: agility still has to cope with the suit-and-tie mismatch.

Wednesday, December 05, 2007

Dilbert on Agile

Agile is really making it to the mainstream! Even Dilbert is talking about it now...

Back from the Agile Day 2007

Quite a busy ten days: I just came back from the Italian Agile Day in Bologna and started preparing myself for the following Java Day, so I didn't have much time left for blogging.

I was pretty satisfied with the Agile Day overall. The idea of an "open" conference – free subscription or PayPal donations, but no official sponsors – gave the whole conference a cozy feeling. Paying for lunch allowed for the best buffet ever seen at a techie meeting (and this is heavily related to the conference location).

I attended Tim McKinnon's speech, and I liked the idea of the Star tasks. Too many times good ideas from the team are dropped or censored because "we've got to finish the job", ignoring possibilities to just finish earlier. Antonio Terreno then presented another type of "star idea": the definition of a meta-engine to handle all the workflows in the application his team was working on. In that case, though, the idea was one of Martin Fowler's, on a ThoughtWorks project, so censorship is a little bit harder to apply. But basically the underlying problem is the same: the balance between doing one thing over and over and finding a way to do the same thing fast is often too biased towards simply doing, making the job not only tedious and frustrating, but also less productive.

I then took part in a small Open Space led by Luca Minudel about self-organizing teams, where he dropped some hints about where to learn more. Marines are an example of a self-organizing team (you can't wait for orders when you are behind enemy lines), and surprisingly Luca showed a book I had at home about how to raise kids by saying "no". Actually, my wife is reading it. I'll be next, shortly…

Francesco Cirillo from XPLabs pointed out that many teams focus too much on Agile practices, ignoring that fundamental skills are needed. He challenged the audience with his no-if campaign, backed up by T-shirts. I couldn't agree more. A vast majority of companies focus their training only on the language side, asking for "Java Programming" or "Java Web Programming" and completely ignoring the OOP side. Unfortunately, agile practices such as refactoring require strong OO skills that neither university nor a Java class will ever provide. As an architect, I suspect that Object Orientation is complicated stuff, so we've got to acknowledge that complex systems can't be completely OOP: some procedural part will always be there. As an "agilist", I do agree that a heavy OOP shakedown is sometimes probably better than playing with the process.

Probably the most intriguing happening was the "Futurespective" held by Marco Abis, demonstrating a twisted way to handle project retrospectives by shifting them into the future. This creates a shared metaphor that reduces team and self-censorship, allowing more ideas and suggestions to emerge, and it was pretty effective. Everything was done with Post-its, with an Agile/"lo-fi" feeling that actually eased spontaneous comments. "Paper Agile" was also one of the topics of Tim McKinnon's speech. This boosted some ideas in my mind; more about that in the next posts.

Apart from being in Bologna, and being cheap, it was absolutely a valuable experience. Given that half of my job is teaching, I have to look for every possible chance to learn new things, and this was definitely worthwhile. Thanks to all.


Wednesday, November 21, 2007

The case about changing requirements

I've received a heartfelt comment on this article (in Italian) from a former colleague. The project estimation process pointed out that "changing requirements" were the most dangerous risk factor for a software project. Even if I proclaim myself an "agilist", which means I should "embrace change", the temptation to "fix the requirements" is still there. Two main reasons for that:

  1. You've gotta start from somewhere
  2. Most contracts are structured on a fixed-price basis, which can be accommodated by change requests, but the base price is crucial.

Agile philosophy states some principles. Fixed requirements are utopia: external situations evolve and, more than anything else, stakeholders learn more about what they want as you show them some progress. It's not that change is more efficient than fixed requirements. In some cases, if your requirements are stable (which means you did a pretty good job in the gathering phase), a waterfall might even be more efficient; but if something changes – and 99% of the time it does – you're screwed.

Knowing him, and having worked in the same context he was in, I knew that his temptation to fix the requirements was not a "waterfall revenge", but something else. In many projects, especially in the public sector, the initial requirements don't evolve during the life of the project. They're just wrong. They're written by the wrong people, with the wrong goal in mind. Still, they're written, and there's a contract based upon them.

It's easy to smell a situation like this: try asking for a real user (something really XP)… I've been answered: "You can't meet the user". Reasons may vary: "there are too many", "they don't agree with the management" (I bet so), "it's a waste of time", "they have different opinions" (this is interesting), but basically the case is that whoever is paying is not whoever needs the software. And whoever wrote the requirements is often neither of the two.

It's interesting to note that an agile process helps in dealing with this situation: every agile process eagerly seeks real feedback. If you have a fake requirements writer, you have to start asking him questions about the application. I mean …a lot of questions. At that point there are probably two options: one is not answering, but the contract should say somewhere that some support must be given; the other is finding somebody who can provide the answers. The answers might be surprising, but early surprises are better than last-minute ones.

Tuesday, November 20, 2007

Is Domain Modeling a “liberal” activity?

This post might sound quite stupid, but I guess that's nothing new… There's been a long debate going on between two parties: those who prefer modeling a system starting from an object-oriented domain model, and those who prefer starting from a data model. I am not talking about legacy systems, where this choice is somehow context-driven; I am talking about the rosy scenario: starting from scratch. My personal style is to never, ever start modeling from the data. I did it when I was younger, developing an MS Access VBA application. But that's it. In an OO system an object is a richer entity than a table: there's behavior that comes into play, and I think the table is sort of a projection of the domain model. To be completely honest, I have to admit that I am far more familiar with UML and Java than with ER diagrams or SQL, but that's not the main reason for my choice.

I started to suspect that there is something more deeply rooted here. Ok, we can't really classify UML or ER modeling as a "primal instinct" – although some graffiti might be classified as a "Cro-Magnon UML profile" – but peers tend to choose one way or the other "by heart", unless constrained by the tools in use. Getting to a precise, DDD-like domain model is a complex task, which involves a deep understanding of the real domain that can only be achieved through empathic discussions with domain experts and users. The goal is to understand a different culture, and the way is to respect it. Doesn't it sound a little bit like "make love not war"? From the data-driven guys' point of view, this sounds a lot like "he had a difficult childhood" or "we've got to consider the environment he grew up in", which results in a big waste of time. The domain is just another word for the data model, which is made up of data and constraints; all the rest is "OO-hippy stuff".

I assure you I was sober when I thought this. I still think it's fairly stupid, but it fits a lot of my colleagues… ;-)

Friday, November 09, 2007

The unexpected side of being a freelance.

Some time ago, I switched my job turning into a real freelancer. This means more unpredictability, more risk, more freedom, and a lot more things to learn.

Since I want to offer my consulting and training services on the European market, not only in Italy, I needed to define some branding that could be suitable for English-speaking countries. My obvious choice was to stick with something like "Alberto Brandolini IT Consulting". Which sounds a little presumptuous per se, but also has some extra drawbacks. Given that it's pretty long, an acronym could fit better. But if the result is A.B.I.T.C., you could end up with advertising like "I was not satisfied with the overall performance of my IT department, so we called ABITC; the result was impressive, and we've learned a lot".

Gotta find something better...


Upcoming Events: Java Day 2007 in Rome

Java Day is approaching, the official site is up, and so is the program.
I'll be one of the speakers, talking about Grails.

I've checked the official program of the day, and I'll be speaking around lunchtime. This is bad, because at that time I'll be incredibly hungry, and being hungry is the only thing that gets me angrier than bad code... At the last Java conference in Milan I remember finishing the morning session late and having the first afternoon slot. So I had to skip lunch, and really wanted my speech to end as soon as possible, just to have a pizza, a sandwich, or ... whatever. I'll be prepared this time.

Wednesday, November 07, 2007

Monday, November 05, 2007

Photos from the saddest cubicle competition...

Wired published photos of the saddest cubicles sent in by readers. And what you see is only a small part of what you can actually get: what you hear, feel and smell is what definitely makes your workplace special...

Tuesday, October 30, 2007

From crossover comes breakthrough

One of the biggest sins in the software development industry is "knowledge hubris". Software engineers are "natural born specialists" (not all of them, but I guess a vast majority fit this definition): people who have a deep knowledge of some technical aspects and a tendency to "drill down" for a deeper knowledge of a specific topic. Experience helps in finding the right balance for the situation, enabling you to make decisions based on superficial knowledge (and maybe some expert's advice) instead of studying the whole subject before taking any step. Given that software engineering is a profession that needs continuous refreshment of the acquired knowledge, and that the field is constantly enlarging, reading more books in a shorter time is probably the only viable option.

Many of us think this is enough.

I had a couple of loads delivered by Amazon in the last two weeks. So I have fresh books about software architecture, project management, software development process, software security, Grails, Groovy, Ruby, SAP and so on. I looked at the pile and felt somewhat disappointed: "looks like I am merely changing the perspective on the same stuff" – or maybe I've become too influenced by Amazon's recommendations. While browsing some books' bibliographies (in a childish attempt to score more than 50% books read) I realized that the more interesting perspective could probably come from the books I've never heard of. The Agile movement brought a lot of external influences into the software development field: SCRUM introduced elements from process engineering (that were quite obvious to process engineers, if only anybody had thought about asking…), while Cockburn, DeMarco and Lister deeply analyzed the human factor, which is predominant in software development both in terms of self-organization and team dynamics, bringing anthropology, communication science and psychology into play. After all, in software development the raw material is people, but most of us focus primarily on technology, forgetting that, unlike technologies, there is no obsolescence in people.

Thursday, October 25, 2007

Videogames for the Development Team - Updated

This is just a slightly different version of an old post...

I’ve got this stuff bouncing in my head right after reading Joel Spolsky’s post trilogy about project management. You’ll realize that my head is pretty foggy lately…

The average developer's attitude is best expressed by first-person shooters, such as Quake. A developer has a dirty job to complete, and must not be afraid to dig into a bloodbath to finish it. Collaborative teamwork is encouraged for better results, but in the end it’s you against the enemy.

The team leader's attitude is best trained by strategy games such as Warcraft, or similar. You organize the teams, assign tasks, and make the characters’ experience evolve, so they can perform more complex tasks. Controlling parallel activities is just the nature of the game.

A project manager's perspective is pretty much what a game like SimCity can give. You don’t control characters anymore, but create the conditions for them to do stuff, such as providing houses, roads, and so on. And watch interesting things happen: if your city is a nice place to be, people will be glad to join it, otherwise they will leave.

You might be tempted to choose a different, Sims-like approach, but as Joel’s article explains pretty well, it’s just too fine-grained to work effectively (to be honest, I always have flies over the garbage can...).

The perfect game for the DBA is Dungeon Keeper. Well... you might guess that I am not a DBA, but the whole game is designed from an access provider's perspective.

Ok, somebody might be curious about my favourite game… well, I’ve always been a Civilization fan, since version 1. But the one and only that really pleased my ego was Populous II: the moments I could send a tidal wave and watch poor innocent human beings drowning… Does this mean something?


Monday, October 22, 2007

The eternal question

It's pretty hard to claim you're a professional when the goal of the profession itself isn't clearly defined. I bumped into this interesting post on Ted Neward's blog, which tells a lot both about the software architect job and the pitfalls of the recruiting process.

Thursday, October 18, 2007

Enabling factors for Domain Driven Design

There was a nice follow-up to an Eric Evans interview on InfoQ, which led to this interesting post by Chris Beams. I totally agree with Chris: when I first read about Domain Driven Design it sounded a lot like the good old OOP I learned at university (shall we call it GOOOP?), just a little bit more pragmatic and organized. However, J2EE forced developers to drop that pure OOP model (though I still suspect that pure OOP is not for the masses…) in favor of a heavy procedural model disguised as a J2EE architecture.

In recent years, technologies such as Spring and Hibernate became mainstream, laying the groundwork for lighter architectures. More specifically, Spring brought the POJO model to the services part of the architecture, while Hibernate enabled a POJO representation of the Domain Model. Freed from many of the constraints of the J2EE platform, architects may now aim for the best possible representation of the model, which currently is DDD.
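To make the contrast concrete, here is a minimal sketch of what a "rich" POJO domain object looks like: plain Java, no framework imports, with the business rule living in the object itself rather than in a procedural service layer. The `Account` class and its rule are my own hypothetical example, not taken from any specific project.

```java
import java.math.BigDecimal;

// A rich POJO domain entity: behavior lives next to the data,
// instead of being scattered across a procedural service layer.
public class Account {
    private final String owner;
    private BigDecimal balance = BigDecimal.ZERO;

    public Account(String owner) {
        this.owner = owner;
    }

    // The business rule is expressed in the domain object itself
    public void withdraw(BigDecimal amount) {
        if (balance.compareTo(amount) < 0) {
            throw new IllegalStateException("Insufficient funds for " + owner);
        }
        balance = balance.subtract(amount);
    }

    public void deposit(BigDecimal amount) {
        balance = balance.add(amount);
    }

    public BigDecimal getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        Account a = new Account("alice");
        a.deposit(new BigDecimal("100"));
        a.withdraw(new BigDecimal("30"));
        System.out.println(a.getBalance()); // prints 70
    }
}
```

Nothing here depends on a container: that's exactly what the POJO model buys you, since a tool like Hibernate can persist such an object without forcing it into a J2EE-style anemic shape.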

Tuesday, October 16, 2007

A nice puzzle for the ubiquitous language

Multi Lingual Domain

During the Domain Driven Design seminar I had the pleasure to attend at JAOO, a question arose from one of the participants, something like: "we have a domain model which is expressed in Spanish, but then the company bought an Italian one, and then the model had to be extended to the Italian scenario (which is obviously slightly different), and then to a British one…". The main problem is that a model must be expressed in a language. Countries like the United States, Great Britain, Canada, India, etc. are privileged, because they seldom have to choose the language in which the model has to be expressed.

Our usual scenario instead is a mixture of English and Italian; getNome() and getCognome() are typical examples of this language mismatch. You can put some effort into ruling the situation, declaring an "all Italian" rule (which doesn't necessarily mean spaghetti code) or an "all English" one, but don't expect 100% compliance: you'll always end up with a mess. Technical terms (especially in the banking and insurance industries) are scary enough in Italian, so no developer would easily stick to an English definition of the term, while most of the analysts would agree with them (or use them as an excuse). Anyway, given that the domain model must be shared with the domain experts, most of the time you simply have no choice.

Defining the core domain

Given that language is already an issue in non-English-speaking countries even in a single-nation scenario, how can the initial puzzle be solved? There is no real answer. Basically, it depends a lot on the team's specific skills: if you have people with cross-competencies, you could probably leverage this knowledge to expand the core domain, including multi-lingual scenarios. But if those people are not there to stay, you could be putting the core domain at risk of becoming unmaintainable.

Given that national domains will be subject to both EU and national regulations, we also have a two-level driver for change. EU regulations will push for a shared domain, while national ones will call for separate ones. Assuming that you can't really predict where the change will happen (though EU regulations leave you more time to think than national ones), you are left with the basic question: "how much of the model is really shared?". The answer is going to be "a small portion" now, but it will probably increase as a deeper knowledge of the system is shared. At the end of the day, the really tricky thing to do is to evaluate the costs of sharing information versus the costs of maintaining duplicated code. One can invest in knowledge sharing (wikis, mailing lists and so on), but I guess that if the developers don't "cross the boundaries" spontaneously to attack the duplicated component (or to translate it to English), forcing them to do so will probably be more expensive than keeping separate domains.

Language specific wrappers

If shared components are to be used, then language will be an issue again. With simple terms, English would be fine. With more complex ones, code readability might suffer. A possibility is to define language-dependent wrappers, whose only purpose is to embellish the original component, providing access to keywords in the language of the local domain. I personally prefer to use English terms and to write multi-lingual javadoc (though keeping it aligned is risky) or wiki documentation. But in some circumstances this might help.
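A minimal sketch of such a language-dependent wrapper might look like this. The `Customer`/`Cliente` names are my own hypothetical example (echoing the getNome()/getCognome() mismatch mentioned earlier), not from any real codebase:

```java
public class Wrappers {
    // The shared, English-named component of the core domain
    static class Customer {
        private final String firstName;
        private final String lastName;

        Customer(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }

        String getFirstName() { return firstName; }
        String getLastName()  { return lastName; }
    }

    // Language-specific wrapper: pure delegation, no extra logic,
    // it only exposes the local-language vocabulary to local code.
    static class Cliente {
        private final Customer delegate;

        Cliente(Customer delegate) { this.delegate = delegate; }

        String getNome()    { return delegate.getFirstName(); }
        String getCognome() { return delegate.getLastName(); }
    }

    public static void main(String[] args) {
        Cliente c = new Cliente(new Customer("Giulia", "Rossi"));
        System.out.println(c.getNome() + " " + c.getCognome()); // Giulia Rossi
    }
}
```

The wrapper adds a layer to keep aligned, which is exactly the maintenance cost to weigh against the readability gain for the local team.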

Monday, October 15, 2007

The neglected diagram

During the last JAOO conference, a couple of speakers (Eric Evans and James R. Coplien) happened to drop a remark about UML experts forgetting to use the UML Object Diagram. It might have been a coincidence, but since I sometimes teach UML classes (and I normally tend to get through the Object Diagram quite quickly) this triggered some "Am I doing something wrong?"-style reflection.

It turned out that my slides were right – they were saying something like "this diagram is useful for explanation purposes" – but during classes the pitch tended to shift towards a more documentation-oriented version of UML. Many of the audiences were programmers or designers, interested in reading UML or in using UML tools, so the approach made sense. Still, the main message was missing.

UML is a powerful communication tool. By establishing a standard, the three amigos opened the way to standard CASE tools and a common language between software professionals. But the real point is that UML is a communication tool, designed to be flexible and to be used in different contexts. Designing a system in a CASE tool, providing non-ambiguous documentation for the development team, or using UML to generate code from a diagram is just one side of the coin. To design a successful application, requirements gathering and analysis are more crucial. In this phase UML can come in handy, but only as long as all the stakeholders are able to actively participate in the discussion.

Downsizing UML

To be successful in this information crunching phase, it's often better to stick to the minimal amount of UML that can be understood by the participants. People involved in this phase need to be aware of what's happening and – most of all – creatively contribute to the result. Surprisingly, the Object Diagram is one of the easiest to understand for non-software people, while the Class Diagram, even simplified, still sometimes looks suspect to some.

Domain Specific Notation

The main goal is sharing the information. Forcing people to learn a simplified version of our language is a kind of white-man-among-the-natives attitude. UML is definitely not "the message" but just a tool. If there's already a language in place, it's probably better to learn that one instead of forcing all the business people to learn UML. If there is no language in place, but UML simply doesn't quite fit the job, we can also define our own Domain Specific Notation, if it is more easily understood than UML, and use it for a defined purpose.

Defining to what extent this extra language should be used is heavily dependent on the project context: an offshore development team will probably still prefer UML, because it is standard and can carry a more formal meaning. An in-house team, meeting the business people on a regular basis, could find it more efficient to grasp the information from the source in the specific notation.


Friday, October 05, 2007

Brief summary from JAOO 2007

Last year was my first time at JAOO, and I ended up doing a short buzzword-based review of the overall conference. This year I had slightly different feelings about it, but I think that keeping the same format will still be valuable.

  • TDD: Lots of attention on the topic. Robert C. Martin held the opening keynote and a dedicated tutorial. An interesting – and controversial – issue popped up during James R. Coplien's speech about "frictions" between a sound architecture and a full TDD approach.
  • Web 2.0: whatever it means, there was a lot of hype about it. AJAX, interactivity, Ruby, Grails, and in general the possibilities of development where the world is your stage are definitely interesting, and we're still far from seeing the final destination. Really hot topic.
  • Ruby: one year on, what looked like a promising language has really hit the mainstream. Most of the attendees were still on Java, but many of them had a Ruby project going on. And they all were happy. There are still a few naïve elements, but they don't change the overall picture of a really successful language.
  • Erlang: the language was designed for parallel processing, and has some truly brilliant ideas in it. Google bases its computational power on MapReduce, but a lot of social software (more than I imagined) is putting strong effort into parallel computation as well.
  • SCRUM: SCRUM trainings were completely sold out. SCRUM methodology itself deserved a whole – crowded – track.
  • Agility: while only a few speeches explicitly talked about agile, it's still an interesting issue. Developers listened to other developers, trying to measure how agile they were and how they could improve. It's definitely a path and not a label, so talking about it still matters.
  • Domain Driven Design and Intentional Software: from different angles they both try to tackle the cultural mismatch between Domain Experts and Domain Modelers. Eric Evans presents techniques and a clean conceptualization of the steps needed to have the modelers successfully interact with the experts. Charles Simonyi's Intentional Software takes the other way round, presenting tools that allow domain experts to express the software in their own Domain Specific Language – sort of MDA without all of those meta- words. Both approaches have strengths and weaknesses: I am a DDD fan, but I'll keep an eye on both.
  • SOA: this acronym looked dead and buried. There was a SOA track in 2006, and this year only a few mentioned it. I have the feeling that SOA was – and still is – a great idea, but completely messed up by big vendors (or big buyers). Sort of what happened to UML some time ago…
  • Software Architecture: I followed part of the track. I ended up having a broader vision of the overall possibilities (I've really been stuck with JEE for too long).
  • Ethics: a lot of speeches focused on ethical aspects of the software development process. "The Pragmatic Programmer" was the reference book of the whole conference, giving it a somewhat introspective feeling. Maybe we didn't learn that much about the new cool stuff; rather, we learnt something about who we are and what our job is.
  • Semantic Web: unfortunately it wasn't even mentioned.

After all, I was a little less excited than the year before, maybe because I missed the surprise effect, and also because some speeches echoed the ones from the year before. There was no astonishing news on stage, but we were not looking for a new messiah, so I guess this is pretty good.

Wednesday, September 26, 2007

Incoming bestsellers

After a dinner with Chris and Miquel we ended up with a bunch of titles for potential bestsellers.

  • Binge Coding, when too much is just enough
  • Glamour Programming (ok …I am recycling here…)
  • Beer Driven Development
  • Unintentional Programming, Oh bollocks!
  • The Dogmatic Programmer, It's my way or the highway, buddy!

I took pictures of the potential covers, but a lame camera, low light and a not-so-firm hand made them definitely not worth publishing. Let's see if I can come up with something better with a drawing tool…

Pair programming’s side effects

While attending JAOO 2007 I had the chance to talk for a while with Pramod Sadalage, who gave me some interesting hints about pair programming. I was basically concerned about the amount of time spent by development teams on instant messaging tools, such as Skype, ICQ or GoogleTalk. There's basically nothing wrong with those tools; normally they act as a development process accelerator, especially in offshore or de-localized development situations. Unfortunately, this is not always true: instant messaging can easily be abused and cause frequent interruptions in people's work. The number of interruptions is related to the team size, but also has something to do with the way tasks are assigned to people and with the underlying architecture, which increases the number of people you depend upon to complete your task – assuming that all the incoming messages are related to the job… which is basically not true. Given that every interruption destroys the receiver's concentration, and that it takes about 15 minutes to become productive again, preserving the so-called state of flow is a crucial task.

Pramod's suggestion was to adopt pair programming, on a dedicated workstation. When you're pairing, you are less inclined to chat with others, because your peer is just sitting there. The dedicated workstation also helps, because you can configure it not to have IM software installed. Pairing sessions aren't for the whole day anyway, so after a couple of hours – and with a bunch of well-done lines of code – developers can check their e-mail, and so on. But productive time is preserved.


Saturday, August 25, 2007

Presentation 2.0

A not so recent, but still powerful, video about the power of Web 2.0, by Professor Michael Wesch.


Thursday, August 16, 2007

A prophecy come true

While working for the Italian Ministry of Internal Affairs on a social registry data exchange application, we faced the topic of the validation rules applied to birth names according to Italian law. I challenged the application by submitting "boundary values" for names, while studying the law. So I discovered that Obi Wan Kenobi is a legal name, while 2Kool, D3PO or C1P8 are not. Chewbacca is fine. Giulia and Julia are both allowed, while Juli@ is not. As always, anyway, the Italian law is open to different interpretations, because it states that you can't give ridiculous or shameful names to your babies, but avoids defining what exactly is ridiculous or shameful.
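The boundary values above can be approximated by a simple check. To be clear: the regex below (Unicode letters, spaces and apostrophes only) is my own assumption about what such a rule might look like, not the actual rule used by the ministry – which, as noted, is left fuzzy by the law itself:

```java
import java.util.regex.Pattern;

// A hypothetical "legal name" check: letters, with single spaces
// or apostrophes between words. Digits and symbols are rejected.
public class NameValidator {
    private static final Pattern LEGAL_NAME =
            Pattern.compile("\\p{L}+(?:[ ']\\p{L}+)*");

    public static boolean isLegal(String name) {
        return LEGAL_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isLegal("Obi Wan Kenobi")); // true
        System.out.println(isLegal("Giulia"));         // true
        System.out.println(isLegal("2Kool"));          // false
        System.out.println(isLegal("Juli@"));          // false
    }
}
```

Note how the "ridiculous or shameful" clause is exactly the part no regex can capture.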

Colleagues looked at me in strange ways (it often happens), but today I've got proof that my concerns were right: somebody in China tried to give his son the name '@'… Read the Article (in Italian).
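Out of curiosity, here's how such a rule might be sketched in Java. This is a made-up interpretation of the constraint (letters, spaces and apostrophes only, starting with a letter), not the actual rule we implemented:

```java
import java.util.regex.Pattern;

// A toy sketch of the boundary checking described above. The rule is my own
// guess at one possible reading of the law: letters (accented ones included),
// spaces and apostrophes only, and the name must start with a letter.
class NameValidator {
    private static final Pattern LEGAL_NAME = Pattern.compile("^\\p{L}[\\p{L}' ]*$");

    static boolean isLegalName(String name) {
        return name != null && LEGAL_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isLegalName("Obi Wan Kenobi")); // true
        System.out.println(isLegalName("Chewbacca"));      // true
        System.out.println(isLegalName("Juli@"));          // false
        System.out.println(isLegalName("2Kool"));          // false
    }
}
```

It gets the boundary values above right, but of course it's exactly the "what is ridiculous?" part that no regex can express.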

Friday, August 10, 2007

So …what is lock in anyway?

When designing an application architecture, one of the things to consider is how to deal with lock-in on third party components. The problem is generally multi-faceted (a component might be freely available as open source, or tied to a license fee) and is generally managed on a risk analysis basis. It can also be tied to large scale aspects such as hardware choices, operating systems, etc. I'll focus here on choosing 3rd party libraries and components for a generic enterprise application (thinking of JEE, but not necessarily only of it). On the practical side, this is managed by keeping the component substitution cost as low as possible. Typical approaches were to intercept and wrap dependencies with a proxy or an adapter class: no direct access to the 3rd party component is allowed, and the provided service is available only via the adapter class. In a J2EE context it looked like everybody was concerned about Log4J, so the logger wrapper became the architect-level counterpart of the String-manipulation class that every developer felt obliged to write at a certain point of his career.

Technically speaking, you want to reduce coupling between the caller and the provider of a given service, so you extract a more or less generic interface from the service, and provide a (usually brainless) adapter to the 3rd party component. This way you achieve compile time decoupling from the component, at the price of a small increase in complexity (a bit in the code, a little more in documentation, guidelines and control). This price is generally proportional to the complexity of the used portion of the component API (which is one of the reasons why everybody wrapped Log4J…).
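A minimal sketch of that wrap-the-dependency approach, using java.util.logging as the stand-in third party component (the names AppLogger and JulAppLogger are mine, for illustration):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Callers depend on this small interface only, never on the logging API itself.
interface AppLogger {
    void info(String message);
    void error(String message, Throwable cause);
}

// The (usually brainless) adapter: the single class that knows which logging
// library is in use. Substituting the library means rewriting only this class.
class JulAppLogger implements AppLogger {
    private final Logger delegate;

    JulAppLogger(String name) {
        this.delegate = Logger.getLogger(name);
    }

    public void info(String message) {
        delegate.info(message);
    }

    public void error(String message, Throwable cause) {
        delegate.log(Level.SEVERE, message, cause);
    }
}
```

The adapter is deliberately dumb: the narrower the interface you expose, the cheaper the eventual substitution.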

The point is that a lot of times this effort is not really worth it, for basically two reasons:

  1. You are trading a certain cost now for a possible one in the future (ok, you've done risk analysis on the subject, but… you get the point).
  2. You are probably overestimating the substitution cost. Ok, I'm saying "you" but I'm thinking "me", 'cause I know I did… Common refactoring techniques, enhanced IDE support, or even a trivial search and replace make this type of code-level change really fast. Developers are generally worried about the number of occurrences of a given required change more than about the diversity of the individual ones.

What's left out are the more subtle effects of lock-in, which show up in the way the surrounding components are organized, or in the way the application is structured. In those cases, you can still enhance the capability of your adapter, providing a higher level of service abstraction, but as before you end up paying more at the early stages for a possible risk that could be better mitigated some other way.

Tuesday, July 31, 2007

Doing Agile in Italy – part II

To my surprise, my previous post about the influence of job market regulations on agile development in Italy has been cited in an InfoQ article, and drew a couple of interesting comments. This glamorous parenthesis happened while I was on holiday, and my inbox was dangerously inflating. So, after getting back to work and diligently processing all of the issues waiting for me, here's the second part of my post.

The "too much process" rebound

Be it for our latin nature, be it the average size of projects and companies, I've never faced a situation in which "too much process" or "too much paper" was an issue. There can be a correlation with the Italian job market issues presented in my previous post, in the sense that structured process definitions were meant also as a way of protecting the project from turnover, by making everyone a "replaceable part" of the process. Agile practices lowered the process burden and put the emphasis on individuals, implicitly stating that the best way to handle turnover is to have no turnover at all, thus putting some effort into transforming the team into a nice place to work.

The average company has instead a sort of "raw" development process, where spoken words prevail over written ones. Given this scenario, I feel like a sort of "fake agilist", because many times my job is not to strive for a thinner process definition, but just to set up a process, whatever it might be. I also think that one needs to start from something structured, to acquire degrees of freedom later: as an example, even if I fancy the "user story" concept a lot, I seldom use it in undisciplined contexts, and favor a more traditional use case approach. Once you master use cases and are able to distinguish the key parts of the specifications, you can benefit from a less structured and more distilled way to express them. Doing otherwise will probably result in a "blank paper syndrome".

Letting the kids play

Apart from careful people selection and the job market issues discussed here, the single most effective team motivator is probably the introduction of a new "cool" technology in the project. This approach is particularly effective in situations where the technology scenario is particularly flat. Once I found myself pushing for a technology change, and the only real reason for it was to have something new to "let the kids play" with, and to have a motivated team. Of course, there is a limit to the number of new technologies that a team can master simultaneously, but a "technically boring" project breeds low motivation and turnover (so you have learning costs anyway), and productivity tends to flatten in the long term.

Tuesday, June 26, 2007

Doing Agile in Italy

During the latest JAOO conference, Alistair Cockburn deliberately provoked the audience by stating that "we can't do agile in Europe", due to EU regulations for contracts, which have to be filled with the requirements of the software to be developed by a contractor. This is a serious issue, because in this case the law (unconsciously, I suppose, unless there is a powerful waterfall lobby in Brussels) poses a serious constraint on the development process. A contractor has to face the fact that requirement discovery in an agile fashion might drive the development team away from the officially signed project goal. Some might go for a more ambiguous contract (but the customer has to agree on that, and this is not a piece of cake, especially with new customers), some others might require more paper to change the project scope (but this isn't exactly agile), and the last option is to take the risk and be "paperless" for a while.

Setting up an Italian development team

In Italy we also have some peculiarities related to the job market, which have some interesting effects on the way software is developed. I remember an old yet interesting article by Steve McConnell, dwelling on the notion of "problem programmers", meaning programmers with a negative impact on productivity: programmers who are normally unable to finish assigned tasks, who need external help to finish them, whose tasks need extra cleanup work afterwards, or who drain time and productivity from other members of the team. Put this way it sounds somewhat scary, but it's not infrequent to meet such a beast. McConnell has only one solution to the problem: get rid of the problem programmer as soon as you can, which comes down to a small set of options:

  1. Resolve the causes for him/her being problematic
  2. Move him/her to another team (hoping that this is a variation on option 1)
  3. Fire him/her before the damage is too big

In Italy we can't fire. Job market regulations make firing an employee a very long, nasty and complicated process, and in 99% of the cases a non-viable option. I mean, in some places you can't fire somebody even if you catch him stealing (which puts the problem programmer in a much better light). The average Italian solution to the problem (there is always an Italian solution) is to go for temporary recruitment, hiring programmers from body rental companies, the third millennium version of mercenaries. Unfortunately, this option has some drawbacks.

  1. In many cases you have less control over who you are hiring. Even if I've seen pretty bad recruitment strategies in many places, being the one who chooses who you are going to work with is still a recommendable option.
  2. You'll have less control over turnover issues: people might leave the company which hired them (and your project) for reasons which have nothing to do with you.
  3. You have less control over motivation: you can't do much to improve it (at least on the salary side; you can do a lot, probably more effectively, on other aspects), while you can do a lot to destroy it (with bureaucracy, holidays management, etc.).
  4. Being hired by a third party is often a threat to motivation per se. One might ask "why doesn't this company want to actually hire me?" and feel like just a replaceable part of the process, which helps them feel less guilty the day they decide to leave.
  5. Last, but not least, the control chain sucks money. So the sub-employee is also usually underpaid compared to his official counterparts.

So you generally end up with external teams with potentially lower motivation, or internal teams with people who tend to "sit" on a safe job, without being challenged for a while, and who need "something to do". I don't really want to get dragged into the political debate going on in Italy these days on the topic, and please don't derive conclusions about my political orientation from this article. If you have to achieve high productivity from a software development team, the best way is to hire the best people and get rid of the problematic ones. In Italy it's hard to do both, so the net result is that you have to squeeze the best out of the existing team, which is sometimes …challenging.

Friday, June 22, 2007

IDE refactoring capabilities

I had to spend some time refactoring some classes in Eclipse, a couple of days ago. I selected some code inside a long method, and went for an "extract method" refactoring. Good surprise! The IDE (I was using Eclipse, but I bet that IntelliJ IDEA has the leading edge here) also checked the entire file looking for occurrences of the code portion that had been cut and pasted all over the place, and replaced all of them with a call to the new method (which was the part I was planning to do afterwards...). The net result was that a 20 minute operation became a 10 second operation.

On a larger scale this has some interesting consequences: if you know what to do (if you have read and understood the first 70 pages of Martin Fowler's Refactoring), refactoring a large amount of crappy code is a lot easier than you might think, so it's worth doing in many more situations. To be effective you must anyway be confident in your team's OOP capabilities, and in their IDE power user skills, the former being much more important - and harder to achieve - than the latter.
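A made-up miniature of what the IDE did: the duplicated fragment becomes a single method, and every occurrence is replaced with a call to it.

```java
// Before: the same formatting logic cut and pasted in two places.
class ReportBefore {
    String header(String title) {
        return "== " + title.trim().toUpperCase() + " ==";
    }

    String footer(String title) {
        return "== " + title.trim().toUpperCase() + " ==";
    }
}

// After "extract method": one method, two call sites. The IDE spotted the
// duplicated fragment and rewired both occurrences automatically.
class ReportAfter {
    String header(String title) {
        return banner(title);
    }

    String footer(String title) {
        return banner(title);
    }

    private String banner(String title) {
        return "== " + title.trim().toUpperCase() + " ==";
    }
}
```

Once the duplicates collapse into one method, any further change (or further refactoring from Fowler's catalog) touches a single place.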

Sunday, June 17, 2007

Things they wouldn't teach you in college - Part 2

In my previous post, I focused on some of the scope differences between software project assignments at university and real world projects, pointing out that there is no maintenance phase. Well, there can be, if you fail the exam and have to evolve your personal assignment... but don't take that as advice.

You are not alone
Well, time is not the greatest scope limitation you have to cross: being in a team can be far more challenging. One of the first things you'll have to learn is sharing your code with your team, a practice which involves knowledge of SCM tool mechanics (the first encounter with CVS, Subversion or VSS is often a sort of initiation ceremony) and of group mechanics as well. Something like "who breaks the build takes the blame" or "the last one to commit gets the conflict"; it's the fact that the team as a whole carries the burden of the whole development. So you need coordination, and you need to agree with your teammates on the borders of your own activity.

Sharing the code also exposes the "it works here!" syndrome, making it clear that you can't rely on your IDE (or on your memory) anymore: you need more specific tools such as Ant or Maven to carry on the build process. The notions of repeatability, platform independence, environments and so on start to matter (have you ever heard the term "production environment" while at university? I hadn't).
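As an illustration, even a minimal Ant build file (the names and paths here are invented) already buys you a repeatable compile-and-package step that runs the same way on every machine, IDE or not:

```xml
<!-- Minimal sketch of a repeatable build: same steps on every machine. -->
<project name="myapp" default="jar">
    <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes"/>
    </target>
    <target name="jar" depends="compile">
        <jar destfile="build/myapp.jar" basedir="build/classes"/>
    </target>
</project>
```

The point is not the tool itself, but that the build lives in the repository with the code, instead of in somebody's IDE settings.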

A more subtle question is related to the notion of good code. Writing code just for yourself is close to being an onanistic practice: good code is something somebody else finds useful. Which means that:
  1. it works
  2. easily
  3. understandably
Code must work. This might sound trivial, but the point is that code should do what it is expected to do. Expectations, unfortunately, can be quite non-deterministic if left alone. So you've got to get a grip on what people expect your software to do, and on the average developer's expectations, which leads basically to one word (and one adjective): good documentation.

Documentation can do a lot for ease of use, but doesn't address point 2 completely. Software must be easy to use: it doesn't matter if steps 1 to 17 are thoroughly documented in MyFabulousPieceOfSoftware.howto, if you can achieve the same result in 3 steps, or one. The two things come together: as Vincent Massol points out in an old post about documentation driven development, documentation can be a good test for code and an inspiration for refactoring. If documenting takes too much effort, it's probably because the interfaces are far too complicated, or are not placed at the right level of abstraction. It's too easy to mess with interfaces and method signatures when we design both the client and the server class, 'cause we are stuck in the mud doing it, but we forget that we don't want the classes that call our methods to be stuck in the same complexity that we are.

Understandability of the code refers to the internal mechanics of our systems, and to the beauty of the design. This is a tough lesson: a clean model is potentially useless if nobody else can understand it. It will lead to a cathedral in the desert, or - worse - to the code equivalent of a ghost town. Even if you are the best OO designer around, you can't write code that nobody else can maintain. Elegance in software can have dreadful effects, if nobody but you can appreciate it. Don't get me wrong, clean elegant software is most of the time better than spaghetti code, but in a team, design elegance must stabilize at a level that is reachable by a large enough number of teammates. To achieve this you can lower your expectations, or raise the team's modeling skills. I tend to favor the second option as much as I can.

Friday, May 11, 2007

Things they wouldn't teach you in college

I had an interesting conversation with one of my colleagues, a couple of days ago, about the best way to coach newly hired developers fresh out of university. He was doubtful about what to teach, since some of the technologies used in the project were new to him too.

Well, it's not about the technology at all. A young, smart developer can learn a new technology faster than I can. He's probably already learned a couple of fashionable technologies for his thesis; still, he desperately needs to learn all of the things that university doesn't teach (sometimes doesn't even mention) but that are vital for becoming a good developer, or a good IT consultant.

So I started thinking about some posts, hoping that they could be meaningful or useful for the case...

The test of time
A student can write code, and can also solve difficult coding puzzles. But writing good code is another matter. The main reason is that coding for an exam is a pretty simplified job compared to coding for production (which is the typical developer's job).
Think about the context:
  • you are given a task (sometimes you can even choose it)
  • you are given some constraints
  • you can work alone, sometimes in a team
But the most important question is: "What happens after I deliver the code?" and the answer is ... nothing. Well, nothing, unless you fail the exam. The point is that at university there is no such thing as a maintenance phase. Your code lasts the time of an exam (it's like a butterfly, in a way...) and then it's gone. Will you ever need that code again? I bet not.

So you're missing a big part of what the code is for, which is users and colleagues. The former will ask you to change how the code works, the latter the way the code is written. More about this in the next posts.

Saturday, April 21, 2007

Gotta get going...

"Excuse me, sir. I think you got the wrong shoes on!"
"What do you mean?"
"Looks like you put the right foot into the left shoe and vice versa."
"Yeah, looks like you're right. That's why they were hurting so badly."
"Aren't you switching them?"
"No, I'm late, I gotta get going now."

One of the strongest advantages of iterative development is that the concept of iteration leads also to the concept of checkpoint. The moment you release an intermediate milestone is also a moment to think and verify if you're going in the right direction. That's also a moment where you take a breath and think instead of just doing things.

Put another way, iteration doesn't mean repetition. Splitting the project timeline into iterations means that there should be fixed moments reserved for approaching the project from another angle. SCRUM puts this "what's the biggest slowing factor of the project?" question at the center of the project lead, on a daily basis. Long lasting waterfall projects normally ask this question far too late.

If you are in an agile or iterative project, and there's no difference, or perceived change, from iteration n to iteration n+1, this is generally a bad smell. Normally it means that this is not an agile project or, more precisely, that it's not an adaptive one. Checkpoints are probably used only to track the elapsed effort and delays, and to re-adapt estimates. A more pervasive, SCRUM like, survey might instead lead to a different way of doing certain things, or to a suggestion about how to improve them.

It's just a larger scale application of the tuning methodology: test (iterate), measure (finish the iteration), diagnose (the iteration meeting), and correct (plan for the next iteration). If the iteration borders are blurry, you have a lot less test information; if you're not having a meeting, you're probably not getting all the information you need for a good diagnosis; if you never stop and plan for a change, you'll never improve. If you keep on postponing needed changes just because you are too busy... you always will be.

Monday, April 16, 2007

My personal workplace survey

Being a consultant, I frequently have the chance to work in different organizations, and see different approaches to software development. Being a reader of Alistair Cockburn's Agile Software Development, and of Tom DeMarco and Timothy Lister's Peopleware, I have a strong personal feeling about the way workplaces affect software development.

Some basic principles first...
For those who are not familiar with the topic, I'll squeeze the information into a few lines. Mr. Cockburn puts a great emphasis on the way communication takes place in a software development team. Communication is meant as the sharing of common useful information, and it's probably one of the biggest efficiency boosters in a software development team. If you start analyzing what developers do during the day, you'll note that coding is actually only a small part of it; communicating can often be as big as coding. Differently from coding, though, communication time is not really tracked in productivity reviews, so most of its effect stays below the observability threshold. He also introduces the notion of information radiators: those artifacts, such as whiteboards, diagrams, calendars and so on, that help indirect information diffusion in the workplace.

DeMarco and Lister focus on the effects of the workplace on individual productivity. A developer and, more in general, a knowledge worker, does a very special job which is a mixture of knowledge and creativity. This type of job is heavily influenced by environmental factors (research shows a factor of 13 between the best and worst measured individual productivity). The underlying model is that a knowledge worker gets into the productive state of flow only about 15 minutes after an activity is started. Every time his or her concentration is interrupted by something, 15 minutes are lost.

... and then the survey
For the places I've seen, the situation is pretty close to a Waterloo. One thing that must be considered, anyway, is that being a consultant I was not a full time employee (so I didn't have the same rights as my employed peers). Also, we normally help organizations in trouble; healthy organizations normally don't need us, so we probably see places a little bit below the average. I tried to classify workplaces according to the main distinction between software and non software companies, meaning that in some cases developing software was the core asset of the company, while in others it was just something that needed to be done. Anyway, here's the survey...
  • Small financial company IT department, small room (few people in it). We were free to attach papers to the wall (though looked at suspiciously); the place was basically silent. High overall efficiency.
  • Big financial company IT department, very large scale project, huge open space. No walls, so nothing could be attached to them; quite a few meeting rooms, so as not to disturb the colleagues. Messy shared information system based on notes. Structured documentation, but very few readers; improved over time. Lots of windows, but they couldn't be opened for security reasons. Smoking forbidden only in the open space. Headache basically every evening. Very low overall efficiency, boosts during somebody's holidays.
  • Average Financial company IT department. Small room, silent, about ten people in it. Meeting room available, but seldom used (maybe too formal). People often talking less than necessary.
  • Software company. Average size open space. Lots of people talking loud. Whiteboards available in separate rooms. Low efficiency.
  • Software company. Average size open space, high background noise. Lots of whiteboards available. High overtime ratio, to maximize silence's effects.
  • Part of big financial company IT department. Small rooms, some people worked for years in a servers room (no windows), high background noise. Headaches. Only junk food available in the surroundings.
  • Software company. Small rooms. Boss smoking in the aisles, and changing requirements. Extremely high overtime, enough to define efficiency as non measurable.
  • Software company. Small room, many people. Whiteboards forbidden, meeting rooms sometimes available, very few people taking notes on paper. Later, the whole department moved to a single room, formerly a shop. No whiteboards, no meeting room, increased quit ratio.
  • Software company. Small room, smaller desks, very high background noise. Only paper whiteboards (non-erasable). High overtime ratio, low productivity.
As you might notice, the picture is somewhat desolating. I did manage to see a better place, where developers had better rooms, whiteboards, space and silence. But that was not in Italy, so it can't be included in my survey. The fact that only a few of the companies managed to reach a sufficient score (and none of them is even close to excellence) really makes me wonder...

Saturday, April 14, 2007

The dark art of cheating software estimates - Part two.

Just a quick addendum to my previous post about software estimates (and how to fool yourself cheating with them).

The most effective trick to miss a deadline (and the following one) is to put unrealistic estimates on developers' activities. To achieve this result you basically have two ways.

Define the estimates at the wrong level
This normally happens when the team leader defines the activities and assigns estimates over the heads of the developers. Experienced developers know better what should be done and which activities take time, so you should rely on their point of view; after all, they're the experts in their domain. Imposing an estimate over the developer's head also has the undesired effect of causing an emotional drift when activities take longer than expected. A young developer could assume that he is the problem (when maybe the real problem is just something forgotten at a higher level) and feel responsible for the delay. If you just say "You have 3 days to finish this" you might end up with the only crappy piece of software the developer was able to write in those three days.
And, of course, skills and environmental factors differ, so the same activity could take 2 days if assigned to one developer and 4 if assigned to another. If the estimates come from above, it's easier to forget about that when quickly reassigning activities.

Ok, this is just something that might happen. I am not saying that one should rely only on developers' numbers. A good team leader has his own estimates in place, which can be used to manage the risk of optimistic developers, and so on. But this is an activity that should be performed behind the scenes (my first team leader always added 30% to my developer estimates). Comparing developers' estimates with yours can also help to spot potential problems, like activities that shouldn't take that much.
By comparing different developers' estimates on similar tasks, you can also spot whether somebody found a smarter way to do something, and have him teach it to the others (or discover that somebody is not finishing the activity and is leaving some dirt under the carpet as an undesired gift for the following iteration). Put another way, you should rely on the developers for the estimates, and take on yourself the burden of finding out how to speed up activities.

Asking estimations in the wrong way
One thing you should never do is ask "Will it be finished by Tuesday?" and - even worse - follow up with "You told me this was going to be finished by Tuesday". Of course, in the middle, everything can happen. If you want to be hated you can ask the question one day, then interrupt with a higher priority task, then on Tuesday ask the follow-up, with a blaming pitch.
The point is that, as a leader, you shouldn't forget the effect of your role. A milestone should be a leader's problem, not a developer's problem (and shouldn't be managed in terms of blame anyway). Asking a question this way is no different from a woman coming out of the hairdresser with a new "transgressive" haircut asking "How do I look?". It's just a compelling invitation to lie.
In the end, the numbers are exactly the ones the team leader wanted, but now the blame is all on the developer. Playing this trick is unfair; doing it repeatedly is just a way to increase pressure and to threaten team chemistry.

The right way to put the question is "How much time do you need to do this?" and then do everything possible to ensure that all of that time will be spent on that activity. A good leader manages pressure from above and shields the team from it. If the collected numbers are too high, there is probably a problem, which needs to be investigated and possibly solved, soon.

Don't forget that providing a detailed estimate of an activity is an activity in itself. It shouldn't take a day, but be careful when you get an immediate reply; it's a sign that somebody probably isn't thinking enough. So leave your developers the time to think about what should be done and how much time will be needed. As Joel Spolsky states, this is design activity, after all.

Monday, March 26, 2007

Steve Jones's SOA vendor ratings - updated

As before, Steve Jones does a great job of summarizing the SOA vendor landscape in a few lines; here is the link to Steve's post. You may not agree with all the remarks, but it's certainly useful.


Resources for a Maven controlled release cycle

While setting up a development and release process based on Maven and CVS, we found a lot of useful documentation provided by the University of Calgary. Unfortunately we found it after we had already done 90% of the job, but at least it was a confirmation of our choices... by the way, here is the link.


Tuesday, January 30, 2007

The dark art of cheating software estimates

The most widely used techniques for estimating software projects are:
  • guess a number
  • ask a number and multiply it by pi
  • pick the release date, count the days backwards and multiply them by the available people
  • function point analysis
  • use case points analysis
  • ...
I personally prefer UCP, because it best fits our usual development process scenario, where the use case list will also be the starting point for the real development phase.
In UCP the estimate is the result of two main factors: the application complexity (roughly measured via the number and complexity of the use cases and actors) and the environmental factors which - as everybody knows - heavily influence the outcome of the project.

The second reason why I like the UCP methodology is that when I did retrospective analysis on finished projects, the results were pretty precise. Which is obvious, if you think about it, because retrospective analysis is exactly the way estimation methodologies were tuned. There are two important points so far:
  1. The UCP methodology is pretty accurate, if you correctly evaluate the starting factors
  2. You can still make mistakes, because factors might change (new use cases might be added during development, or environmental factors may change, or turn out to be wrong)
The outcome of the estimation process is a number, or a range of numbers, which can represent hours, days or months of development effort. It's a measure of the human time needed to construct the desired application. Here the real problems start: you pass the number to the management, they multiply it by the salaries, and realize that the project will be too expensive. Put simply: you say time and they say money.
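For the record, the UCP arithmetic itself fits in a few lines. This is a generic sketch using Karner's commonly published weights, with invented inputs rather than any real project data:

```java
// Back-of-the-envelope Use Case Points, following the commonly published
// weights: use cases count 5/10/15 and actors 1/2/3 by complexity class.
class UseCasePoints {

    static double unadjusted(int simpleUC, int averageUC, int complexUC,
                             int simpleActors, int averageActors, int complexActors) {
        double uucw = simpleUC * 5 + averageUC * 10 + complexUC * 15; // use case weight
        double uaw = simpleActors + averageActors * 2 + complexActors * 3; // actor weight
        return uucw + uaw;
    }

    // The technical and environmental factor totals scale the raw count.
    static double adjusted(double uucp, double tfTotal, double efTotal) {
        double tcf = 0.6 + 0.01 * tfTotal; // technical complexity factor
        double ecf = 1.4 - 0.03 * efTotal; // environmental complexity factor
        return uucp * tcf * ecf;
    }

    public static void main(String[] args) {
        double uucp = unadjusted(5, 10, 3, 2, 1, 0); // invented project
        double ucp = adjusted(uucp, 30, 15);
        // A common rule of thumb is about 20 person-hours per point.
        System.out.printf("%.2f points, ~%.0f hours%n", ucp, ucp * 20);
    }
}
```

Retrospective tuning then means adjusting the factor totals (and the hours-per-point ratio) until past projects land near their actual effort.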

Here comes the cheating
The official estimates will be close to reality, but reality is bad news... so what will you do? The problem is that the link between effort and price looks completely locked before the project starts. During the project, everything might happen: you might be 2 months late, hire new people and train them, and so on. Each time, you add a random factor that invalidates your fixed ratio between worked hours and the overall price. Still, what happens 99% of the time, when showing estimates to the management, is that you'll be asked to reduce them.

Sometimes there are business reasons for it. It's like buying a car: the salesman knows that the final price will be $50,000, but will tell you about the $39,900 and the optionals... Some other times it's like a pavlovian reaction: time is a cost and must be compressed. As if that weren't sad enough, I've never heard a smart suggestion come out of this phase. Normally you get one of these three:

a)"Let's skip the analysis phase"
b)"Let's skip the design phase"
c)"Let's skip the test phase"

If you are still optimistic, I have to warn you that a) and b) almost always imply c). Or, put another way, c) is always implicit in such a situation.
The most annoying thing is that the same overall result could have been reached by talking only about money, just lowering prices. But everybody assumes that the other one is lying, or maybe the way the (presumed) cost reduction is achieved makes the difference for somebody.

But let's get back to our numbers. As I said before, the good news is that predictions are rather accurate; still, they might fail. One way is having a wrong count of use cases, meaning that more can appear along the way. But a new use case is a new feature (it's the $500 option on our car), so it's not a problem, as long as the price varies accordingly, and the schedule does too. Often, what happens in the closed room is a bargain of money vs time, sort of "I'll pay you this extra money, but you'll have to deliver it all by the originally planned date...". Hmmm.
External factors are trickier, because they're harder to evaluate. Sometimes they're mere assumptions, and it takes time to realize whether they're right or not. A tricky example is "motivation": you can assume motivation is 3 on a 0-to-5 scale, because you simply don't know. Then it's hard to pinpoint a special moment in the project lifecycle when motivation drops to 2, triggering a recalculation of the estimates. Imagine a boss saying: "I noticed that the mood dropped in the development team, can you please update the estimates accordingly?".
So your initial assumptions are kept "locked", shielding the official numbers from the force of reality. But every time the estimates are kept safe, old, and untouched, you can assume that they're just a lie, and the distance from the truth will have to be filled somehow. The difference is that if the truth is exposed, people tend to behave accordingly; if the truth is swept under the carpet, everybody feels free to cheat a little bit more.

Wednesday, January 17, 2007

Designing to lower the TCO

I was reading this post from Steve Jones, and had mixed feelings: as I commented on his blog, I am trying hard not to be that type of architect. But forcing an organization to consider the Total Cost of Ownership for the whole project lifecycle is often a tough job.
Sometimes it is the organization itself that is badly shaped, maybe with separate budgets for development and production, so managers aren't encouraged to save somebody else's budget. Sometimes the cost of ownership is perceived as a "normal mess" and becomes alarming only when it's totally out of control, which can be ten times bigger than acceptable, or more, due to the "buffer effect" of the dedicated people.

Sometimes it is the overall project planning that plants the seeds for the "touch and go" architect. Time is allocated mainly before development starts, so the architect can't see the developers in action. I mean, architecture is challenged by development, and by time too: developers might not behave as predicted and find different ways to solve coding problems, and evolving tools and frameworks could invalidate the balance of forces that drove some design choices (that's a good reason for architects to document the reasons behind their choices). It's often not a matter of being right or wrong, but rather a matter of seeing the whole picture, which is - obviously - much easier at the end. Following a project until the late stages clearly makes a better architect, but seeing everything possible from the early stages is what software architects are paid for.

There's some sort of analogy with the traditional drawbacks of the waterfall approach and the analyst role. Agile processes have put a lot of effort into introducing iterative release cycles, which are just a way to get real feedback as early as possible. Iterating architecture, for some reason, seems to have a longer way to go, but I'd say it's probably the same problem, only with a different category of users.

Tuesday, January 09, 2007

One year of blogging

I started this blog just one year ago. Time for a quick review.
  • I didn't get rich by blogging. My AdSense account registers 9 US dollars, which makes $0.75 a month, and overall a beer in a fashionable pub (or two in a crappy one). Wow.
  • I am still blogging, even if in the last two months my posting rate has dramatically decreased.
  • I had a ratio of one post per week, which is pretty good, considering how busy I normally am.
  • I haven't run out of ideas; on the contrary, I get a lot of good hints and ideas by checking the blogosphere.
  • Some of my friends, colleagues, and customers read my blog. Sometimes also somebody I don't know. Not many, but it's more than nobody.
From a statistical point of view, more interesting than the published articles are the unpublished ones, and the reasons behind not publishing them.
  • A couple of them simply had no point. I thought I had something smart to say, but when I wrote it down it didn't sound that smart.
  • Some posts were too personal. Publishing them would have meant shifting a "private" fact into the public, or maybe violating a non-disclosure agreement with a customer. As a consultant, the most interesting things you encounter are the ones you are not supposed to talk about.
  • Some of them kept bouncing in my head, but I had no time to write them down; then I simply lost the right moment to talk about the topic, because it wasn't hot anymore.
  • I also thought about starting a different blog, in Italian, about software projects that simply don't work (there's plenty of them, especially in the public sector), but I thought it wouldn't be wise to attack beasts of that size without a good lawyer behind me.
I guess now I have a clearer idea of what blogging is all about. I am still wondering whether blogging in English, instead of Italian, was the right choice. I think it gives me more freedom and a bit less sarcasm, which is something I often abuse, and it generally makes me think about things from a different perspective, which is normally a good thing.

Ok, end of the plea. If anybody's reading this, I guess I can assume he or she is a reader, and so deserves my deepest thanks for being able to stand my endless whine (sounds a bit like Green Day's "Basket Case"...).


Sunday, January 07, 2007

How to become a communication paranoid

In the last week I found myself thinking or discussing which was the most suitable container for different pieces of information. In one case it was a planning doc: was it better to share it on the CVS or publish it on the wiki? A similar question arose about a to-do list for restructuring activities: keep sharing the same Excel list, or move the list into a bug tracking system? Same question for a project glossary: wiki, Excel, or a Word document?

One common choice factor between the different activities is ease of use. The shape of the container should make it easy for the users to feed in the needed information. One thing you should be aware of is that when people talk about "the users" they often mean themselves, or the people who provide the information, who are generally a minority of users compared to the readers, who generally get the most benefit from the shared information. In this case ease of use turns out to mean accessibility, and it's probably the primary factor to consider.

But what I am getting paranoid about lately is finding the optimal way to attach the necessary meta-information to the information provided. Context and collateral information might be in the e-mail in which I am sending a Word document as an attachment. But the user reads the mail, prints the doc, and shows the doc to somebody else, without some crucial extra info such as "this document is bullshit, the real one will be prepared next week". To avoid information abuse, I find myself using different tools just to ensure that information is handled in the proper way. So a common weakness of wikis, such as the not-so-easy printability of the contained information, becomes a plus when I don't want the information to be printed (because it's evolving too fast).

Clearly, meta-information can be attached in many ways, but sometimes implicit meta-information is a more efficient way to carry it.