Thursday, December 11, 2008

Italian Domain Driven Design Group Launched!

In the last few days, I've realized that there are many more people in Italy interested in Domain Driven Design than I expected. This is good news :-)
I've also realized that there was no specific place to talk about this. Discussions on this blog are welcome, and the relationship with the agile community is tight, but after discovering that there is a pretty active DDD community in Sweden, I thought: "Hey! We deserve a DDD community as well!", so here we go. I've just started the Italian DDD mailing list on Yahoo Groups. Anybody interested in sharing experiences and discussing Domain Driven Design related issues is warmly welcome.

Here is the link: http://groups.yahoo.com/group/DDD-IT

Tuesday, December 02, 2008

Does agile add costs?

Another interesting question that popped up during my "Agile: the B-Plan" presentation at IAD2008 concerned the different cost schema of an agile process compared with a traditional one. Somebody asked: "How can I sell an agile process, given that I need some kind of presence on the customer site, and this will trigger extra costs for travel and hotel expenses?". This sounded like "if you want an agile process, it will cost more". Strong objections arose: "This is not truly agile", which is true; just adding a communication channel towards the customer doesn't make you agile. But I think the problem is a little deeper than that. I'll try to elaborate.

Selling the process
Does it make sense to sell your development process? I think it doesn't. As software companies, we are expected to sell results, not the means we use to achieve them. Is agile better at achieving results? Great, that's our business.

Unfortunately the last statement is not entirely true: agile does not work without customer collaboration in place. So the customer must somehow be aware that starting an agile project involves some commitment (dedicated time from key people) from the organization. Not all organizations are equally ready for this type of engagement, so you might find yourself desperately looking for some kind of feedback, and this feedback might arrive only close to the deadline. Ouch, we're back to waterfall.

The myth of the on-site customer
XP approaches the customer collaboration problem by introducing the on-site customer role. Well, in practice this is not likely to happen, for dozens of reasons. So a negotiation phase is necessary to establish a communication channel with the customer, as well as a tuning phase to ensure results are good. This is an area where simply "playing by the book" will lead you nowhere: you'll need to understand why you need this type of communication, how you can achieve it, and what the shortcomings and risks of the agreed trade-off are. And both sides must be aware of that. Scrum places much of this responsibility on the Product Owner. The efficiency of the communication channel between the team and the PO is critical to project success.

Frequent feedback loops are necessary for both sides. The team needs feedback to know they're developing the right thing, and the customer needs to see how things are going, to reduce project risks. Risk management is probably one of the most powerful keywords if you still want to sell an agile process: talking about risk management sounds a lot better to management's ears.

Project specific cost schemas
However, there are no dogmas, even in this area. Face-to-face communication with the customer is a lot more efficient, but it does have a price. Sending two people to the customer's offices for a day or a week has a cost, which varies a lot depending on the project's specific constraints. In Italy, hotel prices have gone up in recent years, while IT salaries have gone down. Traveling to other countries might be really expensive. In some circumstances, the cost of knowing that I am doing the right thing might be higher than the cost of "learning by mistakes" (but remember: there are some vicious hidden costs to consider on that side). Unfortunately, it almost always takes the right person in the right place to understand exactly what we are missing. So the only way to know whether we can safely reduce costs by having a less efficient communication channel towards the customer is to have an efficient communication channel towards the customer. Otherwise we won't know what we're actually missing.

Getting back to the title question: I think agile processes have a different cost schema (you save a lot on printed paper and pay more for post-its...), but are generally more efficient overall, even in fixed-scope, fixed-cost projects.

Friday, November 28, 2008

Agile Job Trends

The graph at indeed.com says a lot about the reasons why there is so much debate within the agile community...

Thursday, November 27, 2008

The dark side of self-organizing teams

Agile methodologies focus on the notion of self-organizing teams as a key to software development success. This works for a lot of reasons: talent is not constrained to follow a pre-defined process, the process is adaptive and tailored to the individuals, and so on.

But it looks so beautiful and simple in books, and is so hard to achieve in reality. One reason is that there are two main patterns for self-organizing team creation:
  • already being in one
  • spontaneously creating one
Penguins are self-organizing. Males stay in the middle of the Antarctic continent for months with an egg on their feet, keeping each other warm and discussing politics and soccer. At springtime the female penguins arrive back from their shopping and fishing just in time to start feeding the newborn baby penguins. Young penguins follow the group habits without questioning them, and the ritual goes on and on like this, year after year.

Creating a new group involves discovering a common passion; it's just like starting to play baseball in a small town. Maybe two kids are just playing with a ball and a bat. Another one joins the two, same day same hour, and later on they discover they're enough to form a real team. There can be some key moments in the growing phase, but many times the original vision is shared by the majority of the team.

What is really hard is trying to transform a team that's always been directed from above into a self-organizing team. There are so many things that can get in the way that it is often better to start with a newly formed group instead of turning an existing team into a self-organizing one.

...not necessarily good
But the notion of a self-organizing team is not necessarily good in itself. Self-organizing teams can be pretty nasty (the Mafia, Al-Qaeda and the Ku Klux Klan are all examples of self-organizing teams, as this interesting post says). But the question is not only tied to team ethics; it's also related to what the team can do to achieve its goals, which involves powers and responsibilities.

If you follow soccer a bit you'll probably know the story of Antonio Cassano. One of the most talented Italian players, he played for AS Roma for a while. After some initial success, his role (not his skills) started being questioned. AS Roma sold Cassano to Real Madrid, and right after that they put together an impressive record of 11 victories in a row. After the embarrassing parenthesis in Spain, Cassano is now doing fine again with Sampdoria. This is a clear example of how a team can improve as a team by getting rid of some talent.

Talent is hard to manage (and this could be the subject of a dedicated post), but the key point here is that the team might decide to drop team members who don't share the same values as the rest of the team. You need to grant this right to the team, otherwise shared vision and behavior will never emerge, but you'll also need to be prepared for the consequences, as a manager or as a team member. And the consequences can be pretty nasty: like telling a colleague that he/she is not welcome (maybe with some "unless...").
One key point is to make sure that the most influential team members have a positive influence on the team. If you can't guarantee that, you're probably doomed, right from the start.

Wednesday, November 26, 2008

So ...where’s the fun part?

I have the feeling I killed somebody's dream in my last post, and also that I'm walking on the edge of being misinterpreted. I think I've got to blog about it again from a different angle.

I like developing software. I think it's fun. Even if I like solving complicated puzzles, I think the best part of software development comes from working within a team. I have some friends working for the Toro Rosso F1 Team (being from the same town makes it easy), and I know how everybody felt after winning in Monza. This is the kind of feeling a team should work for. Not every team can be so lucky, but ...you get the point.

Are we just executors?
So, are we just implementing a specification? Nope. If that were the job, you could pick some MDA tool and have it do the job for you. I just asserted that coding is a lot less creative than many developers think. Finding a proven solution (hopefully a good one) is often preferable to an original one. And "original solutions" often degenerate into in-house framework lock-in.

But being mere executors just doesn't work either. Merely following specifications is waterfall, and it's also a waste of talent. Such a scenario will eventually lead to low motivation and talent loss within your company.

Creativity might slip in when the puzzle is more complex than usual, or when attempting something new. In these cases, you have to fight with the creativity tools: get away from the workstation, have a (time-boxed) brainstorming session, involve somebody else, be provocative, avoid censorship. The key is to recognize which type of problem you're facing and to use the right tool for the job.

Learn and challenge
Another area of software development where you definitely need some creativity is interaction with users and stakeholders. Understanding the problem domain is an interesting activity, which will lead you into unexplored areas of knowledge. Understanding the way users are interacting with your application, and the reasons for that, might lead you to propose some features that might come in handy, or make the difference!

Software development teams are often a concentration of pretty smart minds. Pretending that the only focus is code wastes talent and, often, makes a mess.

Tuesday, November 25, 2008

The alibi of creativity

One of the most intriguing parts of the discussion during my IAD2008 talk was a question (interestingly, asked by one of the few non-developers) about the possible negative effects of time-boxing (especially in the form of the "pomodoro") on a creative activity such as developing software.

I quickly offered my opinion, but I was more interested in letting others' opinions on the topic emerge. Then I had some time to think it over: here are my thoughts.

Coding is not a creative activity

I mean ...there's creativity involved, but most of the time we are solving problems in ways that have probably been explored by many others before us. So, basically, coding is a problem solving activity. Creativity slips in when the problem to solve is new, unconventional or pretty hard, or does not strictly involve coding (in this respect, the process might be far more creative than the coding). Sometimes we are lucky enough to deal with pioneering applications, but many more times this is not the case.

This doesn't mean that we have to be mere followers of somebody else's ideas. But "being creative", or pretending to be, too often has the undesirable side effect of "reinventing the wheel", which is definitely not what we want. Put another way, "being creative" is often just an excuse for being "too lazy to study".

Time-boxing does not constrain creativity

Ok, there are times when the strict time constraints imposed by a time-boxed activity (a pairing session, a pomodoro, a day or a sprint) are not enough to allow for the "inspired" solution. As Simone Genini pointed out, "as a manager, you don't want to wait for developers' inspiration". So what you need is a repeatable attitude towards this problem solving activity.

Back home, I realized that this happens in creative fields too: comics are written on a monthly basis (even if not every author can fit the schedule), ad campaigns are creative but organized as time-boxed projects, and so on.

Even in software development, the time-box constraint is less stringent than it might seem: if one of your best developers comes up with what he considers a sub-optimal solution, something he doesn't really like, you can be sure that he'll keep thinking about it, and will probably attack the problem again once a better solution appears to him in his dreams or in the shower. Simply allocating more coding time to that task probably won't get the solution any closer.

Monday, November 24, 2008

Back from the Italian Agile Day 2008

It's been a nice and hard day. I had two presentations to lead, and I wasn't in great shape. But I think the overall result was not that bad. I just finished uploading the slides to SlideShare (they'll also be available on the Italian Agile community web site). Here are the links.

Agile: il Piano B

Agile vs SOA deathmatch

Anyway, many ideas emerged in the related discussions... I guess I'll blog about them for a while.

Wednesday, November 19, 2008

Getting Domain Driven

The last few weeks have been especially busy, so I wasn't able to post that much. This doesn't mean that nothing happened to me in the meanwhile...

In September I attended Craig Larman's Agile Project Management training, which I am supposed to take over in 2009, together with Allan Kelly, for Skills Matter.

As if that weren't enough, in October I got certified for Eric Evans' Domain Driven Design class, which I co-taught just last week, together with Gojko Adzic and Eric Evans himself. Like the previous experience (where I acted as an active student), this one was really pleasant and interesting at the same time. All of the people attending were somewhat special, and the questions were really interesting and let us get quite a bit of their unique perspective too. When classes are that interesting it's hard to say who learned most, but going home satisfied after hard work (and a couple of beers) is definitely rewarding.
Also, together with Hans Dockter, who'll be joining us as a trainer in 2009, we started collaborating on the training material and structure in a very open and collaborative way. It's great to be part of such a team!

At the same time I started beating the drum for DDD in Italy too, with my first article on the subject, which was published by Mokabyte some days ago. Since DDD is one of the IT-related topics I have found most interesting over the years, I am pretty excited about the whole matter. By the way, writing about DDD is also extremely challenging. Eric carefully weighed every single word in his book, which is actually a careful distillation of DDD principles. My writing style is basically the opposite: I write a lot, I digress, I make a lot of (hopefully) funny examples just to make a small point. Maybe I'll be able to deliver the message, maybe not.
I am eager for feedback anyway ;-)

Tuesday, November 18, 2008

The scrum mother - re-explained

Honestly, I never thought my last post could be that controversial. But I got a lot of diverging feedback: some loved the post, some demolished it and told me it was terrible, and some others liked it but got the meaning the other way round... Maybe it is necessary to clarify things a little bit.

I am deeply convinced that authority gets in the way of being a good Scrum Master. The SM is not a leading role in Scrum: he/she just preserves the integrity of the process, without effectively taking part in it. The SM may participate in a discussion, but must not make any decision; the SM must ensure that the discussion ends with a shared decision.

That's why I came up with the analogy of the Italian mamma and the old-style Italian family. The father has the authority and brings the money home (hopefully); the mother manages to keep everything running. Preparing the food does not deliver any value to the outside (unless you own a restaurant), but it allows family members to be healthy and do their work.

But I guess the metaphor may have turned out weak, because I was referring to a very specific type of family, and every single reader had his own idea of a mother, so the concepts overlapped, diverging a lot from my intentions.

How to raise kids
Probably, the flaw of the mother example is related to the different perceptions of the mother's role within a family. I'll explain what I meant, going straight to the SM role, to be as clear as possible. SMs are not supposed to prevent developers from hurting themselves. SMs are supposed to allow the team to grow, by letting them make and recognize mistakes, by allowing them to develop self-confidence and do things on their own. You must be around when your kids are learning to ride a bicycle, but if you keep holding them, or force them to use those small training wheels ...they'll learn later. The same applies to swimming, where confidence is just about everything. If you are always around, they'll start crying the first time they're alone. The result: delayed growth, lack of confidence, and a lot of time spent by senior management just "being around".

Friday, November 14, 2008

The Scrum Mother

It started out as a typo, then it became a joke, then I began thinking that there might have been something interesting behind the joke.

One of the common pitfalls when applying Scrum is choosing the wrong person as Scrum Master. It turns out that this is quite a common situation, for many reasons. A company willing to adopt Scrum will probably select some of its best people to take a Scrum Master Certification or a generic Scrum training. Before knowing what Scrum is about and what the Scrum Master role is responsible for in practice, the selection criteria the company adopts will very probably be based on its own traditional system of values, which does not match some of the key values of Scrum itself.

Despite the word "master", a Scrum Master is not given any real "power" over the team; an SM is not supposed to decide anything. It's a supporting role, which can be rotated among different members of the team. Choosing the project manager or the team leader as SM simply won't work, because old habits and roles will keep sneaking in, preventing the Scrum magic (making the team self-organize efficiently) from happening.

A different set of qualities
Software companies often evaluate their employees in terms of technical skills, problem solving or leadership attitude. These are not really the most interesting skills for a successful Scrum Master. A good Scrum Master governs the process but does not take an active role in technical decisions. So being the best coder in town does not make you the best candidate for the role; instead, it probably stands in the way.
The qualities needed are probably more related to a feminine approach than to the typical testosterone-driven competitive approach of many coders. Unfortunately these qualities do not always show up in individual performance evaluations, because they relate more to overall team behavior than to individual behavior.

The traditional Italian team
It turns out that in Italy there is a quite common type of team which is ruled by a similar set of practices. It's called "family", and the mother - the "mamma" - plays a key role in keeping everything working.
Meetings (called lunch or dinner) happen every day at a given hour. Every member of the family is supposed to show up at least right before then.
Mammas' efforts are generally targeted at meeting preparation: they prepare the food, fetch the ingredients and set up the meeting room.
Mammas generally check everybody's look and mood, to ensure everything is right. If, for any reason, something is not right, there's no way to escape a one-on-one meeting about it.
At the end of every week there is a larger lunch involving members of other teams (generally called "relatives"). A little more ceremony is involved, but the basic idea is to keep all the teams in sync.
Mammas work hard, but many times they don't bring home any money.
If fights arise between team members, mammas act as peacemakers, preserving team harmony at any cost.

Ok, I guess the last part sounds like a lot of stereotypes... it is. In many places this type of family no longer exists. Also, this is at odds with women's rights, so I need to make it clear that I am not advocating this type of family. I am merely trying to have some fun pointing out that some of the key Scrum practices are not new at all; they're probably patterns that existed long before. Some of these patterns can be found in the traditional Italian family, and they helped preserve the team's unity and sense of membership for ages.

Tuesday, November 11, 2008

SpringSource acquires G2One

A little bit sudden; thanks to Nicola for pointing me to it :-)

Here's the official news

Monday, September 29, 2008

Italian Agile Day

The date for the Italian Agile Day 2008 has finally been set! Needless to say, I am happy it will be in Bologna again. :-)
See you there!
Italian Agile Day

Wednesday, September 24, 2008

Me and MSProject

Yesterday evening I attended Craig Larman's brilliant talk at Skills Matter, in which he sharply criticized some common dysfunctions of traditional project management, and doomed MS Project and Gantt charts as useless and dangerous in software development.
I couldn't agree more with him, so imagine my surprise when today I discovered that my naymz page was actually sponsored by MS Project!

Needless to say, I am quite disappointed. :-(
I know it's AdWords driven, and that some keywords in my profile triggered the ad. But...

Tuesday, September 16, 2008

Why should I eat dog food?

I was reading the interesting article that my friend Mirco Casoni wrote for Mokabyte. He basically reviews and criticizes the idea of frameworks and the unfortunate implementations of the framework concept he has happened to see as a consultant (implementations which I probably know very well too, since we were colleagues for years).

One of the main points touched on by the article is the long-term ineffectiveness of in-house frameworks. A solution developed internally to bridge a gap in standard implementations later turns into a drag factor: the short-term advantage becomes a burden of legacy code, and so on.

The old java legacy
One of the reasons for "the rise of Java frameworks" at the very beginning of this millennium (sounds epic, doesn't it?) was the low level of abstraction of the Java specification itself. JDBC, EJB and J2EE Web came out of the box without a high-level implementation. Sun accompanied the technology with publications such as "Core J2EE Patterns", which used to be a bible in those years. It really sounded like "here is a bunch of small Lego pieces that you can put together to make ...everything! And here (but not in the same place) are the guidelines to make something useful and working properly".

Unfortunately this is not what many people wanted: people wanted an efficient and solid architecture, easy to set up and reuse. But J2EE offered too many choice points (how to implement MVC Model 2? how to deal with JDBC exceptions? how to make EJBs perform?), and having the right answer to all those questions was a tough job. So frameworks were seen as a way to formalize and distill the body of knowledge captured in the various patterns books, and to prevent the rest of the team from making awkward choices.

Later on, the implementation of the Java platform got better: old EJBs were dropped, many related EJB patterns sank with them, and JSF finally looked like an official standard implementation of a Model 2 MVC framework. It has to be noted that this happened after Spring had gained momentum, and after the MVC landscape had become overcrowded...

Put this way, there were tons of reasons to build frameworks in those years, because there were holes (or craters) to be filled ...somehow. Unfortunately, some solutions were merely better than nothing, providing a short-term advantage but rapidly transforming into a legacy nightmare.

If you love somebody, set them free
Some in-house frameworks look definitely naive now, but they looked like (and probably were) a reasonable solution at the time they were designed (well ...not all of them). Specific design decisions apart, what really makes the difference between a friendly framework and a "traitor" is the cost of getting rid of it. Vendors design frameworks with a very high lock-in potential (it's part of the game, on the business side), but in-house framework designers often make the same mistake, with no real good reason. Depending on a third party is generally seen as a risk. Depending on yourself is often a higher risk, but one that is seldom considered.

Documentation quality and availability (as Mirco points out) are other key factors that should make you favor a standard solution over your in-house framework. We used to have a good MVC solution, but switched to Struts as soon as it became a viable option, and so on. Few companies are large enough to afford producing documentation for an in-house framework (one of the reasons why they prefer to sponsor open source projects), so you have to be ready to leap from your solution to the promising open source one, sooner or later.

A framework fills a gap. This means that there is no standard solution for the given problem (otherwise, a framework would probably already be available), and it also means that the in-house solution might be non-optimal. This is fine; we'll probably get more information about the optimal solution by implementing ours. What we should know in advance is that we should "design for replacement", meaning that the primary requirement of our framework design should be that it is easily discardable, thus limiting the pervasiveness of our solutions. A rough sketch of what this can look like follows below.
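Just as a hypothetical illustration (all the names here are made up): the application owns a narrow interface, and the in-house framework is confined behind a single adapter, so discarding the framework later means rewriting one class instead of the whole application.

import java.util.Map;

// Owned by the application: no trace of the in-house framework here.
public interface TemplateRenderer {
    String render(String templateName, Map<String, Object> model);
}

// Made-up stand-in for the in-house framework's entry point.
class AcmeTemplateEngine {
    String process(String templateName, Map<String, Object> model) {
        return "<rendered " + templateName + ">"; // placeholder behavior
    }
}

// The only class that touches the in-house framework: replacing the
// framework means rewriting this adapter, and nothing else.
class InHouseRendererAdapter implements TemplateRenderer {
    private final AcmeTemplateEngine engine = new AcmeTemplateEngine();

    public String render(String templateName, Map<String, Object> model) {
        return engine.process(templateName, model);
    }
}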

Often, this is not a technology issue. In-house frameworks have "fathers" who make it difficult to replace them (more or less consciously or willingly), or who simply slow down the necessary decisions. Can you be in love with a piece of code? Some people are.
   
Which framework now?
Does it still make sense to think about frameworks? Well, it largely depends: application frameworks such as Spring, Hibernate or Grails offer an off-the-shelf solution to many common architectural problems, leaving architects free to concentrate on application-specific domain decisions. On the other hand, they're all domain-neutral, and a domain-specific framework could be of great value to a product company in the mid-term, especially if it is designed to be technology-neutral.

Friday, September 12, 2008

Farewell Vista


I am finally getting rid of Windows Vista. My transition to Mac OS X is almost complete (some folks said that Vista is Steve Jobs' ultimate marketing tool).

There are still a couple of open issues (I have tons of valuable material in Word 2007 and PowerPoint 2007), and I am tied to the new fonts that Microsoft added in Vista and Office 2007. So it looks like I am moving on while keeping all the Office 2007 stuff (which, by the way, is an excellent tool) and dropping Vista (which, by the way, is crap) and all the related performance and productivity drag. Just to summarize, here is one of the reasons why I quit.


Thursday, September 11, 2008

Agile SOA Presentation slides posted on Slideshare

I just posted the slides from my in-the-brain session @SkillsMatter on slideshare. If you want to have a look, just follow this link.

Wednesday, September 10, 2008

Sustainable Software Architecture

A software development project is a bounded activity. One of the key goals of Software Architecture is to find the best trade-off (or the sweet spot) for a given project ecosystem.
Such an ecosystem is the result of many combined factors, such as:
  • project size
  • team size
  • team location (e.g. co-located vs. distributed or offshore teams)
  • team skills (experienced developers, contractors, folks just hired, and so on)
  • team members availability (not all teams are well-formed at the time the project starts)
  • architect availability
  • turnover
  • logistics
  • marketing constraints
  • deadline constraints
  • etc. etc.
I've learned that keeping software architecture clean and insulated from all this is a dead-end road. Those ecosystem constraints affect the way a project is carried out, and they will also affect the optimal architecture for a given project. Put another way, there's no optimal architecture for a given problem, unless you factor all the variables in.

As Kevin Seal pointed out at last week's Skills Matter event, "Architecture is about things that are expensive to change", and in the open source era, the most expensive things to change are time (which can't be reverted) and people, who normally have a long and inefficient learning cycle.
This makes choices like "choosing the implementation language" expensive ones, because despite the free availability of development environments for different platforms, team skills might have to be built up.

Make the architecture fit the ecosystem
Answering questions like "What's the right tool for a given load requirement?" is a non-trivial job, but it's the one we're (hopefully) prepared to do. It's right in the software architect's mindset.
What I've found trickier is the need to define just the minimum affordable level of architecture for a given application. In an ideal world we would like to have the most reliable architecture, making coding easy and meeting all of the nonfunctional requirements. We would also like all the team members to be familiar with it, understanding the role of every architectural component and the reasons they've been implemented the way they are.

Too much of a dream. Architecture definition is often a time-bounded activity as well. For a consultant, the Architecture Specification Document might be a deliverable mentioned in the contract, so even if there are more efficient ways to deliver architecture information (podcasts, meetings, comedies, tattoos ...just to name a few), a document must be prepared. But the information it contains must be delivered some other way...

Pairing with programmers, or simply coding, are great ways to get a grasp of coding reality (meaning that architects might learn something really useful from the way the architecture and the coding tools are used), but also a bit naive when the matter is "how to deliver architecture information". Architects have (I mean, they must have) a broader scope than developers, taking into account long-term factors, while a developer is generally focused on a problem that has to be solved now. And developers are not all the same. Seasoned veterans and college guys have peculiar skills and interests, and completely different backgrounds.
Moreover, properly training the team might be a goal for a company investing in its own development team, while it might not be a viable option for an external or contractor-based team. This might sound a bit cynical (it is), but even if I prefer teaching (after all, it's part of my job) as much as I can about the architecture in place, I have to face the fact that the information will flow over a limited-bandwidth channel: there will not be as many chances as I would like to discuss the architecture, there will not be as many questions, and developers might not be that curious, after all.

Finding the sweet spot
Generally, I think the ideal sweet spot is doing as much architecture as can be managed by the team (but I usually find out where that point is by overshooting it). This is not a fixed line. A good team will probably ask for a more elegant architecture as the project advances, pushing (or pulling) architecture evolution. In some critical situations, like long projects with a lot of turnover, the architecture should be robust enough to prevent developers from harming themselves, keeping the learning curve as short as possible. Some more work for the architecture team, but usually a bit sadder.


Thursday, September 04, 2008

How much Architecture is Enough?

During my London stay, I also had the chance to attend Simon Brown's in-the-brain event, which turned out to be more two-way than mine, with Simon softly facilitating a really open discussion. As is often the case, interesting ideas emerged. Here are some of my side notes, mixed with some of my subsequent ruminations.

Communicating the architecture

One of the biggest challenges seems to be finding the right way to effectively communicate the architecture. It turns out that architecture documents are not the best way to convey architecture information; alternative and possibly more creative ways are needed, spanning from wikis to podcasts to live sessions. Honestly, the Google Chrome comic is a great way to deliver an architectural view. But maybe it's not a viable option for ordinary teams.

Also, a prototype can carry a lot of information and is particularly efficient in edge situations, where the team is not properly trained or skilled and needs to produce some code anyway. Bad as it may sound, even cut-&-paste information propagation (the worst-case scenario) might be preferable to paralysis or anarchy.

Championing the architecture

Without an active championing role, the architecture is at risk of being twisted or thwarted; without one, an architectural specification is just a piece of paper.

Non Functional requirements

Managing non-functional requirements seems to be a hot spot for many agile teams: such requirements do not naturally show up as user stories, so their emergence has to be facilitated somehow. Also, automated tests on non-functional requirements are necessary.

Strictly tying delivered software to business value might eventually push non-functional requirements aside, but a fully operational, passing load test delivers both business value and confidence.

Resisting temptations

A pragmatic strategy to decide whether an occasional need should be turned into an architecture requirement is to count the number of times you've been hit by the need to refactor a given thing, with 'three' as a rule-of-thumb threshold. Before reaching the threshold, one should "resist the temptation" to refactor the code towards a more elegant solution.

Personal note: it's interesting to notice that this works only in teams with an efficient internal information flow. If the development process does not facilitate this sort of information exchange (e.g. no daily scrum meetings), your count might reach '20' before the need for a given refactoring is communicated to the architect-in-charge (whatever that means).

Sustainable architectures

I also advocated the cause of "sustainable architectures": putting too much stuff into an architecture definition might prove valueless if the team isn't able to grasp it. Sometimes it's better to keep everything as simple as possible, or to step back and start with a less sophisticated solution, and invest in training or in improving the team's technical skills (or the architect's communication skills) before building an obscure cathedral.

Deferrable architectures

A key point is that architecture is about things that are expensive to change in software applications. Interestingly, an overly sophisticated architecture (architects love to create frameworks) is a more frequent problem than the need to heavily refactor an existing one. Honestly, I think that in many cases simply defining a clean layering might be enough to allow a clean solution to emerging architectural requirements.

Personal note: also, the cost of change has dropped considerably. If we compare the heavyweight frameworks that were popular some years ago with what can be achieved with Spring and AOP in terms of pluggable, orthogonal requirements, it's clear that the scenario has changed radically, at least in the JEE landscape.

Should architects be part of the team?

It turns out that architects sit on the borderline of the team. There are architects who are definitely not in the team (and absolutely do not code), and some who are more willing to pair with developers, improving team skills and gaining feedback on the correctness of the architectural choices. At the end of the day, developers are the architecture's primary users, and, as an architect, I would really want my choices tested and challenged by real feedback. On the other hand, an architect should preserve the ability to think long-term, so he can't be fully involved in coding, especially in tasks on the critical path. Moreover, architects might be external roles, hired on a short-term basis or with reduced availability, so being part of a fully involved Scrum team might be problematic. Doing Scrum by the book, the Team might ask for external architecture support for as long as the team thinks they need it.

Project start effect (personal note)

As a consultant, I might have a slightly biased perspective, but many times the architecture definition has just been the result of a "resource allocation incident". Sometimes the team was already set and ready to code, so there wasn't really much time to have them wait for a good architecture specification before starting to code. Sometimes I joined a project when development had already started, so I had to impose an architecture over a (barely) existing one. Finally, some other projects would not even start before an "architectural view" document was developed, as this is part of the project budget negotiation by management (SOA is a typical example); so when the project starts you have more or less a "specification" already in place. In other words, the early days really influence the whole project direction, and this rarely has to do with architecture per se.

Tales from my first in-the-brain session

Right after my long-awaited holidays, I rolled to London as part of my recently started collaboration with Skills Matter as a trainer. As a side activity, I had the chance to hold a so-called "in the brain" session, giving a presentation about how to make agility work in a typical SOA scenario.

I must admit that I am quite happy with the result: despite a couple of nights spent fine-tuning the slides, with my laptop in miserable shape (I had no battery left, and the 'o' key was activated by pressing 'p' and 'i', meaning that when I typed "pippo" the result on the screen was "poiopopoo"), everything went quite smoothly and I definitely had a good time. Also, the pub follow-up was largely appreciated.

By the way, if you're interested in the content, a podcast is available on Skills Matter website.

Friday, August 08, 2008

Maven 2 Integration for Eclipse

I finally had some time to play with the latest Eclipse version, named Ganymede, and the recent version of Maven2Integration, the Maven plugin for Eclipse. Here are some impressions about it.

Installation

Installation, by the way, didn't go as smoothly as I expected. I pointed to the M2Integration update site, but Eclipse plugin dependency management is still messy. I got warnings that I couldn't install M2Integration: part of the problem was related to integration with Subversion (which is an optional package, but I wanted it), so I had to install the Subclipse plugin, pointing to the Subclipse update site, even if Ganymede isn't yet mentioned among the supported versions… the risk was worth taking :-)

I still had a couple of open issues: some packages related to Zest couldn't be found (strangely, this happened only on my Windows machine; on Mac OS X it didn't, but I guess it's unrelated to the OS). Those are part of a graphical library needed to draw fancy dependency graphs, so I had to go to the GEF update site to install them as well.

The last missing piece was support for AJDT. Well, I probably won't need AJDT right now, but I preferred a complete installation to remembering that I had left some pieces behind. So I also added the Ganymede-compatible AJDT update site to my list. Then I was ready to install and run the M2Integration plugin.

Impressions

It looks like Maven support within Eclipse has reached an interesting stage. The plugin now assists throughout the whole of POM management, and is pretty well organized. It interacts nicely with Eclipse as the default build engine. There are still a couple of places where it might be more helpful: I typed a space in the artifact ID (my mistake, but the plugin should also help Maven newcomers…) and got a NullPointerException. And while selecting the archetype, the choice was huge, with no contextual information, so it looked somewhat scary. Anyway, the overall impression is pretty good: it's definitely a useful tool if you're using Maven within Eclipse, and I really liked the dependency graph.

Here is a screenshot.




Wednesday, July 30, 2008

Did we get it all wrong?

I've had some thoughts bumping around in my mind for quite a while; then somebody on the Italian XP mailing list pointed me today to this post by Jay Fields, urging me to blog about it. The main topic is the distinction between two different species of software professionals: hackers and developers. You might have seen this distinction under different names in the past ("The Pragmatic Programmer" is another name for the same god-like figure), but the main point is that hackers are brilliant at solving technical problems, while developers are better at interacting with stakeholders to understand what the problem is. The bottom line is that hackers are at higher risk of developing brilliant solutions to non-existing problems, while developers are better at ensuring that they are trying to solve the real problem.

Breeding developers

Ok, we need both species. Are we good at breeding them? I'd probably say "nope", at least for developers. From what I can recall, our school system is largely interested in the technical side of software development training. So we're taught computational theory, operating systems, programming languages, algorithms and the like. Most of these disciplines are oriented towards an individual problem solving approach. We learned that some sorting algorithms perform better than others, as do some data structures and architectures. Most of the exams were defined in terms of finding the solution to a given problem. This couldn't be further from the real world, because what is really difficult is to define and scope the problem. But to define and scope the problem you need to interact with more and more people, many of them without a strong technical background, who will tell you valueless information, lies, bullshit, crap, and …a tiny slice of the truth, which you'll have to pick up and transform into the foundation of your application.

Unfortunately, this is an activity that involves a completely different part of the brain from the one used for coding. It involves social skills and the ability to successfully interact with different people. It involves the ability to start swimming (each time) in completely different waters, to learn about a problem domain we know just a little about. And …we were not prepared for it. Moreover, we were somehow chosen for not being good at it. I am consciously talking in stereotypes here, but the "computer guy" (or whatever we were called when we were teenagers) is the kind of person who likes the it's-just-me-and-the-machine approach, or who doesn't find it strange to spend a whole night awake playing World of Warcraft or trying to fix a bug. Somebody else might be having a beer in the meanwhile… I am not saying that computer geeks do not have social skills; what I am saying is that social skills are a largely underestimated component of what makes a good developer (you can still call it "social engineering" to attract hackers to the discipline, by the way), and that you can't expect to find these qualities so easily.

Thursday, June 26, 2008

SOA without ESB?

I've just watched this presentation given by Martin Fowler and Jim Webber about SOA. It's both brilliant and controversial, but definitely worth a look.
More about this will probably follow... ;-)


Monday, June 23, 2008

Back from the Scrum Master Certification Class

I'm just back from a Scrum Master Certification class, taught by Craig Larman, author of "Applying UML and Patterns". I have to admit I am pretty happy with the outcome: even if I was not new to many of the topics presented, the course used the books as a starting point and covered many issues related to setting up Scrum, and common pitfalls that might be encountered along the way.

As often happens, the trip is more interesting than the destination: reading a book, one may get the idea of Scrum and agile as a green field. Well, it might be so …once you manage to get there. But getting there might actually be a trial-and-error process: one might have to answer questions which are not answered in the books, and there is no dogmatic or one-size-fits-all answer. The trickiest ones are definitely "questions you shouldn't answer", meaning that just asking the question is wrong; it takes a lot of confidence and guts to answer "Your question is not meaningful" or "I must not answer this question", especially when the person asking is your customer or your boss…

As always, a side effect of these trainings is that interesting people gather in the same place, so interesting conversations might happen during the coffee breaks.

Tuesday, May 27, 2008

The dark side of user stories

I've always had twofold feelings towards user stories. Despite being critical of many drawbacks of the Use Case approach (the one I hate most is the time wasted filling in a template with the same useless information repeated over and over), I think that user stories have some limitations that a team willing to turn agile has to be aware of.

Interlude N°1

Seven years ago, a younger me was collecting requirements for a small application that was supposed to handle the operations required to set up a secure banking account. Every person interviewed told me a different story in terms of "special cases" (the whole story seemed a collection of ifs…), and the resulting workflow seemed messy and impossible to tame. I tried a different approach and started modeling states instead of actions. It turned out that the workflow was pretty simple: after realizing that there was no before-after correlation between some actions, a loose workflow could have handled the whole thing with two screens and a bunch of checkboxes. A sketch of the idea is below.
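Just to illustrate the state-based approach (everything here is hypothetical, reconstructed for the example): each prerequisite becomes an independent state flag, with no ordering between them, and activation simply requires all flags to be set.

import java.util.EnumSet;

public class AccountSetup {

    // Hypothetical prerequisites; the real application had more of them.
    public enum Prerequisite { IDENTITY_VERIFIED, CONTRACT_SIGNED, CREDENTIALS_ISSUED }

    // No before-after correlation between actions: each state is tracked
    // independently, roughly one checkbox per flag.
    private final EnumSet<Prerequisite> completed = EnumSet.noneOf(Prerequisite.class);

    public void mark(Prerequisite p) {
        completed.add(p);
    }

    // The account can be activated once every state has been reached, in any order.
    public boolean readyForActivation() {
        return completed.containsAll(EnumSet.allOf(Prerequisite.class));
    }
}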

Interlude N°2

JAOO Conference 2006. Jeff Sutherland talking about Scrum to a captivated audience, saying something like: "If the product backlog is not ready, I give the whole development team a day off." Silence. People dazed and confused. "There is no chance that any line of code written without knowing its purpose will turn out useful. Instead we'll have to pay for writing that line, for testing that line, for refactoring that line and for convincing the team that this line shouldn't be used." Pause. "The other reason is that there is no better way to make management hurry up and finalize their decisions than giving the whole development team a day off." I liked that.

Interlude N°3

A business application. Very complex domain. Requirements were heavily tied to the Italian fiscal system (which means that the application should mimic the laws, but the laws are not meant to be understood; moreover, many of them are logically flawed - but this is a whole different story, which can't fit here…). It took the lead analyst a couple of weeks to nail down a specification for one of the complex use cases, and he was working together with the domain expert every day of the week.

Back to our story

One of the key points of user stories is that they act as placeholders for the real specification. Instead of writing a detailed spec, structured like a Use Case document, some months in advance, a user story provides the minimum amount of information required to identify the story, estimate it, prioritize it and relate it to other stories. Before development of a specific story starts, story cards serve basically as a low-fi management tool (for estimation and prioritization), allowing the detailed requirement specification to be deferred to the very last moment. This way, requirements will be more precise (because they could have changed in the meanwhile, and because the whole team is more mature and expert in the problem domain) and will require less documentation, due to the shorter release cycle.

Unfortunately, there are some preconditions needed to make this magic happen. One is the availability of a domain expert. It's a key factor in many agile processes (not to mention the XP "cohabitation"), but it reads like a simple thing only in books. In reality, one might not be that lucky. Domain experts might not be so available (I agree that this is business suicide - don't get me wrong), or not so expert, or maybe they simply don't have the right perspective. Well …these are basically the reasons why the analyst role was invented. Forget what we saw in the "Rose years", when an analyst was a man who produced useless diagrams. An analyst is a knowledge cruncher (this definition comes from the DDD book): somebody who becomes able to master the domain like the domain experts do (or possibly better) and to derive a model from fragments of information.

Many times, user stories are seen as a way to leapfrog the analysis phase and the analyst altogether; a good design will "emerge" as a result of iterations. Well, …the use case of Interlude 3 was one iteration. But if requirements were collected by developers straight from domain experts (as in Interlude 1), the result could leave you completely stuck with crappy code for ages. Many teams do not suffer much from this limitation. Maybe they have rare beasts like developers with brilliant analysis skills, or analysts who really embraced the agile mindset and fit perfectly in this role. Sometimes, instead, the analyst is perceived as "the person who writes the specs", and since the specs are not that necessary anymore, neither is the analyst.

Hmmm … stormy weather approaching.


Friday, May 23, 2008

Cleaning up our test code – Part 2

Assuming that we've created our test objects in a few moves (please read the previous post about it), the focus now switches to the way we use test code to assert the correctness of production code. JUnit's assertion family relies heavily on equals-based assertions. Unfortunately, the equals() method is far from being used consistently, so equals-based testing has some dangers we need to be aware of.

The equals() contract

For the equals() method there is a general behavioral contract, defined in the Java specification, and it is used heavily by the Collections classes to retrieve objects from the different container types. As every good Java developer knows, overriding equals() requires us to adhere to that behavioral contract, and also to override hashCode() to ensure consistent behavior across container types. So, to effectively test our domain objects, we need to override both methods. So far so good.

There are also a few convenient ways to do this: Eclipse allows you to generate equals() and hashCode() from a given set of attributes. The resulting code is quite good, but it's like a grey area in your source code in terms of readability. The Jakarta Commons implementation is less "automatic", but provides the developer with better control over the final result.
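As a sketch of the Jakarta Commons builder style (the Customer class and its fields are made up for the example):

import org.apache.commons.lang.builder.EqualsBuilder;
import org.apache.commons.lang.builder.HashCodeBuilder;

public class Customer {
    private String name;
    private String taxCode;

    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof Customer)) return false;
        Customer that = (Customer) other;
        // Explicit control over which attributes take part in equality.
        return new EqualsBuilder()
            .append(taxCode, that.taxCode)
            .append(name, that.name)
            .isEquals();
    }

    public int hashCode() {
        // Same attributes as equals(), to keep the two methods consistent.
        return new HashCodeBuilder(17, 37)
            .append(taxCode)
            .append(name)
            .toHashCode();
    }
}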

Enter a new player

If you're using Hibernate to persist your domain objects, you'll probably know that this requires some attention to the way equals() and hashCode() are defined in your application. This is primarily tied to the Hibernate persistence lifecycle (which generally populates id fields upon write operations) and to lazy loading (some fields are loaded only if they're explicitly accessed inside a Hibernate session). The Hibernate recommendation is to define equals() and hashCode() according to equality of the so-called business key, which can roughly be mapped to "a unique subset of required non-lazy fields, excluding the id". Id-based equality should be managed only by Hibernate, while business operations should rely on an equals() method based on the business key. To purists, this sounds like an undesirable implicit dependency on the Hibernate framework (your POJOs are still POJOs, but not exactly the same POJOs you would have had without Hibernate).
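For instance, here is what a business-key implementation might look like for a hypothetical User entity, whose unique, required, non-lazy field is the username:

public class User {
    private Long id;          // surrogate key: assigned and used by Hibernate only
    private String username;  // business key: unique, required, never lazy

    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof User)) return false;
        // The id is deliberately ignored: two instances representing the same
        // user are equal even before Hibernate has assigned an id.
        return username.equals(((User) other).username);
    }

    public int hashCode() {
        return username.hashCode();
    }
}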

Equality as a business-dependent concept

So far, we have two separate equality implementations: id-based equality (which should be used only by Hibernate, behind the scenes) and business-key equality, which will be used in our business code and will be used implicitly if we use containers from the Collections framework. What should we use in testing? Unfortunately, there is no one-size-fits-all answer: the choice depends heavily on what we are testing and on what, precisely, the desired behavior of our application is. If we are adding information to some non-mandatory field, then business-key equality won't check it. If we're changing the value of a mandatory field, and want to check that this doesn't trigger the creation of a new id, we need to check that field explicitly.

Often, applications with a nontrivial domain model can't rely on a single notion of equality (are two BankAccount instances equal if they have different balances?). This is more or less clear during analysis, but the presence of an assertEquals() method in JUnit makes blindly using equals() so tempting…

Smarter predicates

Once we've realized that equality is too generic a concept to be applied blindly, the next step is to apply the right context-dependent equality in the appropriate context. The obvious way to do this is to decompose equality into attribute-level checks: so instead of having

assertEquals(a, b);

we end up with something like


assertEquals(a.getName(), b.getName());
assertEquals(a.getSurname(), b.getSurname());
// … you got the point

Which is longer, less maintainable, more tightly coupled, and … ugly. Most of the time, anyway, a business-relevant notion of equality doesn't show up only in tests. I would argue that 99% of the time the same equality is hidden somewhere in your code in the form of some business rule. Why not have the same rule emerge and be used in the test layer as well?

A good way to do this is to rely on the Specification pattern, developed by Eric Evans and Martin Fowler, which basically delegates to a dedicated object the job of asserting whether a given business rule applies to a domain object. Put another way, Specifications are a way to express predicates, or to verify expectations, on domain objects, in a way that could look like:

assert(spec.hasJustBeenUpdated(a));
assert(spec.isEligibleForRefund(b));
assert(spec.sameCoreInformations(a,b));

After thoroughly testing the Specification (a thing we should have done anyway, since it is a business implementation), we can reuse the same logic as an assertion mechanism in our test layer, making our code shorter and cleaner. Not all business-oriented assertions will be that useful in the test layer, but some normally are. As I said in the previous posts, one of the main goals was to be able to write a lot of tests, and to write them in a few minutes. Being able to rely on a higher level of abstraction definitely helps.
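To give an idea, here is a minimal sketch of such a Specification object (the class name and the Customer accessors are hypothetical): the same object backs both the production-code checks and the test assertions, so each rule is written, and tested, once.

public class CustomerSpecification {

    // Business rule: the core information of two customers matches,
    // regardless of optional attributes.
    public boolean sameCoreInformations(Customer a, Customer b) {
        return a.getTaxCode().equals(b.getTaxCode())
            && a.getName().equals(b.getName());
    }

    // Business rule: only active, non-blacklisted customers can be refunded.
    public boolean isEligibleForRefund(Customer c) {
        return c.isActive() && !c.isBlacklisted();
    }
}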

Friday, May 09, 2008

I thought I had an original idea....

... instead I discovered that the name Soap Opera Test had already been invented :-(. The good news is that no one blamed me or sued me for it. I discovered this by googling the term: I found a 2003 reference in this article by Brian Marick, who credits Hans Buwalda with inventing the term, as can be seen in this paper.

However, the meaning of the term is slightly different: the original meaning refers to a comprehensive test strategy, while I focus on the antipattern perspective. Maybe the emergence of the Soap Opera Test term is an antipattern itself...

Thursday, April 24, 2008

Cleaning up our test code

In the last posts about testing in a TDD fashion, I tried to dig into the reasons why many developers tend to write Soap Opera tests, which end up being less effective and maintainable than they should be. As I said earlier, tests share a common logical structure; I stuck to this one:

  1. Set up
  2. Declare expected results
  3. Exercise the unit under test
  4. Get the actual results
  5. Assert that the actual results match the expected results

In the popular essay Mocks Aren't Stubs, Martin Fowler uses a differently grained structure (setup, exercise, verify, teardown), but I'll stick to the original structure, at least for now.

Since "refactor" is the weaker step of the TDD mantra, I tend to keep a slightly different approach, trying to think at my tests in the cleanest possible way. There is normally not so much to do with step three – which is often no more than a single method call – but often a lot can be done to improve all the other phases.

Setting up the test

JUnit structurally separates the common set up shared by a family of tests from the specific set up or initialization needed for a single test to run. The former is placed in the setUp() method, while the latter generally sits at the beginning of the test method. A common practice is to place here the definition of all the Domain Objects needed to perform the operation to be tested. This is also a good place to check whether your application has been properly designed: creating objects for testing purposes should be a straightforward activity. As a rule of thumb, creating the necessary objects shouldn't take more than three or four lines of code. Does it sound thin? Let me explain.

Objects should ideally be created in one shot. In other words, you should have robust constructors available to create the domain objects without having to bother with setter methods. This might not be a completely viable option if you have a lot of optional attributes in your entities, which probably shouldn't be in your constructors anyway. You are definitely luckier if your architecture already has some Factory in place (such as the ones prescribed by DDD, by the way).

Complex object structures should be available off the shelf: if I need to test Entity A, associated with B and a specific C instance, and this is a boundary condition for my application, I want this combination to be readily available for any single test that pops out of my brainless mind. Ok, you can achieve a similar result with cut-&-paste, but … please … (ok, the official reason is that you end up slowing test development and increasing unnecessary coupling). An off-the-shelf test object approach fits particularly well with the agile style of personalizing users and typical entities: if I am developing a banking system and Fred is my average user with a bank account and a VISA, while Randolph and Mortimer are very rich people with many accounts, investments and so on, I want to have available in my test framework something like createFred(), createRandolph() or createMortimer(), to be used in many more short tests. Such convenience methods are particularly useful when business functions or business objects are complex, and such complexity ends up preventing people from writing the tests they should write.
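A minimal sketch of such a catalog of test personas, assuming hypothetical Customer, BankAccount and Card classes for the banking example (none of this is real project code):

// Hypothetical off-the-shelf test personas for the banking example.
public class TestCustomers {

    // Fred: the average customer, one account and a VISA card
    public static Customer createFred() {
        Customer fred = new Customer("FRD001");
        fred.addAccount(new BankAccount("IT00-FRED", 1500));
        fred.addCard(Card.visa());
        return fred;
    }

    // Mortimer: a very rich customer with many accounts
    public static Customer createMortimer() {
        Customer mortimer = new Customer("MRT001");
        mortimer.addAccount(new BankAccount("IT00-MORT-1", 2000000));
        mortimer.addAccount(new BankAccount("IT00-MORT-2", 5000000));
        return mortimer;
    }
}

Any test needing the "average user" boundary condition now starts with a single call to TestCustomers.createFred() instead of a page of setters.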

The worst-case scenario you might have to deal with happens when you have only the Java Bean empty constructor and a plethora of setter methods. Setting up complex objects will take pages, and that code is perfect humus for small bugs in the test definition. In addition, I will hate you :-). Test code might be just as buggy as production code, so writing the shortest possible test sounds like good advice, both from a robustness and a readability point of view. In general, creating objects for testing greatly benefits from the presence of dedicated Factories, and this should be taken into account when designing the application. Creating objects should be easy, because we want to do it over and over and over.

In Java, Spring helps a lot in managing the creation of other types of objects, such as Services, Controllers, Managers or DAOs. After all, Spring is like a "pervasive factory" that takes care of object setup and initialization. Typically, services are Spring-managed, while entities are not, so we have to deal with entities by ourselves. If Factories are not available, I often end up writing factories for test-specific purposes; depending on the level of control I have over the application, they often turn into a refactoring of the production code as well. If factories are already in place, we can subclass or wrap them with a test-layer factory class that provides us with the aforementioned convenience methods, as sketched below.
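Something like this, assuming a hypothetical production-side CustomerFactory:

// Test-layer factory wrapping a (hypothetical) production factory:
// the generic creation logic stays in production code, the test
// layer only adds the convenience personas on top.
public class TestCustomerFactory {

    private final CustomerFactory factory = new CustomerFactory();

    public Customer createFred() {
        Customer fred = factory.createCustomer("FRD001");
        fred.addAccount(new BankAccount("IT00-FRED", 1500));
        return fred;
    }
}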


Wednesday, April 16, 2008

Italian Grails User Group Started

Today we've just set up the Italian Grails User Group on YahooGroups. The group URL is
http://it.groups.yahoo.com/group/Grails-IT/

Everyone interested is warmly welcome! (but discussion will be in Italian...)



Tuesday, March 25, 2008

Why do test antipatterns emerge?

In the previous post I presented an example of what I call the Soap Opera Test antipattern, and some possible side effects, like having test code implicitly coupled to the application code. The reasons for this post arose from a discussion still going on in the Bologna XP mailing list, reinforced by this post by Jason Gorman. Of course, every methodology works perfectly well… in theory. But practice with testing systems leaves us with a bunch of challenging issues when it is applied (more or less blindly) to real-world situations.

So why do we end up having Soap Opera tests in our code? I think one reason is rooted in the heart of the TDD mantra "Red, Green, Refactor". Here's why:

  1. Red. You want to add a new requirement, and you do so by adding the corresponding test. You're done when you've added the test, and running it results in a red bar.
  2. Green. You get to the green bar as quickly as possible. Hacks are allowed here, because staying far from green for too long makes you dive too deep, with no idea of what it takes to get back. You're done when you have the green bar again in your xUnit test suite.
  3. Refactor. This is a green-to-green transition that allows you to clean up the code, remove duplications, and make the code look better than in step 2.

Step 3 looks a little weaker than the others, for a few reasons:

  • It's the third step. If you're time-boxed, this is where you're gonna cut, by telling your boss "done", even if you feel that something's still missing.
  • The termination condition is less defined, compared to steps 1 and 2: "green" is a lot less disputable than "clean". To declare step 3 over you have to satisfy your personal definition of code beauty, assuming you have one. Moreover, refactoring goals are often personal: the TDD book suggests writing them on paper and keeping the list for the day, which means your refactoring goals are not shared with the team. This is not a mandatory approach; for example, I am the kind of guy that normally starts polluting the bug tracking system with refactoring suggestions. But I also know that very few of them will actually make it to production code (unless I am supremely in charge of the project…), and most of the time refactoring notes are too trivial to be shared on the bug tracking system anyway. The best way to deal with them is to have them fixed before they have to become reminders.
  • It's a matter of culture. If you're doing TDD but lack some crucial OOP skills, you're in danger of writing sloppy tests. There's a lot of good OO in a framework like JUnit, and its designers made it good enough that the OO part is well hidden behind the scenes. But this does not mean that developers should code like Neanderthals when it comes to writing tests.

Putting it all together, the result is often test code which is less effective than it should be.



Friday, March 14, 2008

The soap opera test antipattern

If you are coming from a romantic programmer attitude, or simply didn't care about testing your code, then every single line of test code is valuable and adds some stability to your system.

After a while, anyway, the testing code mass can increase significantly and become problematic if not correctly managed. I've pointed you to the Coplien vs Martin video in my previous post. Now, I won't claim that I've found the solution to the issue, but some thoughts on the topic might be worth sharing.

Starting to test

When embracing TDD or test-first, or – less ambitiously – when starting to use xUnit frameworks for testing, you simply have to start from somewhere. You choose the target class or component, define the test goal, and code your test using assertions to check the result. If the light is green, the code is fine; if it's red… well, you have a problem. You solve the problem, refactor the solution to make it better in a green-to-green transition, then move to the next feature, or the next test (which is the same thing, if you are a TDD purist).

Every test adds stability and confidence to your code base, so it should be a good thing. Unfortunately, when the test code mass reaches a certain weight, it starts making refactoring harder: it becomes extra code affected by any refactoring, making refactoring estimates more pessimistic and the whole application less flexible.

Why does this happen? I suspect testing skills tend to be a little underestimated. JUnit examples are pretty simple, and some urban legends (like "JUnit is only for unit tests") are misleading. Testing somehow is a lot better than not testing at all. Put it all together in a large-scale project and you're stuck.

The soap opera test antipattern

The most typical symptom of this situation is what I call the soap-opera test: a test that looks like an endless script.

@Test
public void testSomething() {
    // create object A
    // do something with A
    // assert something about A
    // do something else with A
    // assert something about A
    // create object B
    // assert something about B
    // do something with B
    // assert something about B
    // do something with B and A
    // assert something about B and A
}

The main reason why I named this one "soap opera" is straightforward: there is no clear plot, there are many characters whose role is unclear, things happen slowly, conversations are filled with a lot of "do you really mean what you said?", and there is no defined end. The second reason is that I've always dreamed of naming a pattern, or an antipattern… somehow.

Even though I was too lazy (or sensible) to put real code in there, some issues are pretty evident:

  • the test looks like a long script;
  • if you're lucky, the purpose of the test is in the method name or in the Javadoc; the assertions are too many for the test to stay readable, or for its purpose to emerge by simply reading the code;
  • I bet a beer that 90% of the lines in a test like this are simply cut-&-pasted from another test in the same class (if this is the only test you have in your system, the bet is off);
  • the test can get red for too many reasons;
  • it really looks like the inertial test code mass mentioned before.

What's the point of "looks like a long script"? My opinion is simply that it doesn't have to look like that! A good test has a well-defined structure, which is:

  1. Set up
  2. Declare the expected results
  3. Exercise the unit under test
  4. Get the actual results
  5. Assert that the actual results match the expected results

I grabbed the list from here; the original article covers many JUnit antipatterns (but calls the soap opera antipattern "the overly complex test", which is a lot less glamorous). Setting up can't be accomplished completely by the setUp() method, because some preparation is obviously test-specific. Steps 3 and 4 often overlap, especially if you're testing a function. But the whole point is that this definitely is a structure, while a script is something less formed.
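Just to make the contrast with the soap opera skeleton concrete, here's a minimal sketch of a test with that structure (Account and Bank, with its transfer operation, are hypothetical names, not code from a real project):

@Test
public void testTransferMovesMoneyBetweenAccounts() {
    // 1. set up
    Account from = new Account(100);
    Account to = new Account(0);
    // 2. declare the expected results
    int expectedFrom = 70;
    int expectedTo = 30;
    // 3. exercise the unit under test
    Bank.transfer(from, to, 30);
    // 4. get the actual results
    int actualFrom = from.getBalance();
    int actualTo = to.getBalance();
    // 5. assert that the actual results match the expected results
    assertEquals(expectedFrom, actualFrom);
    assertEquals(expectedTo, actualTo);
}

One plot, two characters, a defined end: if this test goes red, there is exactly one feature to suspect.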

Multiplying the asserts has a thrilling effect: when something goes wrong, all of your tests start getting red. In theory, a test should test one and only one feature. There are obviously dependent features, but a well-formed test suite will help you a lot in problem determination, by pointing right to the root cause. If the testing code for a feature is duplicated all over the test suite… you just get a lot of red lights, but no hint about where the problem is.

Testing against implicit interfaces

Even if you clean up your testing code and refactor to a one feature/one test situation, you'll still experience some inertia due to testing code. This definitely smells: we were told that unit tests are supposed to help refactoring, allowing us to change the implementation while controlling behavior at the interface. The problem is that we often do this only in step 3 of the list above, while we depend on the application's implicit interfaces when creating test objects, and sometimes also when asserting the correctness of the result. Creating a test object might be a nontrivial process – especially if the application does not provide you with a standard way of doing it, like Factories or the like – and it tends to be repeated all over the testing code. If you're depending on a convention, changing it will probably have a heavier impact.

In general, when writing a test, step 3 is very short: basically just a line of code, depending on the interface you've chosen. Dependencies and coupling sneak in through test preparation and test verification, and you've got to keep them under control to avoid getting stuck by your test code base.


Wednesday, March 12, 2008

TDD vs Architecture debate

Some days ago, I was watching this video on InfoQ, where James Coplien and Robert C. Martin discuss some undesired side effects of TDD, particularly on the architecture side. One of the key points was that testing code increases the overall weight of the code base, making it harder to eventually refactor the architecture.

Another interesting issue presented was that TDD doesn't necessarily enforce testing all the possible boundary conditions, but often ends up in a sort of heuristic testing, which is less effective than testing based on a design-by-contract assumption.

Honestly, the TDD book puts a lot of emphasis on the effort to remove duplication, also between production and testing code, but I have the impression that this portion of the message is often lost on test writers. I've got some ruminations on the topic that will probably make up enough stuff for some more posts in the following days.



Friday, March 07, 2008

Social Networking Patterns

I've had some interesting reactions to my post on Social Networking, which I wrote basically to apologize for making people waste their time. After concluding that social networking is probably some sophisticated IT warfare weapon developed to harm the productivity of western countries, I had an interesting conversation with Giulio Cesare Solaroli, the mind behind the Clipperz online password manager, about the fact that as platforms become more open, intercepting users' behavioral patterns is a key concern for any social web application.

I am not quite sure the notion of pattern fits the situation exactly, but I blogged about it before, and then found the WikiPatterns site, which publishes a consistent catalog of behavioral patterns that reflect themselves in the shape of the information. There are more than 50 patterns and antipatterns just for a wiki, a scenario with some evident boundaries, like

  • People are told to go to a wiki
  • The people working on a wiki are already some kind of group (development team, company, etc.)
  • They should share a common goal

A social networking tool such as LinkedIn, Naymz or Spock has a similar high-level goal – to provide some form of valuable knowledge as a result of individual contributions by the users – but is far more open. Nobody asks you to go on a platform (well, … somebody invites you…), you're not necessarily part of the same group, and there is no such thing as "the common goal". I've asked myself "why do I keep my LinkedIn page updated?", and here are the answers.

  1. I like learning how a new tool works
  2. It's useful for my marketing as a freelance
  3. It's useful for my job, cause Web 2.0 and the like are part of my consulting portfolio
  4. I can't stand fake or incomplete information
  5. I hate writing CVs and LinkedIn looks like the right place to write information only once
  6. Vanity

There are probably some more reasons, but here we are talking only about the relationship between me and the tool. For some of my friends, the reasons are completely different, and some others are not on LinkedIn at all and have no interest in joining. But the tool is a networking platform, and this means that a lot more variables and scenarios are possible. I'll jot a few down.

  1. What if somebody wants to connect with you and you don't know him?
  2. What if somebody wants to connect with you and you don't remember him?
  3. What if a friend connects with you but not in the right position?
  4. What if a friend endorses you for the wrong position?
  5. What if somebody asks for an endorsement?
  6. What if somebody endorses you, but you have no direct experience about the way he/she works?

Ok, one can develop some sort of "SocialNetiquette", but having to think about it is an undesired side effect in itself (it wastes brain cycles). The key point, at least for me, is that I couldn't make up a consistent behavior. In other words, I don't give the same answer to the same question – after all, I am a consultant, so "it depends" is my mantra… As a result, some of my connections are strong, related to people I know well, that I've worked with, and so on, but some are not. Are we abusing the tool? Or are we still using the tool the way it was intended? Or… does this question actually make sense?

A key argument about all Web 2.0 technologies is that providing strict rules about the way a tool is used is a losing approach. Tools should instead "follow" users' needs and ideas and transform themselves into something that wasn't exactly planned at the beginning. It's a sort of seeding something and then taking care of what grows. More realistically, LinkedIn can't ban users because they connected without knowing each other well enough (would you like to be interviewed by the LinkedIn police about your connections?), so its body of knowledge is made up of contributors who do not behave consistently (as individuals and as a crowd), and who post incomplete and sometimes wrong information. Yet it works.

I still have the feeling of being part of a big experiment but, according to The Hitchhiker's Guide to the Galaxy, this does not necessarily mean that I am stupid.