Blog

While issue trackers originate as tools to manage projects more effectively, over the last few years of work I have been through situations where their misuse backfired.

Tools originally conceived to improve workflows and the project lifecycle became a significant burden for the teams using them, occasionally making difficult situations even worse.

This post is a collection of bad patterns I have seen happen. It is not a survey of all the possible situations that can occur, nor is it meant as an argument against issue trackers (if anything, it says more about the teams I was part of). It is simply an overview of things that went wrong because of the way a particular team used those systems.

In retrospect, most of the problems were due to the project teams' lack of discipline and experience, and they are less frequent – if present at all – in a team of seasoned professionals. But, while training and education can certainly help, I would love to consider a different aspect: the issue tracking systems were not helping as much as they could have.

I still remember one of the most interesting questions I was asked last year while interviewing for a Software Engineering position:

How would you explain to a 12-year-old what an API is?

At first, I was surprised: it certainly sounded like an uncommon question for a Software Engineering interview. And yet I tried to answer as best I could, looking for a concise definition that would convey what I meant without assuming any specialist knowledge about computers and software.

As I progressed with the interview, it became evident what kind of ability my interviewer was trying to test.

This weekend I was playing with Facebook’s JavaScript SDK and I needed a quick way to serve the files I was working on over HTTP, so that I could access them in a browser window at http://localhost. I didn’t want to go through the hassle of setting up Apache on my Mac, though, and I was looking for a quick alternative to installing a local web server. After some Googling, I found a wonderful one-liner that did the job, provided you have Python installed.
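If you are curious, the one-liner in question was (almost certainly) Python’s built-in static file server; the port number below is just an example:

```
# Serve the current directory at http://localhost:8000
python -m SimpleHTTPServer 8000
```

On Python 3 the equivalent is `python3 -m http.server 8000`.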

Car dashboard

There is one thing that frustrates me and yet happens quite often: I am driving and listening to the radio, and at some point they play a song I like. I would love to buy it, but I know neither the song title nor the artist.

Sometimes radio announcers say it right after, but other times (most times, actually) they do not, leaving no alternative other than firing up Shazam to find out what the song is. And, well, if you are driving, that is generally a poor choice.

Now, just think about how we could redesign that process to make it more effective, using present-day technology and infrastructure.

Here is one possibility I would love to see happen.

…is not the device itself, but the whole ecosystem that surrounds it.

Kindle, 3rd Generation

I received my Kindle as a gift at the beginning of this year, and it quickly became my favorite gift of all time.

That may sound surprising, knowing me. I own many different devices, gadgets and computers, but I have always been fond of the smell and feel of paper books. Knowing that, some people were ready to bet that I would use the Kindle for just a few days and neglect it shortly after for something else (e.g. my iPad).

I must admit I am sure this is exactly what would have happened with any other e-reader. My experience with the Kindle, however, was (surprisingly) awesome, and it was due to reasons I didn’t expect.

Over the last months, we have seen frequent news of security breaches, with many websites falling victim to malicious attacks. While this by itself is not news, the frequency and scale of these attacks hardly pass without notice.

Sony is probably the most visible example of this trend, as Kevin Mitnick points out.

But they are not the only ones: the attacks on Citigroup and on the security company RSA are even more alarming. If even the companies that should be dealing with security issues every day are not impenetrable, chances are everyone’s data is at risk. Or, at least, that’s the message most newspapers appear to be conveying.

While it’s easy to dismiss the victims as fools, these facts should teach us something different: there is no such thing as a secure system.

I had been thinking about the idea behind this post for a while, but reading this post about getting newbies involved in open source finally convinced me to write it down.

Being a concept developed in the Open Source world, it is no wonder that distributed revision control systems are at their best in that context. They have many pros and cons, which other people have described in more detail than I can. Of all the features they offer, however, my favorite is the least technical one: the way they encourage new developers to contribute to open source projects. In that respect, git and mercurial are a lot more effective than svn, for example.

It all comes as a side effect of authors and committers being two distinct roles. This can encourage new contributors who are approaching a project for the first time, as well as individuals who may not have the time and energy to dedicate long stretches of time to a project, but can still contribute a few patches.

How GitHub displays both the author and the committer of a single change. Oh, and yes, there is something wrong with the dates. 😉

Think about it: recognition is one of the most important drivers for Open Source contributors but, unfortunately, centralized revision control systems (Subversion, CVS and the like) don’t help give credit to newcomers or occasional contributors.

That’s because, generally, sending a single patch (or even a few of them) is not enough to be granted commit access to a project’s repository (and rightly so), so the commit itself must be made by a project member with enough privileges.
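To see how the two roles play out in practice, here is a sketch of a typical patch-based git workflow (the patch file name is just an example):

```
# The contributor turns their latest commit into a patch file
# and sends it to the project (by email, in a ticket, etc.):
git format-patch -1 HEAD

# A maintainer with commit access applies it with `git am`,
# which keeps the contributor recorded as the author while
# the maintainer becomes the committer:
git am 0001-fix-typo-in-readme.patch

# Both identities survive in the history:
git log -1 --format='author: %an <%ae>%ncommitter: %cn <%ce>'
```

Subversion, by contrast, records a single author per revision (the account that committed), so an occasional contributor’s name survives only if the committer remembers to credit them in the log message.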

As Software Engineers, we tend to be overly optimistic about software. In particular, we often underestimate the probability of system and component failures and the impact such events can have on our applications.

We usually dismiss failure events as random, unlikely and sporadic. And, often, we are proven wrong.

Systems do fail. Moreover, when something goes wrong, either it’s barely noticeable or it leads to extreme consequences. Take the recent AWS outage: everything was caused by a mistake during a routine network change.

Right now, some days after the event, post-mortem analyses and survival stories number in the dozens. There is one recurring lesson to be learned from what happened.

Over the last few weeks, I’ve been writing a lot of code while commuting.

Since I was working on an application that uses Facebook’s and Amazon’s APIs, and I was offline, I often found myself unable to test the code I was writing against live data.

I must confess I struggled at the beginning.

I had no chance to try out my small application while I was working on it and, generally, I had to code during one-hour trips, waiting until I got to work (or home) to see whether everything performed as expected.

The outcome? I was surprisingly productive. And I’m sure it was not just because of the absence of the usual distractions (IM, email, Twitter and the like).

So, what made the difference? Here are some of the insights I got from the experience.

Let’s start with two quick facts:

1. Google recently refurbished Google Bookmarks (after neglecting it for a couple of years), giving it more prominence in search and making it easier to share bookmarks with friends.
2. Meanwhile, a different team (I guess) implemented Bookmark Sync in Chrome, a new feature that synchronizes bookmarks with a Google account (quite handy when you routinely use Chrome on many computers). Those bookmarks end up in a read-only folder in your Google Docs space.