Coding

Do languages shape IDEs, or do IDEs shape languages?

I spend most of my days at work on powerful IDEs like Eclipse or NetBeans, tools so feature-rich that their feature lists span several pages. Their power, however, has its drawbacks: their memory consumption is measured in gigabytes, and running them on underpowered computers is the most frustrating of experiences. These are issues that any Java developer on Earth will have to face, sooner or later. Having grown frustrated with Eclipse being too slow on my four-year-old work laptop (I will not comment on this), I decided to drop the IDE for a while, switching to an extremely powerful editor that offered me the one thing that matters most to me: blazing fast navigation between source files. Of course I knew I would miss some of Eclipse’s advanced features, but I wanted to give it a try, especially since Andraz’s post left me with a bit of curiosity: how much do the tools we use affect our abilities?

And why are IDEs so widely used by desktop developers, while they are not so popular with web frontend developers, who generally use scripting languages? A commonly accepted explanation is that it is easier to write IDEs for statically typed languages, where lots of information is known at compile time: it is easier to extract information from the code and use it to build powerful and useful features. However, after a few weeks of experimentation, I ended up with a slightly different point of view on the whole matter: the popularity of IDEs for Java encouraged coding conventions that would not be so widely accepted if the majority of coders edited their source files in a plain text editor. Those conventions grew so popular that today Java appears to be designed to be used with an IDE. Let me give a few examples.

Design horror: same command, different meanings

As I have already written in a previous post, I really appreciate distributed version control systems; I use them consistently at work and in many of my side projects. I typically switch between git and mercurial repositories, with the former being my primary choice lately, and there is one specific command that always troubles me when I switch: pull. It is a wonderful piece of inconsistency between the two systems, one that often confuses new adopters and causes unnecessary hassle for experienced users. If you are familiar with both systems, you may already be thinking about the culprits. If you are not, you may be more careful with the pull and fetch commands after reading this post.
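For the record, here is the asymmetry as I understand it, sketched as a console session (remote and branch names are the usual defaults, chosen for illustration):

    # git: pull = fetch + merge into the checked-out branch
    git fetch origin          # download remote changes, touch nothing else
    git pull origin master    # fetch AND merge in one step

    # mercurial: pull only downloads changesets; the working copy is untouched
    hg pull                   # roughly what 'git fetch' does
    hg update                 # bring the working copy up to date
    hg pull -u                # pull and update together, closer to 'git pull'
    # (mercurial's optional 'fetch' extension pulls and merges,
    #  making 'hg fetch' the nearest relative of 'git pull')

So the same word, pull, means "download and merge" in one system and just "download" in the other: exactly the kind of trap that bites you when you alternate between the two daily.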

We need smarter issue trackers

While issue trackers originated as tools to manage projects more effectively, over the last few years of work I have been through situations where their misuse backfired. Tools originally conceived to improve workflows and the project lifecycle became a significant burden for the teams using them, occasionally making difficult situations even worse. This post is a collection of bad patterns I have seen happen. It is not a survey of all the possible situations that can occur, and it is not meant to be an argument against issue trackers (if it tells anything, it is probably about the teams I was part of), but rather an overview of things that went wrong because of the way a particular team used those systems. In retrospect, most of the problems were due to the project teams’ lack of discipline and experience, and they are less frequent, if present at all, in a team of seasoned professionals. But, while training and education can certainly help, I would love to consider a different angle: the issue tracking systems were not helping as much as they could have.

Tip: serve local files over HTTP with one console command

This weekend I was playing with Facebook’s JavaScript SDK and I needed a quick way to serve the files I was working on over HTTP, so that I could access them in a browser window at http://localhost.
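One well-known way to do this, and quite possibly the command this tip refers to, is Python’s built-in HTTP server (the port number is arbitrary):

    # from the directory containing the files, with Python 2:
    python -m SimpleHTTPServer 8000

    # or, with Python 3:
    python3 -m http.server 8000

    # the files are then reachable at http://localhost:8000/

No configuration files, no installation: one console command and the current directory is served over HTTP.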

Authors ≠ Committers: why most Open Source projects should switch to distributed revision control

I had been thinking about the idea behind this post for a while, but reading this post about getting newbies involved in open source finally convinced me to write it down. Being a concept developed in the Open Source world, it is no wonder that distributed revision control systems give their best in that context. There are many pros and cons, which other people have described in more detail than I can. Of all the features they offer, however, the one I prefer is the least technical one, and it is related to the way they encourage new developers to contribute to open source projects. In that respect, git and mercurial are a lot more effective than svn, for example. It all comes as a side effect of authors and committers being two distinct roles. This can encourage both newcomers approaching a project for the first time and people who may not have the time and energy to dedicate long stretches of time to a project, but who may be able to contribute a few patches.

[Screenshot: How GitHub displays both the author and the committer of a single change. Oh, and yes, there is something wrong with the dates. 😉]

Think about that. Recognition is one of the most important drivers for Open Source contributors but, unfortunately, centralized revision control (Subversion, CVS and the like) doesn’t help in giving credit to newcomers or occasional contributors. That’s because, generally, sending a single patch (or even a few of them) is not enough to be granted commit access to a project repository (and rightly so), and the commit itself must be made by a project member with enough privileges.
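As a concrete sketch of how git keeps the two roles separate (the patch file name and the contributor’s identity below are made up for illustration):

    # a maintainer applies a patch emailed by an outside contributor;
    # 'git am' records the contributor as the author, while the
    # maintainer who applies it becomes the committer
    git am 0001-fix-typo-in-readme.patch

    # the same separation, recorded explicitly at commit time:
    git commit --author="Jane Contributor <jane@example.com>" -m "Fix typo"

    # both identities are preserved in the history
    git log --format="author: %an <%ae>%ncommitter: %cn <%ce>"

The contributor’s name stays attached to the change forever, even though they never had commit access: that is the mechanism centralized systems are missing.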

Failure is an option

As Software Engineers, we often tend to be overly optimistic about software. In particular, we underestimate the probability of system and component failures and the impact such events can have on our applications. We tend to dismiss failures as random, unlikely and sporadic. And, often, we are proven wrong. Systems do fail. Moreover, when something goes wrong, either it is barely noticeable or it leads to extreme consequences. Take the recent AWS outage: everything was caused by a mistake during a routine network change. Right now, a few days after the event, post-mortem analyses and survival stories number in the dozens. There is one recurring lesson to be learned from what happened.

What I learnt from coding offline

Over the last few weeks, I’ve been writing a lot of code while commuting. Since I was working on an application that uses Facebook’s and Amazon’s APIs, and since I was offline, I often found myself unable to test the code I was writing against live data. I must confess I struggled at the beginning: I had no way to try out my small application while working on it and, generally, I had to code during one-hour trips, waiting until I got to work or home to see whether everything behaved as expected. The outcome? I was surprisingly productive. And I’m sure it was not just because of the absence of the usual sources of distraction (IM, email, Twitter and the like). So, what made the difference? Here are some of the insights I took away from the experience.