Time again for my periodic trolling around the net for “Linux on the Desktop” articles.
In this episode, I refer you to this Infoworld article, which talks about why the Linux desktop hasn’t succeeded yet. The author, Randal Kennedy, claims that because of the development structure of Linux (how distributions serve as aggregation points), when a problem arises it is difficult to figure out who is accountable, and users are often forced to fix things themselves.
Although many of Kennedy’s posts are somewhat uninformed and purposefully inflammatory, there is some merit to this particular argument, and I think I can expand on it a bit. The real problem is that, essentially, each distribution is a fork. While the underlying source code may be mostly similar among distributions released around the same time, there are enough technical differences between distros (which compiler? which libc? which gnome? which kernel?) that for all QA and testing purposes they are different entities. As any software developer knows, having more “products” increases the QA “matrix”, i.e. the set of all scenarios that needs to be tested, proportionally and sometimes geometrically.
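To make the combinatorics concrete, here is a rough Python sketch. The component versions and counts are invented for illustration, not taken from any real distro lineup:

    from itertools import product

    # Hypothetical component choices a handful of distros might ship
    # around the same time (made-up versions, purely illustrative).
    components = {
        "compiler": ["gcc-4.1", "gcc-4.2"],
        "libc":     ["glibc-2.5", "glibc-2.6", "glibc-2.7"],
        "desktop":  ["gnome-2.18", "gnome-2.20", "kde-3.5"],
        "kernel":   ["2.6.20", "2.6.22", "2.6.24"],
    }

    # For QA purposes, every combination is effectively a distinct product.
    configurations = list(product(*components.values()))
    print(len(configurations))  # 2 * 3 * 3 * 3 = 54

Just four independent choices already yield dozens of configurations; add distro-specific patches on top of each one and the matrix only gets wider.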
So here we have an ever-increasing community of Linux users who can all help each other out. The more people there are, the less testing a distributor should have to do, right? All the testing gets spread out among the users, right? The users submit bugs and sometimes even fixes, and the collective effort ensures the overall quality of the product, right?
Well, sort of. The twist is that because distros are essentially creating distinct products, the more distros there are, the more QA work is needed. More generally, while free software allows for an effectively infinite matrix of possibilities, it also means there is an effectively infinite number of scenarios that need testing.
At the end of the day, it’s not clear to me that open source software, developed by many but also “forked by many”, and thus tested in very inconsistent ways, can match the quality of commercial software, which is developed by smaller, more focused groups and tested more systematically. You can think about it as the ratio between the number of possible configurations of your software and the amount of testing resources you have. It’s not clear that this ratio is any better for open source software developed by the community than it is for a single company developing a commercial product.
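Stated as a ratio, the comparison looks something like the sketch below. The numbers are made up purely to show the shape of the argument, not to measure anything real:

    # Invented numbers: testers-per-configuration as a rough quality proxy.
    commercial = {"configurations": 5,   "testers": 200}    # one vendor, a few SKUs
    community  = {"configurations": 500, "testers": 5000}   # many distros and versions

    for name, x in (("commercial", commercial), ("community", community)):
        print(name, x["testers"] / x["configurations"], "testers per configuration")

A larger community does not automatically win if the number of configurations grows even faster than the pool of people testing them.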
Just because Ubuntu or Fedora comes along and says “we’re going to make something that just works” doesn’t mean it’s gonna happen. Sure, that’s obvious; promises are just promises. But I think that even the core approach they’re taking (being selective about packages and applying distro-specific patches to smooth out the rough edges) is insufficient.
The real way to produce better-quality free software is to avoid, as much as possible, the multiplication of testing scenarios. What does that mean? It means getting rid of unnecessary choices. It means eliminating meaningless differences. It means a lot more cooperation between Fedora and Ubuntu, and whatever Gnome-based distro comes next, to make sure that all the common parts they use are built in the same way and tested in the same way. Does it really make sense for Fedora and Ubuntu to have different kernels? From an end-user perspective, of course not. The same could be said for the version of gcc, or the version of glibc, or the version of gnome or kde. What it really means is that these distros should look more similar than they are different, especially when the differences provide no clear value.
Sure, there are scheduling issues and such. Distros pick versions of software that are available when they release. But from a user perspective (and, I would argue, even from the developer perspective*), version differences for software released within the same year are usually not that big. Either distros should coordinate when they release their software, or they should coordinate which versions of software they decide to include in their time-based releases.
The sad part is, it is the very open nature of Linux that makes me pessimistic that any kind of consolidation will happen. It requires an immense amount of discipline to keep yourself from adding an incremental new feature that breaks compatibility or significantly increases the testing load. Few FOSS projects have that kind of “self-control”.
At the end of the day, it may just be that someone like Ubuntu will build such a following that it, by itself, can reach the scale needed to make wide, community-based testing coverage possible. But as evidenced by the many small problems that exist even in the latest Gutsy release, we’re not anywhere close yet.
* By this I mean that, to a developer, a new distro releasing with slightly older components is a reasonable trade-off if it reduces the amount of variation the developer needs to support across distros.
