I still see people arguing about whether GNU/Linux is “ready for the desktop”. The truth is, it really depends...
I switched almost “cold turkey” from Microsoft Windows 3.1 to Debian GNU/Linux 2.1 “Slink” around 1999 or 2000 (at the time, I liked to say I had “upgraded from Win 3.1 to Linux”).
I just didn’t like the look or smell of Windows 95, which is why I had resisted upgrading for 5 years. Money was tight, and I always ran slightly out-of-date computers, so I was particularly turned off by a new “improved” operating system that took 3 times longer to do 1/3 of the work. I had seen references to Linux for a few years, and knew that there was a free operating system out there. But I figured it was mostly a “hobby” thing. Like most people, I didn’t see how “free software” could work (a beautiful ideal, but how could programmers sustainably produce such a thing?). I’d read Stallman’s GNU Manifesto and other sources on the subject, but it took me a while to warm up to the idea (I hadn’t really been that impressed by “shareware”). Still, ever an idealist, I wanted to test the proof of principle. That’s why I picked the Debian distribution: it embodied the same ideals that went into Linux and the GNU project.
Then of course, there was the fact that Linux, as a POSIX-compliant operating system (“Unix”, to speak sloppily, though SCO has demonstrated the danger of that kind of sloppiness), would run software I knew from college. In fact, it really brought back some memories the first time I saw that shell prompt on my home computer (that was kind of a thrill, since I associated Unix with high-end systems at the university).
Of course, it was a pretty brutal transition. I turned in a few papers written in HTML, using Netscape Composer, and printed out on paper. So I took a short-term hit in quality and convenience. I had to sweat a bit more whenever I had to rely on the computer to get my work done (on the bright side, I didn’t really need the computer for work; it was mostly for home use).
I also once spent two weeks getting sound to work on an IBM ThinkPad laptop. That was mostly because I stubbornly insisted on using ALSA (version 0.5) instead of the more prevalent OSS: I felt ALSA was the future and was more thoroughly free software, and I wanted it to run. By then, of course, I was trying to prove something.
By 2002, though, I was able to do just about everything I had previously done on Windows, and had ditched the last piece of proprietary software (Netscape; I switched to Mozilla, of course). By then, of course, I had seen enough of Debian GNU/Linux to know that free software did indeed work, though I still had doubts about why. For me, Eric Raymond’s essays “The Cathedral and the Bazaar” and especially “The Magic Cauldron” were particularly enlightening.
So, what I would say is that Linux has been “ready for the desktop” for a long time—for people who aren’t wimps about it!
But frankly, if we were on a remotely level playing field, Windows wouldn’t be “ready for the desktop” either: I have nothing but frustration every time I have any kind of run-in with Windows. I think the only reason people feel otherwise is that with Windows, they can blame the problem on somebody else (“they installed it wrong”, “this hardware doesn’t work right”, “call a technician”), and thus avoid having to mess with it. But of course, I’m the “technician” everybody calls in my family, so I see things differently. From the point of view of the “buck stops here, I have to fix it” person, Linux is WAY, WAY, WAAYYYY easier to work with than Windows.
And IMHO, user-based help mailing lists whip “tech support call centers” every time. People talk about picking proprietary software “for the support”, but my experience is exactly the opposite.
I once got a response on the Debian user mailing list in TEN MINUTES. I would’ve still been on hold at “tech support”. True, the lists aren’t always that fast, but neither is tech support. More importantly, I’ve worked tech support, and I have a pretty good idea of how limited their system is: if it’s not in the “knowledge base”, they can’t tell you, and they can’t even ask the developers! So with proprietary tech support, you’ve got about 50/50 odds of hitting the “no solution, take it back to the store” case (which is not much help if you bought it “as-is” off a truck at downtown Dallas’ “First Saturday Sale”, which is where a lot of my computers came from). This is especially true when you are the kind of technically inclined person who wouldn’t have called tech support if it were the kind of dumb problem that would be in the knowledge base.
What about the “wimps”? Well, there are a lot of people who don’t know much about computers and lack the confidence or patience to learn. The field also demands a certain tolerance for frustration, which some people just don’t have. I don’t think Linux will be ready for those people until there is broad industry support for it: until it comes pre-installed, and there’s a local computer repair tech who’ll support Linux. In some places such support already exists, but they’re a small minority. And of course, the industry support will require there to be a market, so it’s kind of a chicken-and-egg problem. On the other hand, GNU/Linux use is on the rise, so at some point it will start to take off (in another blog entry, I speculated that this might happen through a new wave of “sub-PC” devices evolved from today’s embedded systems, commercial versions of the “OLPC $100 laptop” and so on. If so, usage will probably spread to desktop PCs from there).