Open source code has helped steer us through many technological changes and is still the backbone of a surprising amount of modern connectivity. The increasing move to the cloud, though, could be heading us in a different, more awkward direction.
In the minds of many, open source code is written by eager teams of highly qualified altruists, donating their free time in the hope of bringing quality software to the masses. In reality, a good proportion of the most prominent work is done by nine-to-fivers in big office blocks, taking the shilling from the companies that rely on the open source software they help develop. Occasionally, a particularly qualified and enterprising individual might make a consultancy business out of the situation, but either way the result is the same: good code is expensive.
Whether or not that explains why so much open source code is, er, inexpensive, the fact is that some of the world's most important software - web servers, and so on - remains heavily dependent on the willingness of large companies to release their changes to open source material. Necktie-wearing upper management might like to spin it as a willing piece of corporate largesse, but it's no great leap to assume that companies which donate time to open source code only give away the results because, when they distribute the software, they have to.
Because of a subtle technical and legal distinction, though, there's concern that the status quo might be at risk, at least in certain areas. Copyleft licences such as the GPL oblige a company to share its changes only when it distributes the software; code that merely runs on the company's own servers is never distributed, so the obligation never arises.
Consider an Android cellphone, the dedicated open source zealot's poster child for Linux being used by huge numbers of non-computer-people. Never mind that this is something of a false equivalence in the first place: insisting Android is "Linux" in the same sense Ubuntu is "Linux" is, while technically correct, a bit like comparing a lawnmower to a snowblower because they'll burn the same fuel. What matters is that while Android often finds itself on a pedestal as an example of open source success, most of what makes it work is not open source.
Yes, it is possible - not very much fun, but possible - to run an Android phone based entirely on open source software. The software which makes it more fun, including the overwhelming majority of third-party applications, is proprietary, but that's not really the problem. The problem is that a lot of the software which makes Android phones work isn't even on the phone. It's on a server somewhere. Witness how much of a modern cellphone stops working when it isn't connected to a network.
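To make that concrete, here's a minimal sketch - in Kotlin, since that's Android's house language - of what a thin client really amounts to. The endpoint is invented for illustration, but the shape will be familiar: the handset sends a trivial request, and everything interesting - geocoding, routing, traffic modelling - happens on hardware its owner will never see.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// A hypothetical endpoint standing in for any proprietary mapping
// backend; the URL is invented, not a real service.
const val ROUTE_ENDPOINT = "https://maps.example.com/route?from=home&to=work"

fun main() {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create(ROUTE_ENDPOINT)).build()

    // All the clever work happens on the server; the client just
    // displays whatever comes back.
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())
}
```

Client-side code like this can be open-sourced at no real cost, precisely because there's nothing in it worth hiding; the valuable part never leaves the server.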
Where Google got the code that runs Google Maps, and what changes Google has made to that code, is academic if the company never distributes the resulting software. Apple users need not be smug here; the server farms that run the App Store are not made out of rows of gleaming iMacs. They're made out of racks full of generic PC hardware running large amounts of open source software, just like everyone else's. Have Apple and Google made proprietary changes to what they're using? Probably, but there's no way to tell.
How well the open source paradigm has really worked in practice is a longer story than we have time to tell. Most open source code, by line count, is awful. Much of what isn't awful is trivial, the sort of thing a single developer can write in a reasonable amount of time. Much of what isn't awful or trivial (and much of what is) is code for coders: applications directly related to software engineering, which will largely be used to create more of the same. Even if we exclude all of those things, though, we discover a world of user-facing software which often has a user experience about as comfortable as the business end of an orbital sander.
Worse, as applications drift cloud-wards, like so many well-filled helium balloons, they float out of reach of open source entirely. Our ability to evaluate and fix the source code becomes just as limited as it is under the proprietary, closed-source model, except that when the code is running on someone else's CPU, we might not even know the difference. The Affero GPL was written to close exactly this loophole, but it remains a comparatively rare choice, and companies serving software rather than shipping it face a much weaker imperative to give away engineering work worth big buckets of money.
The relationship between lightweight thin clients and the servers that keep them topped up with data has swung back and forth at least twice in the history of widespread electronic computing. As soon as computers began to support more than one user at a time, the idea of a mainframe driving lots of terminals became common. Fast desktop computers changed that. Phones changed it again. Now we call the terminals "cellphones," the distances are longer and the signals are wireless, but next to the enormous server farms behind them we're still using simple terminals. Whether that'll change again one day remains to be seen, but the whole thing looks like a pendulum that might keep on swinging.
So there's an irony here: while the cloud overwhelmingly runs on open source code, it might not actually be particularly good for open source code. When Richard Stallman more or less founded the free software movement in the early 1980s, it didn't face many of the problems it now does. The issues of managing really big projects (there's still no competent open source NLE) were irrelevant on computers incapable of running more code than one person could comfortably write. Problems creating applications intended for non-specialists were irrelevant when computers could only be used by specialists. And, despite living in a world where mainframe-terminal arrangements were common and the Internet's precursor, ARPANET, was well known, Stallman did not anticipate the cloud.
And if more code moves to the cloud, there's at least a possibility that companies will become less willing to make the very large, very valuable, publicly available contributions they have so often made to date. What happens next is anyone's guess.