Bradley M. Kuhn's Blog

2014

July

  • 2014-07-15: Why The Kallithea Project Exists

    [ This is a version of an essay that I originally published on Conservancy's blog ].

    Eleven days ago, Conservancy announced Kallithea. Kallithea is a GPLv3'd system for hosting and managing Mercurial and Git repositories on one's own servers. As Conservancy mentioned in its announcement, Kallithea is indeed based on code released under GPLv3 by RhodeCode GmbH. Below, I describe why I was willing to participate in helping Conservancy become a non-profit home to an obvious fork (as this is the first time Conservancy ever welcomed a fork as a member project).

    The primary impetus for Kallithea is that more recent versions of RhodeCode GmbH's codebase contain a very unorthodox and ambiguous license statement, which states:

    (1) The Python code and integrated HTML are licensed under the GPLv3 license as is RhodeCode itself.
    (2) All other parts of the RhodeCode including, but not limited to the CSS code, images, and design are licensed according to the license purchased.

    Simply put, this licensing scheme is either (a) a GPL violation, (b) an unclear license permission statement under the GPL that leaves redistributors unsure of their rights, or (c) both.

    When members of the Mercurial community first brought this license to my attention about ten months ago, my first focus was to form a formal opinion regarding (a). Of course, I did form such an opinion, and you can probably guess what that is. However, I realized a few weeks later that this analysis really didn't matter in this case; the situation called for a more innovative solution.

    Indeed, I recalled at that time the disputes between AT&T and the University of California at Berkeley over BSD. In that case, while nearly all of the BSD code was adjudicated as freely licensed, the dispute itself was painful for the BSD community. BSD's development slowed nearly to a standstill for years while the legal disagreement was resolved. Court action — even if you're in the right — isn't always the fastest or best way to push forward an important Free Software project.

    In the case of RhodeCode's releases, there was an obvious and more productive solution. Namely, the 1.7.2 release of RhodeCode's codebase, written primarily by Marcin Kuzminski, was fully released under GPLv3-only, and provided an excellent starting point for a GPLv3'd fork. Furthermore, some of the improved code in the 2.2.5 era of RhodeCode's codebase was explicitly licensed under GPLv3 by RhodeCode GmbH itself. Finally, many volunteers produced patches for all versions of RhodeCode's codebase and released those patches under GPLv3, too. Thus, there was already a burgeoning GPLv3-friendly community yearning to begin.

    My primary contribution, therefore, was to lead the process of vetting and verifying a completely indisputable GPLv3'd version of the codebase. This was extensive and time-consuming work; I personally spent over 100 hours to reach this point, and I suspect many Kallithea volunteers have already spent that much and more. Ironically, the most complex part of the work so far was verifying and organizing the licensing situation regarding third-party JavaScript (released under a myriad of licenses). You can see the details of that work by reading the revision history of Kallithea (or you can read an overview in Kallithea's LICENSE file).

    As with any Free Software codebase fork, acrimony and disagreement led to Kallithea's creation. However, as the person who made most of the early changesets for Kallithea, I want to thank RhodeCode GmbH for explicitly releasing some of their work under GPLv3. Even as I hereby reiterate publicly my previously private request that RhodeCode GmbH correct the parts of their licensing scheme that are (at best) problematic and (at worst) GPL-violating, I also point out this simple fact to those who have been heavily criticizing and admonishing RhodeCode GmbH: the situation could be much worse! RhodeCode could have simply never released any of their code under the GPLv3 in the first place. After all, there are many well-known code hosting sites that refuse to release any of their code (or release only a pittance of small components). By contrast, the GPLv3'd RhodeCode software was nearly a working system that helped bootstrap the Kallithea community. I'm grateful for that, and I welcome RhodeCode developers to contribute to Kallithea under GPLv3. I note, of course, that RhodeCode developers sadly can't incorporate any of our improvements into their codebase, due to their problematic license. However, I extend again my offer (also made privately last year) to work with RhodeCode GmbH to correct its licensing problems.

    Posted on Tuesday 15 July 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

June

  • 2014-06-18: USPTO Affirms Copyleft-ish Hack on Trademark

    I don't often say good things about the USPTO, so I should take the opportunity: the trademark revocation hack to pressure a name change for the sports team called the Redskins was a legal hack of the same caliber as copyleft. Presumably Blackhorse deserves the credit for this hack, and the USPTO showed it was sound.

    Update, 2014-06-19 & 2014-06-20: A few have commented that this isn't a hack in the way copyleft is. They have not made an argument for this, only pointed out that the statute prohibits racially disparaging trademarks. I thought it would be obvious why I was calling this a copyleft-ish hack, but I guess I need to explain. Copyleft uses copyright law to pursue a social good unrelated to copyright at all: it uses copyright to promote a separate social aim — the freedom of software users. Similarly, I strongly suspect Blackhorse doesn't care one whit about trademarks and why they exist, or even that they exist. Blackhorse is using the trademark statute to put financial pressure on an institution that is doing social harm — specifically, by reversing the financial incentives of the institution bent on harm. This is analogous to the way copyleft manipulates the financial incentives of software development toward software freedom using the copyright statute. I explain more in this comment.

    Fontana's comments argue that the USPTO press release is designed to distance the agency from the TTAB's decision. Fontana's point is accurate, but the TTAB is ultimately part of the USPTO. Even if some folks at the USPTO don't like the TTAB's ruling, the USPTO is actually arguing with itself, not a third party. Fontana further pointed out that the TTAB is an Article I tribunal, so there can be Executive Branch “judges” who have some level of independence. Thanks to Fontana for pointing to that research; my earlier version of this post was incorrect, and I've removed the incorrect text. (Pam Chestek, BTW, was the first to point this out, but Fontana linked to the documentation.)

    Posted on Wednesday 18 June 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-06-11: Node.js Removes Its CLA

    I've had my disagreements with Joyent's management of the Node.js project. In fact, I am generally auto-skeptical of any Open Source and/or Free Software project run by a for-profit company. However, I also like to give credit where credit is due.

    Specifically, I'd like to congratulate Joyent for making the right decision today to remove one of the major barriers to entry for contribution to the Node.js project: its CLA. In an announcement today (see the section labeled “Easier Contribution”), Joyent stated that it no longer requires contributors to sign the CLA and will (so it seems) accept contributions simply licensed under the permissive MIT license. In short, Node.js is, as of today, an inbound=outbound project.

    While I'd prefer that Joyent also switch the project to the Apache License 2.0 — or even better, the Affero GPLv3 — I realize that neither of those things is likely to happen. :) Given that, dropping the CLA is the next best outcome possible, and I'm glad it has happened.


    For further reading on my positions against CLAs, please see these two older blog posts:

    Posted on Wednesday 11 June 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-06-09: Why Your Project Doesn't Need a Contributor Licensing Agreement

    [ This is a version of an essay that I originally published on Conservancy's blog ].

    For nearly a decade, a battle has raged between two distinct camps regarding something called Contributor Licensing Agreements (CLAs). I've previously written a long treatise on the issue. The article below summarizes the basics of why CLAs aren't necessary.

    In the most general sense, a CLA is a formal legal contract between a contributor to a FLOSS project and the “project” itself0. Ostensibly, this agreement seeks to assure that the project, and/or its governing legal entity, has the appropriate permissions to incorporate contributed patches, changes, and/or improvements to the software and then distribute the resulting larger work.

    In practice, most CLAs in use today are deleterious overkill for that purpose. CLAs simply shift legal blame for any patent infringement, copyright infringement, or other bad acts from the project (or its legal entity) back onto its contributors. Meanwhile, since vetting every contribution for copyright and/or patent infringement is time-consuming and expensive, no existing organization actually does that work; it's unfeasible to do so effectively. Thus, no one knows (in the general case) if the contributors' assurances in the CLA are valid. Indeed, since it's so difficult to determine if a given work of software infringes a patent, it's highly likely that any contributor submitting a patent-infringing patch did so inadvertently and without any knowledge that the patent even existed — even regarding patents controlled by their own company1.

    The undeniable benefit to CLAs relates to contributions from for-profit companies who likely do hold patents that read on the software. It's useful to receive from such companies (whenever possible) a patent license for any patents exercised in making, using or selling the FLOSS containing that company's contributions. I agree that such an assurance is nice to have, and I might consider supporting CLAs if there was no other cost associated with using them. However, maintenance of CLA-assent records requires massive administrative overhead.

    More disastrously, CLAs require the first interaction between a FLOSS project and a new contributor to involve a complex legal negotiation and a formal legal agreement. CLAs twist the empowering, community-oriented, enjoyable experience of FLOSS contribution into an annoying exercise in pointless bureaucracy, which (if handled properly) requires a business-like, grating haggle between necessarily adverse parties. And, that's the best possible outcome. Admittedly, few contributors actually bother to negotiate about the CLA. CLAs frankly rely on our “Don't Read & Click ‘Agree’” culture — thereby tricking contributors into bearing legal risk. FLOSS project leaders shouldn't rely on “gotcha” fine print like car salespeople.

    Thus, I encourage those considering a CLA to look past the “nice assurances we'd like to have — all things being equal” and focus on “what legal assurances our FLOSS project actually needs to assure it thrives”. I've spent years doing that analysis, and I've concluded quite simply: in this regard, all a project and its legal home actually need is a clear statement and/or assent from the contributor that they offer the contribution under the project's known FLOSS license. Long ago, the now-famous Open Source lawyer Richard Fontana dubbed this legal policy with the name “inbound=outbound”. It's a powerful concept that shows clearly the redundancy of CLAs.

    Most importantly, “inbound=outbound” makes a strong and correct statement about the FLOSS license the project chooses. FLOSS licenses must contain all the legal terms that are necessary for a project to thrive. If the project is unwilling to accept (inbound) contribution of code under the terms of the license it chose, that's a clear indication that the project's (outbound) license has serious deficiencies that require immediate remedy. This is precisely why I urge projects to select a copyleft license with a strong patent clause, such as the GPLv3. With a license like that, CLAs are unnecessary.

    Meanwhile, the issue of requesting the contributors' assent to the project's license is orthogonal to the issue of CLAs. I do encourage use of clear systems (either formal or informal) for that purpose. One popular option is called the Developer Certificate of Origin (DCO). Originally designed for the Linux project and published by the OSDL under the CC-By-SA license, the DCO is a mechanism to assure contributors have confirmed their right to license their contribution under the project's license. Typically, developers indicate their agreement to the DCO with a specially formed tag in their DVCS commit log. Conservancy's Evergreen, phpMyAdmin, and Samba projects all use modified versions of the DCO.
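
    As a concrete illustration of that commit-log tag (the commit message and author below are hypothetical), git supports the convention directly: the -s/--signoff flag appends the DCO tag as a trailer:

    # Sign off on a commit, indicating agreement with the DCO:
    git commit --signoff -m "Fix off-by-one error in pagination"
    # The commit message then ends with a trailer of the form:
    #   Signed-off-by: Jane Hacker <jane@example.org>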

    Conservancy's Selenium project uses a license assent mechanism somewhat closer to a formal CLA. In this method, the contributors must complete a special online form wherein they formally assent to the license of the project. The project keeps careful records of all assents separately from the code repository itself. This mechanism is a bit heavy-weight, but ultimately simply formally implements the same inbound=outbound concept.

    However, most projects use the same time-honored and successful mechanism used throughout the 35-year history of the Free Software community. Simply, they publish clearly in their developer documentation and/or other key places (such as mailing list subscription notices) that submissions using the normal means to contribute to the project — such as patches to the mailing list or pull and merge requests — indicate the contributors' assent for inclusion of that software in the canonical version under the project's license.

    Ultimately, CLAs are much ado about nothing. Lawyers are trained to zealously represent their clients, and as such they often seek an outcome that maximizes the leverage of their clients' legal rights, while typically ignoring the other important benefits that lie outside their profession. The most ardent supporters of CLAs have yet to experience first-hand the arduous daily work required to manage a queue of incoming FLOSS contributions. Those of us who have done the latter easily see that avoiding additional barriers to entry is paramount. While a beautifully crafted CLA — jam-packed with legalese that artfully shifts all the blame onto the contributors — may make some corporate attorneys smile, I've never seen one bring anything but a frown and a sigh from FLOSS developers.


    0Only rarely does an unincorporated, unaffiliated project request CLAs. Typically, CLAs name a corporate entity — a non-profit charity (like Conservancy), a trade association (like OpenStack Foundation), or a for-profit company, as its ultimate beneficiary. On rare occasions, the beneficiary of a CLA is a single individual developer.

    1I've yet to meet any FLOSS developer who has read their own employer's entire patent portfolio.

    Posted on Monday 09 June 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-06-08: Resolving Weirdness In Thinkpad T60 Hotkeys

    In keeping with my tendency to write a blog post about any technical issue that takes me more than five minutes of Internet searching to figure out, I include below the resolution to a problem that took me, embarrassingly, nearly two and a half hours across two different tries to figure out.

    The problem appeared when I took the hard drive running Debian 7 (wheezy) out of the Lenovo Thinkpad T61 I had been using, which had failed, and put it into a Lenovo Thinkpad T60. (I've been trying to switch fully to the T60 for everything because it is supported by Coreboot.)

    [image: a Lenovo T60 Thinkpad keyboard with the volume buttons circled in purple]

    When I switched, everything was working fine, except that the volume buttons on the Thinkpad T60 (the three buttons in the top left-hand corner of the keyboard, shown circled in purple in the image on the right) no longer did what I expected. I expected they would ultimately control the PulseAudio volume, doing the equivalent of pactl set-sink-mute 0 0 and the appropriate pactl set-sink-volume 0 commands for my sound card. Indeed, when PulseAudio is running and you type those commands on the command line, the volume behaves properly, and, when running under X, I see the popup windows from my desktop environment showing the volume changes. So I knew nothing was wrong with the sound configuration when I switched the hard drive to a new machine, since the command-line tools worked and did the right things. Somehow, though, the buttons weren't triggering those commands in whatever manner they used to.
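
    For reference, here's a minimal sketch of that command-line test; the sink index 0 is an assumption on my part (list yours with pactl list short sinks), and the relative-percentage and toggle forms require a reasonably recent pactl:

    # List sinks to find the right index (0 is assumed below):
    pactl list short sinks
    # Unmute sink 0 (a 1 instead mutes; newer pactl also accepts "toggle"):
    pactl set-sink-mute 0 0
    # Set an absolute volume (65536 is 100% in PulseAudio's units):
    pactl set-sink-volume 0 65536
    # Newer pactl versions also accept relative changes:
    pactl set-sink-volume 0 -- +5%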

    I assumed at first that the buttons simply generated X events. (It turns out they ultimately do, but the story there is a bit more complex.) When I ran xev, I saw those buttons did not, at that point, generate any X events. So, that made it clear that nothing from X windows “up” (i.e., the desktop software) had anything to do with the situation.

    So, I first proceeded to research whether these volume keys were supposed to generate X events. I discovered that there were indeed XF86VolumeUp, XF86VolumeDown and XF86VolumeMute key events (I'd seen those before, in fact, doing similar research years ago). However, the advice online was highly conflicting about whether the best way to solve this is to have them generate X events. Most of the discussions I found assumed the keys were already generating X events and offered advice about how to bind those keys to scripts or to your desktop setup of choice0.
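
    For completeness, the usual recipe in those discussions (a sketch on my part, not something from this post; it assumes xbindkeys is installed and, crucially, that the keys actually generate X events, which mine did not at this point) is an ~/.xbindkeysrc along these lines:

    # ~/.xbindkeysrc: run xbindkeys after editing to pick up changes.
    # XF86Audio* are the standard X multimedia keysyms; sink 0 is assumed.
    "pactl set-sink-volume 0 -- +5%"
        XF86AudioRaiseVolume
    "pactl set-sink-volume 0 -- -5%"
        XF86AudioLowerVolume
    "pactl set-sink-mute 0 toggle"
        XF86AudioMute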

    I found various old documentation about the thinkpad_acpi daemon, which I quickly found was out of date: its functionality had long ago been incorporated directly into Linux's ACPI support and no longer required an additional daemon. This led me to just begin poking around to learn how the ACPI subsystem handles such keys.

    I quickly found the xev equivalent for acpi: acpi_listen. This was the breakthrough I needed to solve this problem. I ran acpi_listen and discovered that while other Thinkpad key sequences, such as Fn-Home (to increase brightness), generated output like:

    video/brightnessup BRTUP 00000086 00000000 K
    video/brightnessup BRTUP 00000086 00000000
    
    but the volume up, down, and mute keys generated no output. Therefore, it was pretty clear at that point that the problem related to the configuration of ACPI in some way. I had a feeling this would be a hard one to find a solution for.

    That's when I started poking around in /proc, and found that /proc/acpi/ibm/volume was changing each time I hit one of these keys. So, Linux clearly was receiving notice that these keys were pressed. Why, then, wasn't the ACPI subsystem notifying anything else, including whatever interface acpi_listen talks to?

    Well, this was a hard one to find an answer to. I have to admit that I found the answer through pure serendipity: I had already loaded this old bug report for a GNU/Linux distribution waning in popularity, and found that someone had resolved the ticket with the command:

    cp /sys/devices/platform/thinkpad_acpi/hotkey_all_mask /sys/devices/platform/thinkpad_acpi/hotkey_mask
    
    This command:
    # cat /sys/devices/platform/thinkpad_acpi/hotkey_all_mask /sys/devices/platform/thinkpad_acpi/hotkey_mask 
    0x00ffffff
    0x008dffff
    
    quickly showed that the masks didn't match. So I did:
    # cat /sys/devices/platform/thinkpad_acpi/hotkey_all_mask > /sys/devices/platform/thinkpad_acpi/hotkey_mask 
    
    and that single change caused the buttons to work again as expected, including causing the popup notifications of volume changes and the like.

    Additional searching showed this hotkey issue is documented in Linux itself, in its Thinkpad ACPI documentation, which states:

    The hot key bit mask allows some control over which hot keys generate events. If a key is "masked" (bit set to 0 in the mask), the firmware will handle it. If it is "unmasked", it signals the firmware that thinkpad-acpi would prefer to handle it, if the firmware would be so kind to allow it (and it often doesn't!).

    I note that on my system, running the command the documentation recommends to reset to defaults returns me to the wrong state:

    # cat /proc/acpi/ibm/hotkey 
    status:         enabled
    mask:           0x00ffffff
    commands:       enable, disable, reset, <mask>
    # echo reset > /proc/acpi/ibm/hotkey 
    # cat /proc/acpi/ibm/hotkey 
    status:         enabled
    mask:           0x008dffff
    commands:       enable, disable, reset, <mask>
    # echo 0xffffffff > /proc/acpi/ibm/hotkey
    

    So, I added that last command above to restore Linux's control of all the ACPI hot keys, which I suspect is what I want. I'll update this post if doing that causes problems I hadn't seen before. I'll also update the post to note whether this setting survives reboots, as I haven't rebooted the machine since I did this. :)
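
    If the setting turns out not to survive reboots, a minimal workaround sketch (my assumption, untested here; wheezy still runs /etc/rc.local at boot) would be to repeat the fix there:

    # In /etc/rc.local, before the final "exit 0":
    # hand control of all ThinkPad hot keys back to Linux at boot.
    cat /sys/devices/platform/thinkpad_acpi/hotkey_all_mask \
        > /sys/devices/platform/thinkpad_acpi/hotkey_mask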


    0Interestingly, as has happened to me often recently, much of the most useful information I find about any complex topic regarding how things work in modern GNU/Linux distributions turns up on the Arch or Crunchbang online fora and wikis. It's quite interesting to me that these two distributions appear to be the primary places where the types of information every distribution once needed to provide are kept. Their wikis are becoming the canonical references for how a distribution is constructed, since much of the information found therein applies to all distributions, even though distributions like Fedora and Debian attempt to make it less complex for their users to change the configuration.

    Posted on Sunday 08 June 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-06-04: Be Sure to Comment on FCC's NPRM 14-28

    I remind everyone today, particularly USA citizens, to be sure to comment on the FCC's Notice of Proposed Rulemaking (NPRM) 14-28. The FCC even did a sane thing and provided an email address you can write to rather than making you use its poorly designed web forms, and PC Magazine published relatively complete instructions for other ways to comment. The deadline isn't for a while yet, but it's worth getting it done so you don't forget. Below is my letter, in case anyone is interested.

    Dear FCC Commissioners,

    I am writing in response to NPRM 14-28 — your request for comments regarding the “Open Internet”.

    I am a trained computer scientist and I work in the technology industry. (I'm a software developer and software freedom activist.) I have subscribed to home network services since 1989, starting with the Prodigy service and switching to Internet service in 1991. Initially I used a PSTN single-pair modem, and I eventually upgraded to DSL in 1999. I still have a DSL line, but it's sadly not much faster than the one I had in 1999, and I explain why below.

    In fact, I've watched the situation get progressively worse, not better, since the Telecommunications Act of 1996. While my download speeds are a little bit faster than they were in the late 1990s, I now pay substantially more for only small increases in upload speed, even in major urban markets. In short, it's become increasingly difficult to actually purchase true Internet connectivity service anywhere in the USA. But first, let me explain what I mean by “true Internet connectivity”.

    The Internet was created as a peer-to-peer medium where all nodes were equal. In the original design of the Internet, every device has its own IP address and, if the user wants, that device can be addressed directly and fully by any other device on the Internet. For its part, the network in between the two nodes was intended to merely move the packets between those nodes as quickly as possible — treating all those packets the same way, and analyzing them only with publicly available algorithms that everyone agreed were correct and fair.

    Of course, the companies who typically appeal to (or even fight) the FCC want the true Internet to simply die. They seek to turn the promise of a truly peer-to-peer network of equality into a traditional broadcast medium that they control. They frankly want to manipulate the Internet into a mere television broadcast system (with the only improvement to that being “more stations”).

    Because of this, the following three features of the Internet — inherent in its design — are now extremely difficult for individual home users to purchase at reasonable cost from so-called “Internet providers” like Time Warner, Verizon, and Comcast:

    • A static IP address, which allows the user to be a true, equal node on the Internet. (And, related: IPv6 addresses, which could end the claim that static IP addresses are a precious resource.)
    • An unfiltered connection, that allows the user to run their own webserver, email server and the like. (Most of these companies block TCP ports 80 and 25 at the least, and usually many more ports, too).
    • Reasonable choices between the upload/download speed tradeoff.

    For example, in New York, I currently pay nearly $150/month to an independent ISP just to have a static, unfiltered IP address with 10 Mbps down and 2 Mbps up. I work from home, and 2 Mbps up is incredibly slow for modern usage. However, I still live with that slowness, because upload speeds greater than that are prohibitively priced by every provider.

    In other words, these carriers have designed their networks to prioritize all downloading over all uploading, and to purposely place the user behind many levels of Network Address Translation and network filtering. In this environment, many Internet applications simply do not work (or require complex work-arounds that disable key features). As an example: true diversity in VoIP accessibility and service has almost entirely been superseded by proprietary single-company services (such as Skype) because SIP, designed by the IETF (in part) for VoIP applications, did not fully anticipate that nearly every user would be behind NAT and unable to use SIP without complex work-arounds.

    I believe this disastrous situation centers around problems with the Telecommunications Act of 1996. While the ILECs are theoretically required to license network infrastructure fairly at bulk rates to CLECs, I've frequently seen — both professionally and personally — wars waged against CLECs by ILECs. CLECs simply can't offer their own types of services that merely “use” the ILECs' connectivity. The technical restrictions placed by ILECs force CLECs to offer the same style of service the ILEC offers, and at a higher price (to cover their additional overhead in dealing with the ILECs)! It's no wonder there are hardly any CLECs left.

    Indeed, in my 25 year career as a technologist, I've seen many nasty tricks by Verizon here in NYC, such as purposeful work-slowdowns in resolution of outages and Verizon technicians outright lying to me and to CLEC technicians about the state of their network. For my part, I stick with one of the last independent ISPs in NYC, but I suspect they won't be able to keep their business going for long. Verizon either (a) buys up any CLEC that looks too powerful, or, (b) if Verizon can't buy them, Verizon slowly squeezes them out of business with dirty tricks.

    The end result is that we don't have real options for true Internet connectivity, for home or on-site business use. I'm already priced out of getting a 10 Mbps upload with a static IP and all ports usable. I suspect that within 5 years, I'll be priced out of my current 2 Mbps upload with a static IP and all ports usable.

    I realize the problems that most users are concerned about on this issue relate to their ability to download bytes from third-party companies like Netflix. Therefore, it's all too easy for Verizon to play out this argument as if it's big companies vs. big companies.

    However, the real fallout from the current system is that the cost of personal Internet connectivity that allows individuals an equal existence on the network is so high that few bother. The consequence, thus, is that only those who are heavily involved in the technology industry even know what types of applications would be available if everyone had a static IP with all ports usable and equal upload and download speeds of 10 Mbps or higher.

    Yet, that's the exact promise of network connectivity that I was taught about as an undergraduate in Computer Science in the early 1990s. What I see today is the dystopian version of that promise. My generation of computer scientists has been forced to constrain its designs of Internet-enabled applications to fit a model that the network carriers dictate.

    I realize you can't possibly fix all these social ills in the network connectivity industry with one rule-making, but I hope my comments have perhaps given a slightly different perspective of what you'll hear from most of the other commenters on this issue. I thank you for reading my comments and would be delighted to talk further with any of your staff about these issues at your convenience.

    Sincerely,

    Bradley M. Kuhn,
    a citizen of the USA since birth, currently living in New York, NY.

    Posted on Wednesday 04 June 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

May

  • 2014-05-14: To Serve Users

    (Spoiler alert: spoilers regarding a 1950s science fiction short story that you may not have read appear in this blog post.)

    Mitchell Baker announced today that Mozilla Corporation (or maybe Mozilla Foundation? She doesn't really say…) will begin implementing proprietary software by default in Firefox at the behest of wealthy and powerful media companies. Baker argues this serves users: that Orwellian phrasing caught my attention most.

    [image: still from the Twilight Zone episode “To Serve Man”, showing the book with the alien title on the front and its translation]

    In the old science fiction story To Serve Man (later adapted for The Twilight Zone), aliens come to Earth, freely share various technological advances, and offer free visits to the alien world. Eventually, the narrator, who remains skeptical, begins translating one of their books. The title is innocuous, and even well-meaning: To Serve Man. Only too late does the narrator realize that the book isn't about service to mankind, but rather — a cookbook.

    It's in the same spirit that Baker seeks to serve Firefox's users up on a platter to the MPAA, the RIAA, and like-minded wealthy for-profit corporations. Baker's only defense appears to be that other browser vendors have done the same, and cites specifically for-profit companies such as Apple, Google, and Microsoft.

    Theoretically speaking, though, the Mozilla Foundation is supposed to be a 501(c)(3) non-profit charity which told the IRS its charitable purpose was: to keep the Internet a universal platform that is accessible by anyone from anywhere, using any computer, and … develop open-source Internet applications. Baker fails to explain how switching Firefox to include proprietary software fits that mission. In fact, with a bit of revisionist history, she says that open source was merely an “approach” that Mozilla Foundation was using, not their mission.

    Of course, Mozilla Foundation is actually a thin non-profit shell wrapped around a much larger entity called the Mozilla Corporation, which is a for-profit company. I have always been dubious about this structure, and actions like this one make it obvious that “Mozilla” is focused on being a for-profit company, competing with other for-profit companies, rather than a charity serving the public (at least, in the way that I mean “serving”).

    Meanwhile, I greatly appreciate that various Free Software communities maintain forks of and/or alternative wrappers around many web browser technologies which, like Firefox, succumb easily to for-profit corporate control. These efforts (such as Debian's iceweasel fork and GNOME's epiphany interface to WebKit) provide a nice “canary in the coal mine” to confirm there is enough software-freedom-respecting code still released to make these browsers usable by those who care about software freedom and reject the digital restrictions management that Mozilla now embraces. OTOH, the one thing Baker is right about: given that so few people oppose proprietary software, there soon may not be much of a web left for those of us who stand firmly for software freedom. Sadly, Mozilla announced today that rather than helping curtail that dystopia, it will instead help accelerate its onset.

    Related Links:

    Posted on Wednesday 14 May 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-05-10: Federal Appeals Court Decision in Oracle v. Google

    [ Update on 2014-05-13: If you're more of a listening rather than reading type, you might enjoy the Free as in Freedom oggcast that Karen Sandler and I recorded about this topic. ]

    I have a strange relationship with copyright law. Many copyright policies of various jurisdictions, the USA in particular, are draconian at best and downright vindictive at worst. For example, during the public comment period on ACTA, I commented that I think it's always wrong, as a policy matter, for copyright infringement to carry criminal penalties.

    That said, much of what I do in my work in the software freedom movement is enforcement of copyleft: assuring that the primary legal tool that defends the freedom of Free Software functions properly, and actually works — in the real world — the way it should.

    As I've written about before at great length, copyleft functions primarily because it uses copyright law to stand up and defend the four freedoms. It's commonly called a hack on copyright: turning the copyright system which is canonically used to restrict users' rights, into a system of justice for the equality of users.

    However, it's this very activity that leaves me with a weird relationship with copyright. Copyleft uses the restrictive force of copyright in the other direction, but that means the greater the negative force, the more powerful the positive force. So, as I read yesterday the Federal Circuit Appeals Court's decision in Oracle v. Google, I had that strange feeling of simultaneous annoyance and contentment. In this blog post, I attempt to state why I am both glad for and annoyed with the decision.

    I stated clearly after Alsup's NDCA decision in this case that I never thought APIs were copyrightable, nor does any developer really think so in practice. But, when considering the appeal, note carefully that the court of appeals wasn't assigned the general job of considering whether APIs are copyrightable. Its job was to figure out whether the lower court made an error in judgment in this particular case, and to discern any issues that were missed previously. I think that's what the Federal Circuit Court attempted to do here, and while IMO it too erred regarding a factual issue, I don't think its decision is wholly useless or categorically incorrect.

    Their decision is worth reading in full. I'd also urge anyone who wants to opine on this decision to actually read the whole thing (which so rarely happens in these situations). I bet most pundits out there opining already haven't read the whole thing. I read the decision as soon as it was announced, and I didn't get this post up until early Saturday morning, because it took that long to read the opinion in detail, go back to other related texts to verify some details, and then write down my analysis. So, please, go ahead and read it now before reading this blog post further. My post will still be here when you get back. (And, BTW, don't fall for the self-aggrandizing ballyhoo some lawyers will feed you that only they can understand things like court decisions. In fact, I think programmers are going to have an easier time reading decisions about this topic than lawyers do, as the technical facts are highly pertinent.)

    Ok, you've read the decision now? Good. Now, I'll tell you what I think in detail: (As always, my opinions on this are my own, IANAL and TINLA and these are my personal thoughts on the question.)

    The most interesting thing, IMO, about this decision is that the Court focused on a fact from trial that clearly has more nuance than they realize. Specifically, the Court claims many times in this decision that Google conceded that it copied the declaring code used in the 37 packages verbatim (pg 12 of the Appeals decision).

    I suspect the Court imagined the situation too simply: that there was a huge body of source code text, and that Google engineers sat there, simply cutting-and-pasting from Oracle's code right into their own code for each of the 7,000 lines or so of function declarations. However, I've chatted with some people (including Mark J. Wielaard) who are much more deeply embedded in the Free Software Java world than I am, and they pointed out it's highly unlikely anyone did a blatant cut-and-paste job to implement Java's core library API, for various reasons. I thus suspect that Google didn't do it that way either.

    So, how did the Appeals Court come to this erroneous conclusion? On page 27 of their decision, they write: Google conceded that it copied it verbatim. Indeed, the district court specifically instructed the jury that ‘Google agrees that it uses the same names and declarations’ in Android. Charge to the Jury at 10. So, I reread page 10 of the final charge to the jury. It actually says something much more verbose and nuanced. I've pasted together below all the parts where the Alsup's jury charge mentions this issue (emphasis mine):

    Google denies infringing any such copyrighted material … Google agrees that the structure, sequence and organization of the 37 accused API packages in Android is substantially the same as the structure, sequence and organization of the corresponding 37 API packages in Java. … The copyrighted Java platform has more than 37 API packages and so does the accused Android platform. As for the 37 API packages that overlap, Google agrees that it uses the same names and declarations but contends that its line-by-line implementations are different … Google agrees that the structure, sequence and organization of the 37 accused API packages in Android is substantially the same as the structure, sequence and organization of the corresponding 37 API packages in Java. Google states, however, that the elements it has used are not infringing … With respect to the API documentation, Oracle contends Google copied the English-language comments in the registered copyrighted work and moved them over to the documentation for the 37 API packages in Android. Google agrees that there are similarities in the wording but, pointing to differences as well, denies that its documentation is a copy. Google further asserts that the similarities are largely the result of the fact that each API carries out the same functions in both systems.

    Thus, in the original trial, Google did not admit to copying of any of Oracle's text, documentation or code (other than the rangeCheck thing, which is moot on the API copyrightability issue). Rather, Google said two separate things: (a) they did not copy any material (other than rangeCheck), and (b) admitted that the names and declarations are the same, not because Google copied those names and declarations from Oracle's own work, but because they perform the same functions. In other words, Google makes various arguments of why those names and declarations look the same, but for reasons other than “mundane cut-and-paste copying from Oracle's copyrighted works”.

    For us programmers, this is of course a distinction without any difference. Frankly, when we programmers look at this situation, we make many obvious logical leaps at once. Specifically, we all think APIs in the abstract can't possibly be copyrightable (since that's absurd), and we work backwards from there with some quick thinking that goes something like this: it doesn't make sense for APIs to be copyrightable, because if you explain to me in enough detail what the API has to do, such that I have sufficient information to implement it, my declarations of the functions of that API are going to necessarily be quite similar to yours — so much so that they'll be nearly indistinguishable from what those function declarations might look like if I had cut-and-pasted them. So the fact is, if we both sit down separately to implement the same API, we're likely going to have two works that look similar. However, that doesn't mean I copied your work. And, besides, it makes no sense for APIs, as a general concept, to be copyrightable, so why are we discussing this again?0

    But this is reasoning that programmers love and Courts hate. The Courts want to take a set of laws the legislature passed, some precedents that their system gave them, along with a specific set of facts, and then see what happens when the law is applied to those facts. Juries, in turn, have the job of finding which facts are accurate and which aren't, and then coming to a verdict upon receiving instructions about the law from the Court.

    And that's right where the confusion began in this case, IMO. The original jury, to start with, likely had trouble distinguishing three distinct things: the general concept of an API, the specification of the API, and the implementation of an API. Plus, they were told by the judge to assume APIs were copyrightable anyway. Then, it got more confusing when they looked at two implementations of an API, parts of which looked similar for purely mundane technical reasons, and assumed (incorrectly) that textual copying from one file to another was the only way to get to that same result. Meanwhile, the jury was likely further confused that Google argued various affirmative defenses against copyright infringement in the alternative.

    So, what happened with the Appeals Court? The Appeals Court, of course, has no reason to believe the jury's finding of fact is wrong, and it's simply not the Appeals Court's job to replace the original jury, but rather to analyze the matters of law decided by the lower court. That's why I'm admittedly troubled and downright confused that the ruling from the Appeals Court seems to conflate the issue of literal copying of text with similarities in independently developed text. That is a factual issue in any given case, but that question of fact is the central nuance of API copyrightability, and it seems the Appeals Court glossed over it. The Appeals Court simply fails to distinguish between literal cut-and-paste copying from a given API's implementation and the serendipitous similarities that are likely to happen when two implementations support the same API.

    But that error isn't the interesting part. Of course, this error is a fundamentally incorrect assumption by the Appeals Court, and as such the primary rulings are effectively conclusions based on a hypothetical fact pattern rather than the actual fact pattern in this case. However, after poring over the decision for hours, it's the only error I found in the appeals ruling. Thus, setting the fundamental error aside, the ruling has some good parts. For example, I'm rather impressed and swayed by its argument that the lower court misapplied the merger doctrine, because the lower court analyzed the situation based on the choices available to Google with regard to functionality, rather than the choices available to Sun/Oracle. To quote:

    We further find that the district court erred in focusing its merger analysis on the options available to Google at the time of copying. It is well-established that copyrightability and the scope of protectable activity are to be evaluated at the time of creation, not at the time of infringement. … The focus is, therefore, on the options that were available to Sun/Oracle at the time it created the API packages.

    Of course, cropping up again in that analysis is that same darned confusion the Court had with regard to copying this declaration code. The ruling goes on to say: But, as the court acknowledged, nothing prevented Google from writing its own declaring code, along with its own implementing code, to achieve the same result.

    To go back to my earlier point, Google likely did write their own declaring code, and the code ended up looking the same as the other code, because there was no other way to implement the same API.

    In the end, Mark J. Wielaard put it best when he read the decision, pointing out to me that the Appeals Court seemed almost angry that the jury hung on the fair use question. It reads to me, too, like the Appeals Court is slyly saying: the right affirmative defense for Google here is fair use, and a new jury really needs to sit down and look at it.

    My conclusion is that this just isn't a decision about the copyrightability of APIs in the general sense. The question the Court would need to consider to actually settle that issue would be: “If we believe an API itself isn't copyrightable, but its implementation is, how do we figure out when copyright infringement has occurred when there are multiple implementations of the same API floating around, which of course have declarations that look similar?” But the Court did not consider that fundamental question, because it assumed (incorrectly) that textual cut-and-paste copying occurred. The decision here, in my view, is about a narrower, hypothetical question that the Court decided to ask itself instead: “If someone textually copies parts of your API implementation, are merger doctrine, scènes à faire, and de minimis affirmative defenses likely to succeed?” In this hypothetical scenario, the Appeals Court's answer is: “such defenses rarely help you, but a fair use defense might.”

    However, on this point, in my copyleft-defender role, I don't mind this decision very much. The one thing this decision clearly seems to declare is: “if there is even a modicum of evidence that direct textual copying occurred, then the alleged infringer must pass an extremely high bar of affirmative defense to show infringement didn't occur”. In most GPL violation cases, the facts aren't nuanced: there is always clearly an intention to incorporate and distribute large textual parts of the GPL'd code (i.e., not just a few function declarations). As such, this decision is probably good for copyleft, since on its narrowest reading, this decision upholds the idea that if you go mixing in other copyrighted stuff, via copying and distribution, then it will be difficult to show no copyright infringement occurred.

    OTOH, I suspect that most pundits are going to look at this in an overly contrasted way: the NDCA said APIs aren't copyrightable, and the Appeals Court said they are. That's not what happened here, and if you look at the situation that way, you're making the same kinds of oversimplifications that the Appeals Court seems to have erroneously made.

    The most positive outcome here is that a new jury can now narrowly consider the question of fair use as it relates to serendipitous similarity of multiple API function declaration code. I suspect a fresh jury focused on that narrow question will do a much better job. The previous jury had so many complex issues before them, I suspect that they were easily conflated. (Recall that the previous jury considered patent questions as well.) I've found that people who haven't spent their lives training (as programmers and lawyers have) to delineate complex matters and separate truly unrelated issues do a poor job at such. Thus, I suspect the jury won't hang the second time if they're just considering the fair use question.

    Finally, with regard to this ruling, I suspect this won't become immediate, frequently cited precedent. The case is remanded, so a new jury will first sit down and consider the fair use question. If that jury finds fair use and thus no infringement, Oracle's next appeal will be quite weak, and the Appeals Court likely won't reexamine the question in any detail. In that outcome, very little has changed overall: we'll have certainty that API's aren't copyrightable, as long as any textual copying that occurs during reimplementation is easily called fair use. By contrast, if the new jury rejects Google's fair use defense, I suspect Google will have to appeal all the way to SCOTUS. It's thus going to be at least two years before anything definitive is decided, and the big winners will be wealthy litigation attorneys — as usual.


    0This is of course true for any sufficiently simple programming task. I used to be a high-school computer science teacher. Frankly, while I was successful twice in detecting student plagiarism, it was pretty easy to get false positives sometimes. And certainly I had plenty of student programmers who wrote their function declarations the same for the same job! And no, those weren't the students who plagiarized.

    Posted on Saturday 10 May 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

April

  • 2014-04-03: Open Source as Last Resort

    “Open Source as Last Resort” appears to be popular this week. First, Canonical, Ltd. will finally liberate the UbuntuOne server-side code, but only after abandoning it entirely. Second, Microsoft announced a plan to release its .NET compiler platform, Roslyn, under the Apache License, spinning it into an (apparently, based on the description) 501(c)(6) organization called the Dot Net Foundation.

    This strategy is pretty bad for software freedom. It gives fodder to the idea that “open source doesn't work”, because these projects are likely to fail (or have already failed) when they're released. (I suspect, although I don't know of any studies on this, that) most software projects, like most start-up organizations, fail in the first five years. That's true if they're proprietary software projects or not.

    But using code liberation as a last-resort attempt to gain interest in a failing codebase only gives a bad name to the licensing and community-oriented governance that creates software freedom. I therefore think we should not laud these sorts of releases, even though they liberate more code. We should call them what they are: too little, too late. (I said as much in the five-year-old bug ticket where community members have been complaining that the UbuntuOne server side is proprietary.)

    Finally, a note on using a foundation to attempt to bolster a project community in these cases:

    I must again point out that the type of organization matters greatly. Those who are interested in the liberated .NET codebase should be asking Microsoft if they're going to form a 501(c)(6) or a 501(c)(3) (and I suspect it's the former, which bodes badly).

    I know some in our community glibly dismiss this distinction as some esoteric IRS issue, but it really matters with regard to how the organization treats the community. 501(c)(6) organizations are trade associations that serve for-profit businesses. 501(c)(3)'s serve the public at large. There's a huge difference in their behavior and activities. While it's possible for a 501(c)(3) to fail to serve all of the public's interest, it's corruption when it so fails. When 501(c)(6)'s serve only their corporate members' interests, possibly to the detriment of the public, those 501(c)(6) organizations are just doing the job they are supposed to do — however distasteful it is.


    Note: I said “open source” on purpose in this post in various places. I'm specifically using that term because it's clear these companies' actions are not in the spirit of software freedom, nor even inspired by it, but are pure and simple strategy decisions.

    Posted on Thursday 03 April 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

March

  • 2014-03-31: The Change in My Role at Conservancy

    Today, Conservancy announced the addition of Karen Sandler to our management team. This addition to Conservancy's staff will greatly improve Conservancy's ability to help Conservancy's many member projects.

    This outcome is one I've been working towards for a long time. I've focused for at least a year on fundraising for Conservancy in hopes that we could hire a third full-time staffer. For the last few years, I've been doing basically two full-time jobs, since I've needed to give my personal attention to virtually everything Conservancy does. This obviously doesn't scale, so my focus has been on increasing capacity at Conservancy to serve more projects better.

    I (and the entire Board of Directors of Conservancy) have often worried that if I were to disappear, leave Conservancy (or otherwise just drop dead), Conservancy might not survive. Such heavy reliance on one person is a bug, not a feature, in an organization. That's why I worked so hard to recruit Karen Sandler as Conservancy's new Executive Director. Admittedly, she helped create Conservancy and has been involved since its inception. But having her full-time on staff is a great step forward: there's no single point of failure anymore.

    It's somewhat difficult for me to relinquish some of my personal control over Conservancy. I have been mostly responsible for building Conservancy from a small unstaffed “thin” fiscal sponsor into a “full-service” fiscal sponsor that provides virtually any work that a Free Software project requests. Much of that has been thanks to my work, and it's tough to let someone else take that over.

    However, handing off the Executive Director position to Karen specifically made this transition easy. Put simply, I trust Karen, and I recruited her personally to take over (one of) my job(s). She really believes in software freedom in the way that I do, and she's taught me at least half the things I know about non-profit organizational management. We've collaborated on so many projects and have been friends and colleagues — through both rough and easy times — for nearly a decade. While I think I'm justified in saying I did a pretty good job as Conservancy's Executive Director, Karen will do an even better job than I did.

    I'm not stepping aside completely from Conservancy management, though. I'm continuing in the role of President and I remain on the Board of Directors. I'll be involved with all strategic decisions for the organization, and I'll be the primary manager for a few of Conservancy's program activities: including at least the non-profit accounting project and Conservancy's license enforcement activities. My primary staff role, however, will now be under the title “Distinguished Technologist” — a title we borrowed from HP. The basic idea behind this job at Conservancy is that my day-to-day work helps the organization understand the technology of Free Software and how it relates to Conservancy's work. As an initial matter, I suspect that my focus for the next few years is going to be the non-profit accounting project, since that's the most urgent place where Free Software is inadequately providing technological solutions for Conservancy's work. (Now, more than ever, I urge you to donate to that campaign, since it will become a major component of funding my day-to-day work. :)

    I'm somewhat surprised that, even in the six hours since this announcement, I've already received emails from Conservancy member project representatives worded as if they expect they won't hear from me anymore. While, indeed, I'll cease to be the front-line contact person for issues related to Conservancy's work, Conservancy and its operations will remain my focus. Karen and I plan a collaborative management style for the organization, so I suspect for many things, Karen will brief me about what's going on and will seek my input. That said, I'm looking forward to a time very soon when most Conservancy management decisions won't primarily be mine anymore. I'm grateful for Karen, as I know that the two of us running Conservancy together will make a great working environment for both of us, and I really believe that she and I as a management team are greater than the sum of our parts.


    Posted on Monday 31 March 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

January

  • 2014-01-26: GCC, LLVM, Copyleft, Companies, and Non-Profits

    [ Please keep in mind in reading this post that while both FSF and Conservancy are mentioned, and that I have leadership roles at both organizations, these opinions on ebb.org, as always, are my own and don't necessarily reflect the view of FSF and/or Conservancy. ]

    Most people know I'm a fan of RMS' writing about Free Software and I agree with most (but not all) of his beliefs about software freedom politics and strategy. I was delighted to read RMS' post about LLVM on the GCC mailing list on Friday. It's clear and concise, and, as usual, I agree with most (but not all) of it, and I encourage people to read it. Meanwhile, upon reading the comments about his post on LWN, I felt the need to add a few points to the discussion.

    Firstly, I'm troubled to see so many developers, including GCC developers, conflating various social troubles in the GCC community with the choice of license. I think it's impossible to deny that culturally, the GCC community faces challenges, like any community that has lasted for so long. Indeed, there's a long political history of GCC that even predates my earliest involvement with the Free Software community (even though I'm now considered an old-timer in Free Software in part because I played a small role — as a young, inexperienced FSF volunteer — in helping negotiate the EGCS fork back into the GCC mainline).

    But none of these politics really relate to GCC's license. The copyleft was about ensuring that there were never proprietary improvements to the compiler, and AFAIK no GCC developers ever wanted proprietary improvements anyway. In fact, GCC was ultimately the first major enforcement test of the GPL, and ironically that test sent us on the trajectory that led to the current situation.

    Specifically, as I've spoken about in my many talks on GPL compliance, the earliest publicly discussed major GPL violation was by NeXT Computer, when Steve Jobs attempted and failed (thanks to RMS' GPL enforcement work) to make the Objective C front-end to GCC proprietary. Everything for everyone involved would have gone quite differently if that enforcement effort had failed.

    As it stands, copyleft was upheld and worked. For years, until quite recently (in context of the history of computing, anyway), Apple itself used and relied on the Free Software GCC as its primary and preferred Objective C compiler, because of that enforcement against NeXT so long ago. But, that occurrence also likely solidified Jobs' irrational hatred of copyleft and software freedom, and Apple was on a mission to find an alternative compiler — but writing a compiler is difficult and takes time.

    Meanwhile, I should point out that copyleft advocates sometimes conflate issues in analyzing the situation with LLVM. I believe most LLVM developers when they say that they don't like proprietary software and that they want to encourage software freedom. I really think they do. And, for all of us, copyleft isn't a religion, or even a belief — it's a strategy to maximize software freedom, and no one (AFAICT) has said it's the only viable strategy to do that. It's quite possible the LLVM developers' strategy of changing APIs quickly to thwart proprietarization might work. I really doubt it, though, and here's why:

    I'll concede that LLVM was started with the best of academic intentions to make better compiler technology and share it freely. (I've discussed this issue at some length with Chris Lattner directly, and I believe he actually is someone who wants more software freedom in the world, even if he disagrees with copyleft as a strategy.) IMO, though, the problem we face is exploitation by various anti-copyleft, software-freedom-unfriendly companies that seek to remove every copyleft component from any software stack. Their reasons for pursuing that goal may or may not be rational, but the collateral damage has already become clear: it's possible today to license proprietary improvements to LLVM that aren't released as Free Software. I predict this will become more common, notwithstanding any technical efforts of LLVM developers to thwart it. (Consider, by way of historical example, that proprietary combined works with the Apache web server continue to this very day, despite Apache developers' decades of “we'll break APIs, so don't keep your stuff proprietary” claims.)

    Copyleft is always a trade-off between software freedom and adoption. I don't admonish people for picking the adoption side over the software freedom side, but I do think as a community we should be honest with ourselves that copyleft remains the best strategy to prevent proprietary improvements and forks and no other strategy has been as successful in reaching that goal. And, those who don't pick copyleft have priorities other than software freedom ranked higher in their goals.

    As a penultimate point, I'll reiterate something that Joe Buck pointed out on the LWN thread: a lot of effort was put into creating a licensing solution that solved the copyleft concerns of GCC plugins. FSF's worry for more than a decade (reaching back into the late 1990s) was that a GCC plugin architecture would allow GCC's intermediate representation to be written out to a file, which would, in turn, allow a wholly separate program to optimize the software by reading and writing that file format, and thus circumvent the protections of copyleft. The GCC Runtime Library Exception (GCC RTL Exception) is (in my biased opinion) an innovative licensing solution that solves the problem — with the ironic outcome that you are only permitted to perform proprietary optimization with GCC on GPL'd software, but not on proprietary software.

    The problem was that the GCC RTL Exception came too late. While I led the GCC RTL Exception drafting process, I don't take the blame for delays. In fact, I fought for nearly a year to prioritize the work when FSF's outside law firm was focused on other priorities and ignored my calls for urgency. I finally convinced everyone, but the work got done far too late. (IMO, it should have been timed for release in parallel with GPLv3 in June 2007.)

    Finally, I want to reiterate that copyleft is a strategy, not a moral principle. I respect the LLVM developers' decision to use a different strategy for software freedom, even if it isn't my preferred strategy. Indeed, I respect it so much that I supported Conservancy's offer of membership to LLVM in Software Freedom Conservancy. I still hope the LLVM developers will take Conservancy up on this offer. I think that, regardless of a project's preferred strategy for software freedom — copyleft or non-copyleft — it's important for the developers to have a not-for-profit charity as a gathering place for developers, separate from their for-profit employer affiliations.

    Undue for-profit corporate influence is the biggest problem that software freedom faces today. Indeed, I don't know a single developer in our community who likes to see their work proprietarized. Developers, generally speaking, want to share their code with other developers. It's lawyers and business people with dollar signs in their eyes who want to make proprietary software. Those people sometimes convince developers to make trade-offs (which I don't agree with myself) to work on proprietary software (usually in exchange for funding some of their work time on upstream Free Software). Meanwhile, those for-profit-corporate folks frequently spread lies and half-truths about the copyleft side of the community — in an effort to convince developers that their Free Software projects “won't survive” if those developers don't follow the exact plan The Company proposes. I've experienced these manipulations myself — for example, in April 2013, a prominent corporate lawyer with an interest in LLVM told me to my face that his company would continue spreading false rumors that I'd use LLVM's membership in Conservancy to push the LLVM developers toward copyleft, despite my public statements to the contrary. (Again, for the record, I have no such intention and I'd be delighted to help LLVM be led in a non-profit home by its rightful developer leaders, whichever Open Source and Free Software license they choose.)

    In short, the biggest threat to the future of software has always been for-profit companies who wish to maximize profits by exploiting the code, developers and users while limiting their software freedom. Such companies try every trick in pursuit of that goal. As such, I prefer copyleft as a strategy. However, I don't necessarily admonish those who pick a different strategy. The reason that I encourage membership of non-copylefted projects in Conservancy (and other 501(c)(3) charities) is to give those projects the benefits of a non-profit home that maximizes software freedom using the project's chosen strategy, whatever it may be.

    Posted on Sunday 26 January 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2014-01-24: Choosing Software Freedom Costs Money Sometimes

    Apparently, the company that makes my hand lotion brand uses coupons.com for its coupons. The only way to print a coupon is to use a proprietary software browser plugin called “couponprinter.exe” (which presumably implements some form of “coupon DRM”).

    So, as of today, I actually have a price, in dollars, that it cost me to avoid proprietary software. Standing up for software freedom cost me $1.50 today. :) I suppose there are some people who would argue in this situation that they have to use proprietary software, but of course I'm not one of them.

    The interesting thing is that this program has an OS X and a Windows version, but nothing for iOS or Android/Linux. Now, if they had the latter, it'd surely be proprietary software anyway.

    That said, coupons.com does have a “send a paper copy to a postal address” option, and I have ordered the coupon to be sent to me. But it expires 2014-03-31 and I'm out of hand lotion today; thus, whether or not I get to use the coupon before expiration is an open question.

    I'm curious to try to order as many copies as possible of this coupon just to see if they implement ARM properly.

    ARM is of course not a canonical acronym to mean what I mean here. I mean “Analog Restrictions Management”, as opposed to the DRM (“Digital Restrictions Management”) that I mentioned above. I doubt ARM will become a standard acronym for this, given that the ARM TLA is already quite overloaded.

    Posted on Friday 24 January 2014 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

2013

December

  • 2013-12-05: Considerations on a non-profit home for your project

    [ This post of mine is cross-posted from Conservancy's blog.]

    I came across this email thread this week, and it seems to me that Node.js is facing a standard decision that comes up in the life of most Open Source and Free Software projects. It inspired me to write some general advice to Open Source and Free Software projects who might be at a similar crossroads0. Specifically, at some point in the history of a project, the community is faced with the decision of whether the project should be housed at a specific for-profit company, or have a non-profit entity behind it instead. Further, project leaders must consider, if they pursue the latter, whether the community should form its own non-profit or affiliate with one that already exists.

    Choosing a governance structure is a tough and complex decision for a project — and there is always some status quo that (at least) seems easier. Thus, there will always be a certain amount of acrimony in this debate. I have my own biases on this, since I am the Executive Director of Conservancy, a non-profit home for Open Source and Free Software projects, and because I have studied the issue of non-profit governance for Open Source and Free Software for the last decade. I have a few comments based on that experience that might be helpful to projects who face this decision.

    The obvious benefit of a project housed in a for-profit company is that they'll usually have more resources to put toward the project — particularly if the project is of strategic importance to their business. The downside is that the company almost always controls the trademark, perhaps controls the copyright to some extent (e.g., by being the sole beneficiary of a very broad CLA or ©AA), and likely has a stronger say in the technical direction of the project. There will also always be “brand conflation” when something happens in the project (Did the project do it, or did the company?), and such is easily observable in the many for-profit-controlled Open Source and Free Software projects.

    By contrast, while a for-profit entity only needs to consider the interests of its own shareholders, a non-profit entity is legally required to balance the needs of many contributors and users. Thus, non-profits are a neutral home for activities of the project, and a neutral place for the trademark to live, perhaps a neutral place to receive CLAs (if the community even wants a CLA, that is), and to do other activities for the project. (Conservancy, for its part, has a list of what services it provides.)

    There are also differences among non-profit options. The primary two USA options for Open Source and Free Software are 501(c)(3)'s (public charities) and 501(c)(6)'s (trade associations). 501(c)(3) public charities must always act in the public good, while 501(c)(6) trade associations act in the interest of their paying for-profit members. I'm a fan of the 501(c)(3)-style of non-profit, again, because I help run one. IMO, the choice between the two really depends on whether you want the project run and controlled by a consortium of for-profit businesses, or if you want the project to operate as a public charity focused on advancing the public good by producing better Open Source and Free Software. BTW, the big benefit, IMO, to a 501(c)(3) is that the non-profit only represents the interests of the project with respect to the public good, so the IRS prohibits the charity from conflating its motives with any corporate interest (be they single or aggregate).

    If you decide you want a non-profit, there's then the decision of forming your own non-profit or affiliating with an existing non-profit. Folks who say it's easy to start a new non-profit are (mostly) correct; the challenge is in keeping it running. It's a tremendous amount of work and effort to handle the day-to-day requirements of non-profit management, which is why so many Open Source and Free Software projects choose to affiliate or join with an existing non-profit rather than form their own. I'd suggest strongly that any community look into joining an existing home, in part because many non-profit umbrellas permit the project to later “spin off” and form its own non-profit. Thus, joining an existing entity is not always a permanent decision.

    Anyway, as you've guessed, thinking about these questions is a part of what I do for a living. Thus, I'd love to talk (by email, phone or IRC) with anyone in any Open Source and Free Software community about joining Conservancy specifically, or even just to talk through all the non-profit options available. There are many options and existing non-profits, all with their own tweaks, so if a given community decides it'd like a non-profit home, there's lots to choose from and a lot to consider.

    I'd note finally that the different tweaks between non-profit options deserve careful attention. I often see people commenting that structures imposed by non-profits won't help with what they need. However, not all non-profits have the same type of structures, and they focus on different things. For example, Conservancy doesn't dictate anything regarding specific CLA rules, licensing, development models, and the like. Conservancy generally advises about all the known options, and helps the community come to the conclusions it wants and implement them well. The only place Conservancy has strict rules is with regard to the requirements and guidelines the IRS puts forward on 501(c)(3) status. Meanwhile, other non-profits do have strict rules for development models, or CLAs, and the like, which some projects prefer for various reasons.

    Update 2013-12-07: I posted a follow up on Node.js mailing list in the original discussion that inspired me to write the above.


    0BTW, I don't think how a community comes to that crossroads matters that much, actually. At some point in a project's history, this issue is raised, and, at that moment, a decision is before the project.

    Posted on Thursday 05 December 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

November

  • 2013-11-13: The Trade-offs of Unpaid Free Software Labor

    I read with interest Ashe Dryden's blog post entitled The Ethics of Unpaid Labor and the OSS Community0, and I agree with much of it. At least, I agree with Dryden much more than I agree with Hanson's blog post that inspired Dryden's, since Hanson's seems almost completely unaware of the distinctions between Free Software funding in non-profit and for-profit settings, and I think Dryden's criticism that Hanson's view is narrowed by “white-male in a wealthy country” privilege is quite accurate. I think Dryden does understand the distinctions of non-profit vs. for-profit Free Software development, and Dryden's post has an excellent discussion of how wealthy and powerful individuals by default have more leisure time to enter the (likely fictional) Free Software development meritocracy via pure volunteer efforts.

    However, I think two key points remain missing in the discussions so far on this topic. Specifically, (a) the issue of license design as it relates to non-monetary compensation of volunteer efforts and (b) developers' goals in using volunteer Free Software labor to bootstrap employment. The two issues don't interrelate that much, so I'll discuss them separately.

    Copyleft Requirements as “Compensation” For Volunteer Contribution

    I'm not surprised that this discussion about volunteer vs. paid labor is happening completely bereft of reference to the licenses of the software in question. With companies and even many individuals so rabidly anti-copyleft recently, I suspect that everyone in the discussion is assuming that the underlying license structure of these volunteer contributions is non-copyleft.

    Strong copyleft's design, however, deals specifically with the problems inherent in uncompensated volunteer labor. By avoiding the possibility of proprietary derivatives, copyleft ensures that volunteer contributions do have, for lack of a better term, some strings attached: the requirement that even big and powerful companies that use the code treat the lowly volunteer contributor as a true equal.

    Companies have resources that allow them to quickly capitalize on improvements to Free Software contributed by volunteers, and thus the volunteers are always at an economic disadvantage. Requiring that the companies share improvements with the community ensures that the volunteers' labor doesn't go entirely uncompensated: at the very least, the volunteer contributor has equal access to all improvements.

    This phenomenon is in my opinion an argument for why there is less risk and more opportunity for contributors to copylefted codebases. Copyleft allows for some level of opportunity to the volunteer contributor that doesn't necessarily exist with non-copylefted codebases (i.e., the contributor is assured equal access to later improvements), and certainly doesn't exist with proprietary software.

    Volunteer Contribution As Employment Terms-Setting

    An orthogonal issue is the trend of employers using Free Software contribution as a hiring criterion. I've frankly found this trend disturbing for a wholly different reason than those raised in the current discussion. Namely, most employers who hire based on past Free Software contribution don't employ these developers to work on Free Software!

    Free Software is, frankly, in a state of cooption. (Open Source itself, as a concept, is part of that cooption.) As another part of that cooption, teams of proprietary software (or non-released, secret software) developers use methodologies and workflows that were once unique to Free Software. Therefore, these employers want to know if job candidates know those workflows and methodologies so that the employer can pay the developer to stop using those techniques for the good of software freedom and instead use them for proprietary and/or secretive software development.

    When I was in graduate school, one of the reasons I keenly wanted to be a core contributor to Free Software was not to just get paid for any software development, but specifically to gain employment writing software that would be Free Software. In those days, you picked a codebase you liked because you wanted to be employed to work on that upstream codebase. In fact, becoming a core contributor for a widely used copylefted codebase was once commonly a way to ensure you'd have your pick of jobs being paid to work on that codebase.

    These days, most developers, even though they are required to use some Free Software as part of their jobs, usually are assigned work on some non-Free Software that interacts with that Free Software. Thus, the original meme, that began in the early 1990s, of volunteer for a Free Software codebase so you can later get paid to work on it, has recently morphed into volunteer to work on Free Software so you can get a job working on some proprietary software. That practice is a complete corruption and cooption of the Free Software culture.


    All that said, I do agree with Dryden that we should do more funding at the entry level of Free Software development, and that internships in particular, such as those through the OPW, are, as Dryden writes, absolutely essential to solve the obvious problem of under-representation of those with limited leisure time for volunteer contribution. I think such funding is best when it's done in a non-profit rather than a for-profit setting, for reasons that would require yet another blog post to explain.


    0Please note that I haven't seen any of the comments on Dryden's blog post or many of the comments that spawned it, because as near as I can tell, I can't use Disqus without installing proprietary software on my computer, through its proprietary Javascript. If someone can tell me how to read Disqus discussions without proprietary Javascript, I'd appreciate it.

    Posted on Wednesday 13 November 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2013-11-08: Canonical, Ltd.'s Trademark Aggression

    I was disturbed to read that Canonical, Ltd.'s trademark aggression, which I've been vaguely aware of for some time, has reached a new height. And, I say this as someone who regularly encourages Free Software projects to register trademarks, and to occasionally do trademark enforcement and also to actively avoid project policies that might lead to naked licensing. Names matter, and Free Software projects should strive to strike a careful balance between assuring that names mean what they are supposed to mean, and also encourage software sharing and modification at the same time.

    However, Canonical, Ltd.'s behavior shows what happens when lawyers and corporate marketing run amok and fail to strike that necessary balance. Specifically, Canonical, Ltd. sent a standard cease and desist (C&D) letter to Micah F. Lee, for running fixubuntu.com, a site that clearly to any casual reader is not affiliated with Canonical, Ltd. or its Ubuntu® project. In fact, the site is specifically telling you how to undo some anti-privacy stuff that Canonical, Ltd. puts into its Ubuntu, so there is no trademark-governed threat to its Ubuntu branding. Lee fortunately got legal assistance from the EFF, who wrote a letter explaining why Canonical, Ltd. was completely wrong.

    Anyway, this sort of bad behavior is so commonplace by Canonical, Ltd. that I'd previously decided to stop talking about it when it reached the crescendo of Mark Shuttleworth calling me a McCarthyist because of my Free Software beliefs and work. But, one comment on Micah's blog inspired me to comment here. Specifically, Jono Bacon, who leads Ubuntu's PR division under the dubious title of Community Manager, asks this insultingly naïve question as a comment on Micah's blog: Did you raise your concerns the team who sent the email?.

    I am sure that Jono knows well what a C&D letter is and what one looks like. I also am sure that he knows that any lawyer would advise Micah not to engage with an adverse party on his own over an issue of trademark dispute without adequate legal counsel. Thus, for Jono to suggest that there is some Canonical, Ltd. “team” that Micah should be talking to not only pathetically conflates Free Software community operations with corporate legal aggression, but also seems like a Canonical, Ltd. employee subtly suggesting that those who receive C&D's from Canonical, Ltd.'s legal departments should engage in discussion without seeking their own legal counsel.

    Free Software projects should get trademarks of their own. Indeed, I fully support that, and I encourage folks interested in this issue to listen to Pam Chestek's excellent talk on the topic at FOSDEM 2013 (which Karen Sandler and I broadcast on Free as in Freedom). However, true Free Software communities don't try to squelch Free Speech that criticizes their projects. It's deplorable that Canonical, Ltd. has an organized campaign between their lawyers and their public relations folks like Jono to (a) send aggressive C&D letters to Free Software enthusiasts who criticize Ubuntu and (b) follow up on those efforts by subtly shaming those who lawyer-up upon receiving that C&D.

    I should finally note that Canonical, Ltd. has an inappropriate and Orwellian predilection for coopting words from our community (including the word “community” itself, BTW). Most people don't know that I myself registered the domain name canonical.org back on 1999-08-06 (when Shuttleworth was still running Thawte) for a group of friends who liked to use the word canonical in the canonical way, and still do so today. However, thanks to Shuttleworth, it's difficult to use canonical in the canonical way anymore in Free Software circles, because Shuttleworth coopted the term and brand-markets on top of it. Ubuntu, for its part, is a word meaning human kindness that Shuttleworth has also coopted for his often unkind activities.


    Update at 16:17 on 2013-11-08: Canonical, Ltd. has posted a response regarding their enforcement action, which claims that their trademark policy is unusually permissive. This is true if the universe is “all trademark policies in the world”, but it is false if the universe is “Open Source and Free Software trademark policies”. Of course, like any good spin doctors, Canonical, Ltd. doesn't actually say this explicitly.

    Similarly, Canonical, Ltd. restates the oft-over-simplified claim that, in trademark law, a mark owner is expected to protect the authenticity of a trademark, or else they risk losing the mark. What they don't tell you is why they believe that failure to enforce in this specific instance against fixubuntu.com posed any specific risk. Why didn't they tell us that? Because it didn't. I suspect they could have simply asked for the disclaimer that Micah gave them willingly, and that would have addressed the aforementioned risk adequately.

    Posted on Friday 08 November 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

October

  • 2013-10-07: Using Perl PayPal API on Debian wheezy

    I recently upgraded to Debian wheezy. On Debian squeeze, I had no problem using the stock Perl module Business::PayPal::API to import PayPal transactions for Software Freedom Conservancy, via the Debian package libbusiness-paypal-api-perl.

    After the wheezy upgrade, something went wrong and it stopped working. I reviewed some similar complaints that seemed to relate to this resolved bug, but I don't think that was my problem.

    I ran strace to dig around and see what was going on. The working squeeze install did this:

    select(8, [3], [3], NULL, {0, 0})       = 1 (out [3], left {0, 0})
    write(3, "SOMEDATA"..., 1365) = 1365
    rt_sigprocmask(SIG_BLOCK, [ALRM], [], 8) = 0
    rt_sigaction(SIGALRM, {SIG_DFL, [], 0}, {SIG_DFL, [], 0}, 8) = 0
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    rt_sigprocmask(SIG_BLOCK, [ALRM], [], 8) = 0
    rt_sigaction(SIGALRM, {0xxxxxx, [], 0}, {SIG_DFL, [], 0}, 8) = 0
    rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
    alarm(60)                               = 0
    read(3, "SOMEDATA", 5)               = 5
    

    But the same script on wheezy did this at the same point:

    select(8, [3], [3], NULL, {0, 0})       = 1 (out [3], left {0, 0})
    write(3, "SOMEDATA"..., 1373) = 1373
    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)
    select(0, NULL, NULL, NULL, {0, 100000}) = 0 (Timeout)
    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)
    select(0, NULL, NULL, NULL, {0, 100000}) = 0 (Timeout)
    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)
    select(0, NULL, NULL, NULL, {0, 100000}) = 0 (Timeout)
    read(3, 0xxxxxxxx, 5)                   = -1 EAGAIN (Resource temporarily unavailable)
    

    I was pretty confused, and basically I still am, but then I noticed this in the documentation for Business::PayPal::API, regarding SOAP::Lite:

    if you have already loaded Net::SSLeay (or IO::Socket::SSL), then Net::HTTPS will prefer to use IO::Socket::SSL. I don't know how to get SOAP::Lite to work with IO::Socket::SSL (e.g., Crypt::SSLeay uses HTTPS_* environment variables), so until then, you can use this hack: local $IO::Socket::SSL::VERSION = undef;
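
    In this script, that suggested hack amounts to something like the following (a minimal sketch of what I tried; the actual API calls are elided):

    # The hack from the documentation: pretend IO::Socket::SSL isn't
    # loaded while the PayPal calls run, so that Net::HTTPS falls back
    # to Net::SSL for the duration.
    {
        local $IO::Socket::SSL::VERSION = undef;
        # ... construct the Business::PayPal::API object and call
        # TransactionSearch()/GetTransactionDetails() in here ...
    }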

    That hack didn't work, but I did confirm via strace that on wheezy, IO::Socket::SSL was getting loaded instead of Net::SSL. So, I did this, which was a complete and much worse hack:

    use Net::SSL;     # load Net::SSL up front, before anything can pull
    use Net::SSLeay;  # in IO::Socket::SSL, so Net::HTTPS prefers Net::SSL
    $ENV{'PERL_LWP_SSL_VERIFY_HOSTNAME'} = 0;  # Net::SSL can't verify hostnames, so tell LWP not to try
    # Then:
    use Business::PayPal::API qw(GetTransactionDetails TransactionSearch);
    

    … And this incantation worked. This isn't the right fix, but I figured I should publish this, as this ate up three hours, and it's worth the 15 minutes to write this post, just in case someone else tries to use Business::PayPal::API on wheezy.
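
    By the way, if you want to check which backend Net::HTTPS settled on without reaching for strace, a tiny script like this works (a quick sketch; it assumes the Net::HTTPS shipped with wheezy's libwww-perl, which records its choice in $Net::HTTPS::SSL_SOCKET_CLASS):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Load everything in the same order as the real script, then ask
    # Net::HTTPS which SSL implementation it chose.
    use Net::SSL;
    use Net::SSLeay;
    use Business::PayPal::API qw(GetTransactionDetails TransactionSearch);
    use Net::HTTPS;

    print "Net::HTTPS backend: $Net::HTTPS::SSL_SOCKET_CLASS\n";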

    I used to be a Perl expert once upon a time. This situation convinced me that I'm not. In the old days, I would've actually figured out what was wrong.

    Posted on Monday 07 October 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

September

  • 2013-09-23: The Dangers of VC-Backed “Open Source”

    I'm thankful to Christopher Allan Webber for pointing me at this interesting post from Guillaume Lesniak, the developer of Focal (a once fully GPL'd camera application for Android/Linux), and how he was (IMO) pressured to give a proprietary license to the new CyanogenMod, Inc.

    I mostly think Guillaume's post speaks for itself, and I encourage readers of my blog to read it as well. When I read it, I couldn't help thinking about how this is what Free Software often becomes in the world of “Open Source”. Specifically, VCs, and the companies they back, just absolutely love to say they're doing “Open Source”, but it just goes to show the clear difference between “doing Open Source” and giving users software freedom. These VC-backed companies don't really want to share freedoms with their users: they want to exploit Free Software licenses to market more proprietary software.

    Years ago, I helped get the Replicant project started. I haven't been an active contributor to the project, but I hope that folks can see this is an actual, community-oriented, volunteer-run Free Software alternative firmware based on Android/Linux. In my opinion, any project controlled primarily by one company will likely never be all those things. I urge CyanogenMod users to switch to Replicant today!

    Posted on Monday 23 September 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

June

  • 2013-06-26: Congratulations to Harald Welte on Another One

    I'd like to congratulate Harald Welte on yet another great decision in the Berlin court, this time regarding a long-known GPL violator called Fantec. There are so many violations of this nature, and they're so trivially easy to find, that it's often tough to pick which one to take action on. Harald has done a great job being selective to make good examples of violators.

    Just as a bit of history, I first documented and confirmed the Fantec violation in January 2009, based on this email sent to the BusyBox mailing list. I discovered that the product didn't seem to be regularly on sale in the USA, so it wasn't ultimately part of the lawsuit that Conservancy and Erik Andersen filed in late 2009.

    However, since Fantec products were on sale mostly in Germany, it was a great case for Harald to pursue. I'm not surprised in the least that even three years after I confirmed the violation, gpl-violations.org found Fantec still out of compliance and was able to take action at that point. It's not surprising either that it took an entire year thereafter to get it resolved. My reaction to that was actually: Darn, that Berlin Court acts fast compared to Courts in the USA. :)

    Posted on Wednesday 26 June 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2013-06-23: Matthew Garrett on Mir

    Matthew Garrett has a good blog post regarding Mir and Canonical, Ltd.'s CLA. I encourage folks to read it; I added a comment there.

    Posted on Sunday 23 June 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

April

  • 2013-04-06: The Punditocracy of Unelected Technocrats

    All this past week, people have been emailing and/or pinging me on IRC to tell me to read the article, The Meme Hustler by Evgeny Morozov. The article is quite long, and while my day-job duties left me TL;DR'ing it for most of the week, I've now read it, and I understand why everyone kept sending me the article. I encourage you not to TL;DR it any longer yourself.

    Morozov centers his criticisms on Tim O'Reilly, but that's not all the article is about. I spend my days walking the Free Software beat as a (self-admitted) unelected politician, and I've encountered many spin doctors, including O'Reilly — most of whom wear the trappings of advocates for software freedom. As Morozov points out, O'Reilly isn't the only one; he's just the best at it. Morozov's analysis of O'Reilly can help us understand these P.T. Barnums in our midst.

    In 2001, I co-wrote Freedom or Power? with RMS in response to O'Reilly's very Randian arguments (which Morozov discusses). I remember working on that essay for (literally) days with RMS, in-person at the FSF offices (and at his office at MIT), while he would (again, literally) dance around the room, deep in thought, and then run back to the screen where I was writing to suggest a new idea or phrase to add. We both found it was really difficult to craft the right rhetoric to refute O'Reilly's points. (BTW, most people don't know that there were two versions of my and RMS' essay; the original one was published as a direct response to O'Reilly on his own website. One of the reasons RMS and I redrafted as a stand-alone piece was that we saw our original published response actually served to increase uptake of O'Reilly's position. We decided the issue was important enough it needed a piece that would stand on its own indefinitely to defend that key position.)

    Meanwhile, I find it difficult to express more than a decade later how turbulent that time was for hard-core Free Software advocates, and how concerted the marketing campaign against us was. While we were in the middle of Microsoft's attacks claiming the GPL was an unAmerican cancer, we also had O'Reilly's the freedom that matters is the freedom to pick one's own license meme propagating fast. There were dirty politics afoot at the time, too: this all occurred during the same three-month period when Eric Raymond called me an inmate taking over the asylum. In other words, the spin doctors were attacking software freedom advocates from every side! Morozov's article captures a bit of what it feels like to be on the wrong side of a concerted, organized PR campaign to manipulate public opinion.

    However, I suppose what I like most about Morozov's article is that it's the first time I've seen a rhetorical trick that spin doctors use discussed publicly and coherently. Notice, when you listen to a pundit, their undue sense of urgency; they invariably act as if what's happening now is somehow (to use a phrase the pundits love): “game changing”. What I typically see is such folks use urgency as a reason to make compromises quickly. Of course, the real goal is a get-rich-(or-famous)-quick scheme for themselves — not a greater cause. The sense of urgency leaves many people feeling that if they don't follow the meme, they'll be left in the dust. A colleague of mine once described this entrancing effect as dream-like, and that desire to stay asleep and keep dreaming is what lets the hustlers keep us under their spell.

    I've admittedly spent more time than I'd like refuting these spin doctors (or, as Morozov also calls them, meme hustlers). Such work seems unfortunately necessary because Free Software is in an important, multi-decade (but admittedly not urgent :) battle against cooption (which, BTW, every social justice movement throughout history has faced). The tide of cooption by spin doctors can be stemmed only with constant vigilance, so I practice it.

    Still, this all seems a cold, academic way to talk about the phenomenon. For these calculating Frank Luntz types, winning is enough; rhetoric, to them, is almost an end in itself (which I guess one might dub “Cicero 2.0”). For those of us who believe in the cause, the “game for the game's sake” remains distasteful because there are real principles at stake for us. Meanwhile, the most talented of these meme hustlers know well that what's a game to them matters emotionally to us, so they use our genuine concern against us at every turn. And, to make it worse, there's more of them out there than most people realize — usually carefully donning the trappings of allies. Kudos to Morozov for reminding us how many of these emperors have no clothes.

    Posted on Saturday 06 April 2013 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

2012

December

  • 2012-12-18: Perl is Free Software's COBOL, and That's Ok!

    In 1991, I'd just gotten my first real programming job for two reasons: nepotism, and a willingness to write code for $12/hour. I was working as a contractor to a blood testing laboratory, where the main development job was writing custom software to handle, process, and do statistical calculations on blood testing results, primarily for paternity testing.

    My father had been a software developer since the early 1970s, and worked as a contractor at this blood lab since the late 1970s. As the calendar had marched toward the early 1990s, technology cruft had collected. The old TI mainframe, once the primary computer, now only had one job left: statistical calculation for paternity testing, written in TI's Pascal. Slowly but surely, the other software had been rewritten and moved to an AT&T 3B2/600 running Unix System VR3.2.3. That latter machine was the first access I had to a real computer, and certainly the first time I had access to Usenet. This changed my life.

    Ironically, even on that 3B2, the accounting system software was written in COBOL. This seemed like “more cruft” to me, but fortunately there was a third-party vendor who handled that software, so I didn't have to program in COBOL.

    I had the good fortune, actually, to help with the interesting problems, which included grokking data from a blood testing machine that dumped a bunch of data in some weird reporting format onto its RS-232 port at the end of every testing cycle. We had to pull the data off that RS-232 interface and load it into the database. Perl, since it treated regular expressions as first-class citizens, and had all the Unix device fundamentals baked in as native (for the RS-232 I/O), was the obvious choice.
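
    Just to give a flavor of that job, a toy reconstruction in modern Perl (the device path and the record format here are invented; the real ones are long lost) might look like:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read the machine's end-of-cycle report off the serial port line
    # by line, and pull out the fields we need with a regex.
    open(my $port, '<', '/dev/ttyS0') or die "can't open serial port: $!";
    while (my $line = <$port>) {
        if ($line =~ /^SAMPLE\s+(\d+)\s+RESULT\s+([\d.]+)/) {
            my ($sample_id, $value) = ($1, $2);
            print "sample $sample_id => $value\n";  # in real life: load into the DB
        }
    }
    close($port);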

    After that project, I was intrigued by this programming language that had made the job so easy. My father gave me a copy of the Camel book — which was, at that point, almost hot off the presses. I read it over a weekend and I decided that I didn't really want to program in any other language again. Perl was just 4 years old then; it was a young language — Perl 4 had just been released. I started trying to embed Perl into our database system, but it wasn't designed for embedding into other systems as a scripting language. So, I ended up using Tcl instead for the big project of rewriting the statistical calculation software to replace the TI mainframe. After a year or two writing tens of thousands of lines of Tcl, I was even more convinced that I'd rather be writing in Perl. When Perl 5 was released, I switched back to Perl and never really looked back.

    Perl ultimately became my first Free Software community. I lurked on perl5-porters for years, almost always a bit too timid to post, or ever send in a patch. But, as I finished my college degree and went to graduate school, I focused my thesis work on Perl and virtual machines. I went to the Perl conference every year. I was even in the room for the perl5-porters meeting the day after Jon Orwant's staged tantrum, which was the catalyst for the Perl 6 effort. I wrote more than a few RFC's during the Perl 6 specification process. And, to this day, even though I've since done plenty of Python development, too, when I need to program to do something, I open an Emacs buffer and start typing #!/usr/bin/perl.

    Meanwhile, I never did learn COBOL. But, I was amazed to hear that multiple folks who graduated with me eventually got jobs at a health insurance company. The company trained them in COBOL, so that they could maintain COBOL systems all day. Every once in a while, I idly search a job site for COBOL. Today, that search is returning 2,338 open jobs. Most developers never hear about it, of course. It's far from the exciting new technology, but it's there, it's needed and it's obviously useful to someone. Indeed, the COBOL standard was last updated just 10 years ago, in 2002!

    I notice these days, though, that when I mention having done a lot of Perl development in my life, the average Javascript, Python, or Haskell developer looks at me like I looked at my dad when he told me that accounting system was written in COBOL. I'd bet they'd have my same sigh of relief when told that “someone else” maintains that code and they won't have to bother with it.

    Yet, I still know people heavily immersed in the Perl community. Indeed, there is a very active Perl community out there, just like there's an active COBOL community. I'm not active in Perl like I once was, but it's a community of people, who write new code and maintain old code in Perl, and that has value. More importantly, though, (and unlike COBOL), Perl was born on Usenet, and was released as Free Software from the day of its first release, twenty-five years ago today. Perl was born as part of Free Software culture, and it lives on.

    So, I get it now. I once scoffed at the idea that anyone would write in COBOL anymore, as if the average COBOL programmer was some sort of second-class technology citizen. COBOL programmers in 1991, and even today, are surely good programmers — doing useful things for their jobs. The same is true of Perl these days: maybe Perl is finally getting a bit old fashioned — but there are good developers, still doing useful things with Perl. Perl is becoming Free Software's COBOL: an aging language that still has value.

    Perl turns 25 years old today. COBOL was 25 years old in 1984, right at the time when I first started programming. To those young people who start programming today: I hope you'll learn from my mistake. Don't scoff at the Perl programmers. 25 years from now, you may regret scoffing at them as much as I regret scoffing at the COBOL developers. Programmers are programmers; don't judge them because you don't like their favorite language.

    Update (2013-04-12): I posted a comment on Allison Randal's blog about similar issues of Perl's popularity.

    Posted on Tuesday 18 December 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2012-12-14: The Symmetry of My UnAmerican McCarthyist Cancer

    In mid-2001, after working for FSF part-time for the prior year and a half, I'd actually just started working at FSF full-time. I'd recently relocated to Cambridge, MA to work on-site at the FSF offices. The phone started ringing. The aggressive Microsoft attacks had started; the press wanted to know FSF's response. First, Ballmer'd said the GPL was a cancer. Then, Allchin said it was unAmerican1. Then, Bill Gates added (rather pointlessly and oddly) that it was a pac-man that eats up your business. Microsoft even shopped weird talking-points to the press as part of their botched political axe-job on FSF.

    FSF staffing levels have always been small, but FSF was even smaller then. I led a staff of four to respond to the near constant press inquiries for the entire summer. We coordinated speaking engagements for RMS related to the attacks, and got transcripts published. We did all the stuff that you do when the wealthiest corporation in the world decides it wants to destroy a small 501(c)(3) charity that publishes a license that fosters software sharing. From my point of view, I'll admit now that I was, back then, in slightly over my head: this was my first-ever non-software-development job. I was new to politics, new to management, new to just about everything that I needed to do to lead the response to something like that. I learned fast; hopefully it was fast enough.

    The experience made a huge impression on me. I quickly got comfortable with the idea that, if you work for a radical social justice cause, there's always someone powerful attacking your political positions, but if you believe your cause is just and what you're doing is right, you'll survive. I found that good non-profit work is indeed something that just one of us can do against all that money and power trying to crush us into roaches0. Non-profit work really was the dream career I'd always wanted.

    Still, the experience left me permanently distrustful of Microsoft. I've tried to keep an open mind, and watch for potential change in behavior. I admittedly don't think Microsoft became a friend to Free Software in the 11 years since they put me through the wringer during what was almost literally my first day on the job as FSF's Executive Director (a position I ultimately held until 2005). But, I am now somewhat sure Microsoft's executives aren't hatching new plans to kill copyleft every morning anymore. Indeed, I was excited this week to see that my colleagues at the Samba Project acknowledged Microsoft's help in creating documentation that allowed Samba to implement compatibility with Active Directory. Even I have to admit that companies do change, and sometimes a little bit for the better.

    But, companies don't always change for the better. Over an even shorter period, I've watched another company get worse at almost the same rate as Microsoft's improving.

    Specifically, this week, Mark Shuttleworth of Canonical, Ltd. said that those of us who stand strongly against proprietary software device drivers are insecure McCarthyists. I wonder if Mark realized the irony of using the term McCarthyism to refer to the same people who Microsoft called unAmerican just a decade ago.

    I marvel at these shifting winds of politics. These days, the guy out there slurring against copyleft advocates claims to be the biggest promoter of Free Software himself, and in fact built most of his product on the Free Software that is often defended by the people he claims are on a witch-hunt.

    I wrote many blog posts in 2010 critical of Canonical, Ltd. and its policies. Someone asked me in October if I'd stopped because Canonical, Ltd. got better, or if they'd just bought me off. I answered simply, saying, First of all, Mark hasn't shared any of his unfathomable financial wealth with me. But, more importantly, Mark is making enough bad decisions that Canonical, Ltd.'s behavior is now widely criticized, even by the tech press. Others are doing a good enough job pointing out the problems now; I don't have to. Indeed, I'm supportive of RMS' recent comments about Canonical, Ltd. and its Ubuntu project (and RMS surely has a larger microphone than I do, since he's famous). I've also got nothing to add to his well-argued points, so I simply endorse them.

    Nevertheless, I just couldn't let the situation go without commenting. This week, I watched Microsoft (who once ran a campaign to kill FSF's flagship license) do something helpful to Free Software, while also watching Canonical, Ltd. (who has helped write a lot of GPL'd software) pull a page from Microsoft's old playbook to attack GPL advocates. That's got an intriguing symmetry to it. It's not “history repeating itself”, because all the details are different. But, one fact is still exactly the same: The Wealthy sure do like to call us names when it suits them.

    Update 2012-12-15: In addition to my usual identi.ca comment thread (which has been quite active on this post), there's also a comment thread on Hacker News and also one on reddit about this blog post.

    Update 2012-12-18: Karen Sandler and I discuss some of the issues related to Shuttleworth's comments on Free as in Freedom, Episode 0x36.


    0 Strangely, my head (somewhat-uselessly) still contains now, as it did then, verbatim copies of Dead Kennedys' lyric sheets, so I quoted that easily from memory. Fortunately, I am pretty sure verbatim copying something into your own brain isn't copyright infringement (yet).

    1I realized after reading some of the reddit comments that it might be useful to link here to the essay I wrote at the time of Allchin's comments, called The GNU GPL and the American Dream.

    Posted on Friday 14 December 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2012-12-09: Who Ever Thought APIs Were Copyrightable, Anyway?

    Back in the summer, there was a widely covered story about Judge Alsup's decision regarding copyrightability in the Oracle v. Google case. Oracle has appealed the verdict, so presumably this will enter the news again at some point. I'd been meaning to write a blog post about it since it happened, and Karen Sandler and I had also been planning an audcast to talk about it.

    Karen and I finally released our audcast on the subject last week: episode 0x35 of FaiF. The fact of the matter is, as Karen has been pointing out, there actually isn't much to say.

    Meanwhile, the upside of my delay in commenting is that I can respond to some of the comments that I've seen in the wake of the decision's publication. The most common confusion about Alsup's decision, in my view, comes from the imprecision of programmers' use of the term “API”. The API and the implementation of that API are different things. Frankly, in the Free Software community, everyone always assumed APIs themselves weren't copyrightable. The whole idea of a clean-room implementation of something centers around the idea that the APIs aren't copyrighted. GNU itself depends on the fact that Unix's APIs weren't copyrighted; just the code that AT&T wrote to implement Unix was.

    Those who oppose copyleft keep saying this decision eviscerates copyleft. I don't really see how it does. For all this time, Free Software advocates have always reimplemented proprietary APIs from scratch. Even copylefted projects like Wine depend on this, after all.

    But, be careful here. Many developers use the phrase API to mean different things. Implementations of an API are still copyrightable, just like they always have been. Distribution of other people's code that implements APIs still requires their permission. What isn't copyrightable is general concepts like “to make things work, you need a function that returns an int and takes a string as an argument, and that function must be called Foo”.
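
    To make that distinction concrete, here's a toy illustration of my own (not from the case itself):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # The "API" is merely the agreed-upon fact that a function named
    # foo() takes a string and returns an integer.  Under Alsup's
    # reasoning, that fact by itself isn't copyrightable.
    sub foo {
        my ($str) = @_;
        return length($str);  # ...but this body -- the implementation --
                              # is copyrightable expression, as it always was.
    }

    print foo("hello"), "\n";  # prints 5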

    Note: This post has been about the copyright issues in the case. I previously wrote a blog post when Oracle v. Google started, which was mostly about the software patent issues. I think the advice in there for Free Software developers is still pretty useful.

    Posted on Sunday 09 December 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2012-12-03: FOSDEM Legal & Policy Issues DevRoom

    Richard Fontana, Tom Marble, Karen Sandler, and I will reprise our roles as co-coordinators of the Legal and Policy Issues DevRoom for FOSDEM 2013. The CFP for the FOSDEM 2013 Legal & Policy Issues DevRoom is now available, and the deadline for submission is 21 December 2012, about 18 days from now.

    I want to put a very specific call out to a group of people who may not have considered submitting a talk to a track like this before. In particular, if you are a Free Software developer who has ideas about the policy/licensing decisions for your project, then you should consider submitting a proposal.

    The problem we have is that we often hear from lawyers, or licensing pundits like me, on these types of tracks. We all have a lot to say about issues of policy or licensing. But, it's the developers who lead these projects who know best what policy issues their projects face, and what is needed to address those issues.

    I also want to add something my graduate adviser once said to me: At the Master's level, it's sufficient for your thesis just to ask an important and complex question well. Only a PhD-level thesis has to propose answers to such questions. In my view, our track is at the Master's level: talks that ask complex licensing policy questions well, but don't necessarily have all the answers are just the kind of proposals we're seeking.

    Please share this CFP widely. We've got a two-day dev room, so there are plenty of slots, and while we can't guarantee acceptance of any specific talk, your job as submitters is to make the co-chairs' job difficult by giving us many excellent talks to choose among. We look forward to your submissions!

    Posted on Monday 03 December 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

November

  • 2012-11-29: If You've Got a Problem With Me, Please Contact Me!

    [ I usually write blog posts about high-minded software freedom concepts. This post isn't one of those; it's much more typical personal blog-fare, so please stop reading here if you're looking for a good software freedom essay; just move on to another one of my blog posts if that's what you want. ]

    I heard something really odd today. I was told that a relatively large group of people find me untrustworthy and refuse to work or collaborate with me because of it. I heard this second-hand, and I asked for more details, and the person who told me really doesn't want to be involved any further (and I don't blame that person, because the whole thing is admittedly rather silly, and I'd walk away too if it wasn't personally about me).

    There are people in the world I don't trust too, of course. I always tell them so to their face. I just operate my life in a really transparent way, so if I believe someone is my political opponent, I tell them so. I've written emails to people that say things like: Now that you work for Company Blah, I have to assume you're working against Free Software, because Company Blah has a history of doing so. If someone says something offensive to me, I tell them they've offended me. Sometimes, I clearly say that I am explicitly not forgiving the person, which thus makes it clear that there is a standing issue between us indefinitely. I do occasionally hold a grudge. (Frankly, I doubt people who claim they never hold a grudge, because everyone I've ever met seems to have a grudge against somebody for something.)

    I've been told that I'm not tactful. I always respond with: Of course, I'm not a tactful person. I've made a conscious choice not to change that behavior because, IMO, the other option is to leave people guessing about how you feel about their actions. If I think someone's action is wrong, I tell them I think it's wrong and why. If I think someone's action is good, I thank them for it and ask if I can help in the future. That's not a tactful way to live, I admit, but I believe it's nevertheless an honorable way to live. I'm grateful for the tactful people I know, because I realize they can accomplish things that I can't, but I also point out that there are things that the untactful can accomplish that the tactful can't. For example, only the tactless can point out emperors who wear no clothes.

    Meanwhile, the kinds of backroom (and seemingly tactful) politics that we sometimes see in Free Software have a way of descending into high school drama. I heard from Foo who heard from Bar that you won't be elected class president because nobody likes you. No, I can't say who Bar heard it from. No, I can't tell you exactly why. This immature behavior is, IMO, much worse than being tactless.

    I frankly think those who operate this way should be ashamed of themselves. I'm therefore putting out a public call (which is just a repeat of what I've said privately to people for years): if you have some problem with something I've done, or find my actions at any time untrustworthy, or wrong, or anything else negative, you're welcome to contact me. I get emails almost weekly anyway from people who have issues with something I've said on the Free as in Freedom audcast or somewhere else. I take the time to answer almost everyone who writes to me. I also always tell people that you can keep pinging me until I answer, and I won't be offended if you do. Sometimes, I might just write back with the reasons why I decided not to answer you. But, I'll always at least tell you my opinions on what you've said, even if it's just a tactless: I don't think what you're writing about is a major priority and I can't schedule the time to think about it further right now. I challenge others in the Free Software community to rise to greater transparency in their actions and statements.

    I want to be clear, BTW, there's a difference between being tactless and mean. I work really hard not to be mean; I sometimes fail, and I also work very hard to examine my actions to see if I've crossed the line. I send apologies to people when it becomes apparent that I've been not just tactless but also mean. I have to admit, though, there are plenty of mean people kicking around the Free Software world who owe a bunch of apologies (including some to me), but if you think I owe you an apology, I encourage you to write to me and ask for one. In my tactless style, I'll either give you an apology or tell you why I disagree about why you deserve one. :)

    Finally, I thought hard about whether to “name names” herein. It's surely obvious that a specific situation has inspired my words above; those who know what this situation is will realize immediately, and those who don't will sadly be left wondering what the hell is going on. Still, as disgusted as I am about the backroom politics I'm dealing with at the moment, I think public admonishment of the perpetrators here would cross the line from tactless to mean, so I decided not to.

    Posted on Thursday 29 November 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2012-11-22: Left Wondering Why VideoLAN Relicensed Some Code to LGPL

    I first met the original group of VLC developers at the Solutions GNU/Linux conference in 2001. I had been an employee of FSF for about a year at the time, and I recall they were excited to tell the FSF about the project, and very proud that they'd used FSF's premier and preferred license (at the time): GPLv2-or-later.

    What a difference a decade makes. I'm admittedly sad that VLC has (mostly) finished its process of relicensing some of its code under LGPLv2.1-or-later. While I have occasionally supported relicensing from GPL to LGPL, every situation is different and I think it should be analyzed carefully. In this case, I don't support VideoLAN's decision to relicense the libVLC code.

    The main reason to use the LGPL, as RMS put it eloquently long ago, is for situations where there are many competitors and developers would face serious difficulty gaining adoption of a strong-copylefted solution. Another, more recent reason that I've discovered to move to weaker licenses (and this was the case with Qt) is to normalize away some of the problems of proprietary relicensing. However, neither reason applies to libVLC.

    VLC is the most popular media player for desktop computers. I know many proprietary operating system users who love VLC; it's the first application they download to a new computer. It is the standard for desktop video viewing, and it does a wonderful job advocating the value of software freedom to people who live in a primarily proprietary software world.

    Meanwhile, the VideoLAN Organization's press statements have been quite vague on the reasons for the change, saying only that it was motivated to match the evolution of the video industry and to spread the VLC engine as a multi-platform open-source multimedia engine and library. The only argument that I've seen discussed heavily in public for relicensing is ostensibly to address the widely publicized incompatibility of copyleft licensing with various App Store agreements. Yet, those incompatibilities still exist with the LGPL or, indeed, any true copyleft license. Apple's terms are so strict that they make it absolutely impossible to comply with any copyleft license and Apple's terms at the same time. Other similar terms aren't much better, even Google's Play Store (its terms are incompatible with any copyleft license if the project has many copyright holders)0.

    So, I'm left baffled: does the VLC community actually believe the LGPL would solve that problem? (To be clear, I haven't seen any official statement where the VideoLAN Organization claims that relicensing will solve that issue, but others speculate that it's the reason.) Regardless, I don't think it's a problem worth solving. The specters of “Application Store” terms and conditions are something to fight against wholly, in an uncompromising way. The copyleft licensing incompatibilities with such terms are actually a signaling mechanism that shows us these stores are actively working against software freedom. I hope developers will reject deployment to these application stores entirely.

    Therefore, I'm left wondering what VLC seeks to do here. Do they want proprietary application interfaces that use their core libraries? If so, I'm left wondering why: VLC is already so popular that they could pull adopters toward software freedom by using the strong copyleft of GPL on libVLC. It seems to me they're making a bad trade-off to get only marginally more popular by allowing some proprietary derivatives. OTOH, I guess I should cut my losses on this point and be glad they stuck with any copyleft at all and didn't go all the way to a permissive license.

    Finally, I do think there's one valuable outcome shown by this relicensing effort (which Gerv pointed out first): it is possible to relicense a multi-copyright-held codebase. It's a lot of work, but it can be done. It appears to me that VLC did a responsible and reasonable job on that part, even if I disagree strongly with the need for such a job here in the first place.

    Update (2012-11-30): It's been pointed out to me that VLC has moved certain code from VLC into a library called libVLC, and that's the code that's been relicensed. I've made changes to the post above today to clarify that issue.


    0 If you want to hear more about my views and analysis of application store terms and conditions, please listen to the Application Stores Panel that I was on at FOSDEM 2012, which was broadcast on the audcast, Free as in Freedom.

    Posted on Thursday 22 November 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

September

  • 2012-09-17: GPL Violations Are Still Pretty Common, You Know?

    As I've written about before, I am always amazed when suddenly there is widespread interest in, excitement over, and focus on some particular GPL violation. I've spent most of my adult life working on copyleft compliance issues, so perhaps I've got an unusual perspective. It's just that I've seen lots of GPL violations every single day since the late 1990s. Copyleft compliance remains a regular part of my monthly work, and even though it's now only one task among many that I work on every day, I'm still never surprised nor shocked by any particular violation.

    When some GPL violation suddenly becomes a “big story”, it reminds me of celebrity divorces. There are, of course, every single day, hundreds (maybe even thousands) of couples facing the conclusion that their marriage has ended. It's a tragedy for their families, and they'll spend years recovering. The divorce impacts everyone they know: both their families, and all their friends, too. Everyone's life who touches the couple is impacted in some way or other.

    Of course, the same is true personally for celebrities when they divorce. The weird thing is, though, that people who don't even know these celebrities want to read about the divorce and know the details. It's exciting because the media tells us that we really want to know all the details and follow the drama every step of the way. It's disturbing that our culture sympathizes more with the pain of the rich and famous than the pain of our everyday neighbors.

    Like divorce, copyleft violations are very damaging, but failure to comply with copyleft licenses impacts three specific sets of people who directly touch the issue: the people whose copyrights are infringed, the people who infringed the copyrights, and the people who received infringing articles. Everyone else is just a spectator0.

    That said, my heart goes out to every user who is sold software that they can't study, improve and share. I'm doubly concerned when those people were legally entitled to those rights, and an infringer snatched them away by failing to comply with copyleft licenses. I also have great sympathy for the individual copyright holders who licensed their works under GPL, yet find many infringers ignoring the rather simple and reasonable requirements of GPL.

    But, I don't think gawking has any value. My biggest post-mortem complaint about SCO was not the FUD: that was obviously wrong, and we knew the community would prevail. Rather, it was the constant gawking, which took away time that we could have spent writing more Free Software and doing good work in the software freedom community. So, from time to time, I like to encourage everyone to avoid gawking. (Unless, of course, you're doing it with the GNU implementation of AWK. :)

    So, when you read GPL violation stories, even when they seem novel, remember that they're mundane tragedies. It's good someone's working on it, but they don't necessarily deserve the inordinate attention that they sometimes get.

    Update, morning of 2012-09-18: Someone asked me to state more clearly how I felt about Red Hat's GPL enforcement action against TwinPeaks1. I carefully avoided saying that above last night, but I suppose I'm going to get asked so often that I might as well say. Plus, the answer is actually quite simple: I simply don't know until the action completes. I only believe that GPL enforcement is morally legitimate if compliance with the GPL is paramount above all other goals. I have never seen Red Hat enforce the GPL before, so I don't know the pecking order of their goals. The proof of the pudding is in the eating, and the proof of the enforcement is whether compliance is obtained. In short, if I were the Magic 8-Ball of GPL compliance, I'd say “Reply hazy, ask again later”2.


    0 Obviously, there's a large negative impact that many seemingly “small” GPL violations will have, in aggregate, on the entire software freedom community. But, I'm examining the point narrowly in the main text above. For example, imagine if the only GPL violation in the history of the world were done by one company, on one individual's copyrights, and only one customer ever purchased the infringing product. While I'd still value pursuit of that violation (and I would even help such a copyright holder pursue the matter), even I'd have to readily admit that the impact on the software freedom community of that one violation is rather limited.

    Indeed, the larger policy impact of violations comes from the aggregate effect. That's why I've long argued that it's important to deal with the giant volume of GPL violations rather than focus on any one specific matter, even if that matter looks like a “big one”. It's just too easy sometimes to think one particular copyright holder, or one particular program, or one particular product deserves an inordinate amount of attention, but such undue focus is likely an outgrowth of familiarity breeding a bit too much contempt. I occasionally fall into that trap myself for a time, so it makes me sad when others do as well.


    1 What bugs me most is that I have yet to see a good Twin Peaks parody (à la Twin Beaks) of this whole court case. I suppose I'm just too old; I was in high school when the entire nation was obsessed with David Lynch's one hit TV series.

    2 cf15290cc2481dbeacef75a3b8a87014e056c256a1aa485e8684c8c5f4f77660

    Posted on Monday 17 September 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

July

  • 2012-07-23: I Received a 2012 O'Reilly Open Source Award

    Last Friday, 20 July 2012, I received an O'Reilly Open Source Award in appreciation for my decade of work in Free Software non-profit organizations, including my current daily work at the Software Freedom Conservancy, my work at the FSF (including starting FSF's associate membership program), and my work creating and defending copyleft licensing, including such things as inventing the idea behind the Affero clause, helping draft AGPLv3, and, more generally, enforcing copyleft.

    I'm very proud of all this work. My obsession with software freedom goes back far into my past, to when I downloaded my first copy of GNU Emacs from Usenet in 1991 and my first GNU/Linux distribution, SLS, in 1992, booting a copy of Linux 0.99pl12 for the first time on the first computer I ever owned.

    I honestly have written a lot less Free Software than I wanted to. I've made a patch here and there over the years to dozens of projects. I was a co-maintainer of the AGPL'd PokerSource system for a while, and I made various (mostly mixed-success) attempts to build a better virtual machine for Perl, a task the Parrot project now does much better than I ever did.

    Despite the fact that making better software was what enthralled me most, feeling the helplessness of supporting, using and writing proprietary software in my brief for-profit career convinced me that lack of adequate software freedom was the most dangerous social justice problem in the computing community. I furthermore realized that lots of people were ready and willing to write great Free Software, but that few wanted to do the (frankly more boring) work of running non-profit organizations to defend and advance software freedom. Thus, I devoted myself to helping FSF and Conservancy to be successful organizations that could assist in that regard. I'm privileged and proud to continue my service to both of these organizations.

    Being recognized for this work means a great deal to me. Awards have a special meaning for me, because financial success never really mattered much to me, but knowing that I've made a contribution to something greater than myself matters greatly. Receiving an award that indicates that I've succeeded in that regard invigorates me to do even more. So, at this moment of receiving this award, I'd like to thank all of you in the software freedom community who appreciate and support my work. It means a great deal to me that my work has made a positive impact.

    Posted on Monday 23 July 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

June

May

  • 2012-05-29: Conservancy's Coordinated Compliance Efforts

    As most readers might have guessed, my work at Software Freedom Conservancy has been so demanding in the last few months that I've been unable to blog, although I have kept up (along with my co-host Karen Sandler) releasing new episodes of the Free as in Freedom oggcast.

    Today, Karen and I released a special episode of FaiF (which is merely special because it was released during a week that we don't normally release a show). In it, Karen and I discuss in detail Conservancy's announcement today of its new coordinated compliance program that includes many copyright holders and projects.

    This new program is an outgrowth of the debate that happened over the last few months regarding Conservancy's GPL compliance efforts. Specifically, I noticed that, buried in the FUD over the last four months regarding GPL compliance, there was one key criticism that was valid and couldn't be ignored: Linux copyright holders should be involved in compliance actions on embedded systems. Linux is a central component of such work, and the BusyBox developers agreed wholeheartedly that having some Linux developers involved with compliance would be very helpful. Conservancy has addressed this issue by building a broad coalition of copyright holders in many different projects who seek to work on compliance with Conservancy, including not just Linux and BusyBox, but other projects as well.

    I'm looking forward in my day job to working collaboratively with copyright holders of many different projects to uphold the rights guaranteed by GPL. I'm also elated at the broad showing of support by other Conservancy projects. In addition to the primary group in the announcement (i.e., copyright holders in BusyBox, Samba and Linux), a total of seven other GPL'd and/or LGPL'd projects have chosen Conservancy to handle compliance efforts. It's clear that Conservancy's compliance efforts are widely supported by many projects.

    The funniest part about all this, though, is that while there has been no end of discussion of Conservancy's and others' compliance efforts this year, most Free Software users never actually have to deal with the details of compliance. Requirements of most copyleft licenses like GPL generally trigger on distribution of the software — particularly distribution of binaries. Since most users simply receive distribution of binaries, and run them locally on their own computer, rarely do they face complex issues of compliance. As the GPLv2 says, The act of running the Program is not restricted.

    Posted on Tuesday 29 May 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

February

  • 2012-02-11: Cutting Through The Anti-Copyleft Political Ruse

    I'd like to thank Harald Welte for his reasoned and clear blog post about GPL enforcement, which I hope helps to clear up some of the confusion that I also wrote about recently.

    Harald and I appear to agree that all enforcement actions should request, encourage, and pressure companies toward full FLOSS compliance. Our only disagreement, therefore, is on a minor strategy point. Specifically, Harald believes that the “reinstatement of rights lever” shouldn't be used to require compliance on all FLOSS licenses when resolving a violation matter, and I believe such use of that lever is acceptable in some cases. In other words, Harald and I have only a minor disagreement on how aggressively a specific legal tool should be used. (I'd also note that, given Harald's interpretation of German law, he never had the opportunity to even consider using that tool, whereas it's always been a default tool in the USA.) Anyway, other than this minor side point, Harald and I appear to be otherwise in full agreement on everything else regarding GPL enforcement.

    Specifically, one key place where Harald and I are in total agreement is: copyright holders who enforce should approve all enforcement strategies. In every GPL enforcement action that I've done in my life, I've always made sure of that. Indeed, even though I'm a very minor copyright holder in BusyBox (just a few patches), I nevertheless defer to Erik Andersen (who holds a plurality of the BusyBox copyrights) and Denys Vlasenko (who is the current BusyBox maintainer) about enforcement strategy for BusyBox.

    I hope that Harald's post helps to end this silly recent debate about GPL enforcement. I think the overflowing comment pages can be summarized quite succinctly: some people don't like copyleft and don't want it enforced. Others disagree, and want to enforce. I've written before that if you support copyleft, the only logically consistent position is to also support enforcement. The real disagreement here, thus, is one about whether or not people like copyleft: that's an age-old debate that we just had again.

    However, the anti-copyleft side used a more sophisticated political strategy this time. Specifically, copyleft opponents are attempting to scapegoat minor strategy disagreements among those who do GPL enforcement. I'm grateful to Harald for cutting through that ruse. Those of us that support copyleft may have minor disagreements about enforcement strategy, but we all support GPL enforcement and want to see it continue. Copyleft opponents will of course use political maneuvering to portray such minor disagreements as serious policy questions. Copyleft opponents just want to distract the debate away from the only policy question that matters: Is copyleft a good force in the world for software freedom? I say yes, and thus I'm going to keep enforcing it, until there are no developers left who want to enforce it.

    Posted on Saturday 11 February 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2012-02-01: Some Basic Thoughts on GPL Enforcement

    I've had the interesting pleasure, over the last 36 hours, of watching people debate something that's been a major part of my life's work for the last thirteen years. I'm admittedly proud of myself for entirely resisting the urge to dive into the comment threads, and I don't think it would be all that useful to do so. Mostly, I believe my work stands on its own, and people can make their judgments and disagree if they like (as a few have) or speak out about how they support it (as even more did — at least by my confirmation-biased count, anyway :).

    I was concerned, however, that some of the classic misconceptions about GPL enforcement were coming up yet again. I generally feel that I give so many talks (including releasing one as an oggcast) that everyone must by now know the detailed reasons why GPL enforcement is done the way it is, and how a plan for non-profit GPL enforcement is executed.

    But, the recent discussion threads show otherwise. So, over on Conservancy's blog, I've written a basic, first-principles summary of my GPL enforcement philosophy and I've also posted a few comments on the BusyBox mailing list thread, too.

    I may have more to say about this later, but that's it for now, I think.

    Posted on Wednesday 01 February 2012 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

January

2011

December

  • 2011-12-16: FaiFCast Release, and Submit to FOSDEM Legal & Policy Issues DevRoom

    Today Karen Sandler and I released Episode 0x1E of the Free as in Freedom oggcast (available in ogg and mp3 formats). There are two important things discussed on that oggcast that I want to draw your attention to:

    Submit a proposal for the Legal & Policy Issues DevRoom CFP

    Tom Marble, Richard Fontana, Karen Sandler, and I are coordinating the Legal and Policy Issues DevRoom at FOSDEM 2012. The Call for Participation for the DevRoom is now available. I'd like to ask anyone reading this blog post who has an interest in policy and/or legal issues related to software freedom to submit a talk by Friday 30 December 2011, by emailing <fosdem-legal@faif.us>.

    We only have about six slots for speakers (it's a one-day DevRoom), so we won't be able to accept all proposals. I just wanted to let everyone know that so you don't flame me if you submit and get rejected. Meanwhile, note that our goal is to avoid the “this is what copyrights, trademarks and patents are” introductory talks. Our focus is on complex issues for those already informed about the basics. We really felt that the level of discourse about legal and policy issues at software freedom conferences needed to rise.

    There are, of course, plenty of secret membership clubs0, even some with their own private conferences, where these sorts of important issues are discussed. I personally seek to move high-level policy discussion and debate out of the secret “old-boys” club backrooms and into a public space where the entire software freedom community can openly discuss important legal and policy questions. I hope this DevRoom is a first step in that direction!

    Issues & Questions List for the Software Freedom Non-Profits Debate

    I've made reference recently to debates about the value of non-profit organizations for software freedom projects. In FaiFCast 0x1E, Karen and I discuss the debate in depth. As part of that, as you'll see in the show notes, I've made a list of issues that I think were thoroughly conflated during the recent debates. I can't spare the time to opine in detail on them right now (although Karen and I do a bit of that in the oggcast itself), but I did want to copy the list over here in my blog, mainly to list them out as issues worth thinking about in a software freedom non-profit:

    • Should a non-profit home decide what technical infrastructure is used for a software freedom project? And if so, what should it be?
    • If the non-profit doesn't provide technological services, should non-profits allow their projects to rely on for-profits for technological or other services?
    • Should a non-profit home set political and social positions that must be followed by the projects? If so, how strictly should they be enforced?
    • Should copyrights be held by the non-profit home of the project, or with the developers, or a mix of the two?
    • Should the non-profit dictate licensing requirements on the project? If so, how many licenses and which licenses are acceptable?
    • Should a non-profit dictate strict copyright provenance requirements on their projects? If not, should the non-profit at least provide guidelines and recommendations?

    This list of questions is far from exhaustive, but I think it's a pretty good start.


    0 Admittedly, I've got a proverbial axe to grind about these secretive membership-only groups, since, for nearly all of them, I'm persona non grata. My frustration reached a crescendo when, during a session at LinuxCon Europe recently, I asked for the criteria to join one such private legal-issues discussion group, and I was told the criteria themselves were secret. I pointed out to the coordinators of the forum that this wasn't a particularly Free Software friendly way to run a discussion group, and they simply changed the subject. My hope is that this FOSDEM DevRoom can be a catalyst to start a new discussion forum for legal and policy issues related to software freedom that doesn't have this problem.

    BTW, just to clarify: I'm not talking about FLOSS Foundations as one of these secretive, members-only clubs. While the FLOSS Foundations main mailing list is indeed invite-only, it's very easy to join, and the only requirement is: “if you repost emails from this list publicly, you'll probably be taken off the mailing list”. There is no “Chatham House Rule” nor any other silly, unenforceable, spend-an-inordinate-amount-of-time-remembering-how-to-follow rule in place for FLOSS Foundations, but such silly rulesets are now common with these other secretive legal-issues meeting groups.

    Finally, I know I haven't named publicly the members-only clubs I'm talking about here, and that's by design. This is the first time I've mentioned them at all in my blog, and my hope is that they'll change their behaviors soon. I don't want to publicly shame them by name until I give them a bit more time to change their behaviors. Also, I don't want to inadvertently promote these fora either, since IMO their very structure is flawed and community-unfriendly.

    Update: Some have claimed incorrectly that the text in the footnote above somehow indicates my unwillingness to follow the Chatham House Rule (CHR). I refuted that on identi.ca, noting that the text above doesn't say that, and those who think it does have simply misunderstood. My primary point (which I'll now state even more explicitly) is that CHR is difficult to follow, particularly when it is mis-applied to a mailing list. CHR is designed for meetings, which have a clear start time and a finish time. Mailing lists aren't meetings, so the behavior of CHR when applied to a mailing list is often undefined.

    I should furthermore note that people who have lived under CHR for a series of meetings have concerns similar to mine. For example, Allison Randal, who worked under CHR on Project Harmony, noted:

    The group decided to adopt Chatham House Rule for our discussions. … At first glance it seems quite sensible: encourage open participation by being careful about what you share publicly. But, after almost a year of working under it, I have to say I’m not a big fan. It’s really quite awkward sometimes figuring out what you can and can’t say publicly. I’m trying to follow it in this post, but I’ve probably missed in spots. The simple rule is tricky to apply.

    I agree with Allison.

    Posted on Friday 16 December 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

November

  • 2011-11-28: What's a Free Software Non-Profit For?

    Over on Conservancy's blog, I just published a blog post entitled What's a Free Software Non-Profit For?. It responds in part to what was written last week about non-profit homes for Free Software projects.

    Posted on Monday 28 November 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-11-24: No, You Won't See Me on Twitter, Facebook, Linkedin, Google Plus, Google Hangouts, nor Skype

    Most folks outside of technology fields and the software freedom movement can't grok why I'm not on Facebook. Facebook's marketing has reached most of the USA's non-technical Internet users. On the upside, Facebook gave the masses access to something akin to blogging. But, as with most technology controlled by for-profit companies, Facebook is proprietary software. Facebook, as a software application, is a mix of server-side software that no one besides Facebook employees can study, modify and share, and, on the client side, an obfuscated, proprietary Javascript application, which is distributed to the user's browser when they access facebook.com. Thus, in my view, using Facebook is no different than installing a proprietary binary program on my GNU/Linux desktop.

    Most of the press critical of Facebook has focused on privacy, data mining of users' data on behalf of advertisers, and other types of data autonomy concerns. Such concerns remain incredibly important too. Nevertheless, since the advent of the software freedom community's concerns about network services a few years ago, I've maintained this simple principle, which I still find correct: while merely liberating all software for an online application is not a sufficient condition to treat the online users well, the liberation of the software is certainly a necessary condition for the freedom of the users. Freely releasing all code for the online application is the first step toward freedom, autonomy, and privacy for its users. Therefore, I certainly don't give in and run proprietary software on my FaiF desktops myself. I simply refuse to use Facebook.

    Meanwhile, when Google Plus was announced, I didn't see any fundamental difference from Facebook. Of course, there are differences on the subtle edges: for example, I do expect that Google will respect data portability more than Facebook. However, I expect data mining for advertisers' behalf will be roughly the same, although Google will likely be more subtle with advertising tie-in than Facebook, and thus users will not notice it as much.

    But, since I'm firstly a software freedom activist, on the primary issue of my concern, there is absolutely no difference between Facebook and Google Plus. Google Plus' software is a mix of server-side trade-secret software that only Google employees can study, share, and modify, and a client-side proprietary Javascript application downloaded into the users' browsers when they access the website.

    Yet, in a matter of just a few months, much of the online conversation in the software freedom community has moved to Google Plus, and I've heard very few people lament this situation. It's not that I believe we'll succeed against proprietary software tomorrow, and I understand fully that (unlike me) most people in the software freedom community have important reasons to interact regularly with those outside of our community. It's not that I chastise software freedom developers and activists for maintaining a minimal presence on these services to interact with those who aren't committed to our cause.

    My actual complaint here is that Google Plus is becoming the default location for discussion of software freedom issues. I've noticed because I've recently discovered that I've missed a lot of community conversations that are only occurring on Google Plus. (I've similarly noticed that many of my Free Software contacts spam me to join Linkedin, so I assume something similar is occurring there as well.)

    What's more, I've received more pressure than ever before to sign up for not only Google Plus, but for Twitter, Linkedin, Google Hangout, Skype and other socially-oriented online communication services. Indeed, just in the last ten days, I've had three different software freedom development projects and/or organizations request that I sign up for a proprietary online communication service merely to attend a meeting or conference call. (Update on 2013-02-16: I still get such requests on a monthly basis.) Of course, I refused, but I've not felt peer pressure this strong since I was a teenager.

    Indeed, the advent of proprietary social networking software adds a new challenge to those of us who want to stand firm and resist proprietary software. As adoption of services like Facebook, Twitter, Google Plus, Skype, Linkedin and Google Hangouts increases, those of us who resist using proprietary software will come under ever-increasing peer pressure. Disturbingly, I've found that peer pressure comes not only from folks outside our community, but also from those who have, for years, otherwise been supporters of the software freedom movement.

    When I point out that I use only Free Software, some respond that Skype, Facebook, and Google Plus are convenient and do things that can't be done easily with Free Software currently. I don't argue that point. It's easy to resist Microsoft Windows, or Internet Explorer, or any other proprietary software that is substandard and works poorly. But proprietary software developers aren't necessarily stupid, nor untalented. In fact, proprietary software developers are highly paid to write easy-to-use, beautiful and enticing software (cross-reference Apple, BTW). The challenge the software freedom community faces is not merely to provide alternatives to the worst proprietary software, but to also replace the most enticing proprietary software available. Yet, if FaiF Software developers settle into being users of that enticing proprietary software, the key inspiration for development disappears.

    The best motivator to write great new software is to solve a problem that's not yet solved. To inspire ourselves as FaiF Software developers, we can't complacently settle into use of proprietary software applications as part of our daily workflow. That's why you won't find me on Google Plus, Google Hangout, Facebook, Skype, Linkedin, Twitter or any other proprietary software network service. You can phone me via SIP, you can read my blog and identi.ca feed, and you can chat with me on IRC and XMPP, and those are the only places that I'll be until there are Free Software replacements for those other services. I sometimes kid myself into believing that I'm leading by example, but sadly few in the software freedom community seem to be following.

    Posted on Thursday 24 November 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-11-13: Just Ignore Him; He'll Go Away Eventually.

    One of my favorite verbal exchanges in an episode of The West Wing occurs in S03E08, The Women of Qumar. In the story, after President Bartlet said at a fundraiser: Everything has risks. Your car can drive into a lake and your seatbelt jams, but no one's saying don't wear your seat belt, someone had a car accident while not wearing a seatbelt and filed a lawsuit naming the President as a defendant. Sam, the Deputy Communications Director, thinks the White House should respond preemptively before the story. Toby, the Communication Director, instead ignores Sam and then has this wonderfully deadpan exchange with the President:

    BARTLET
    [Toby,] Come with me for a second, would you?
    TOBY
    Sir, it's possible you're going to hear some stuff about seatbelts today. I urge you to ignore it.
    BARTLET
    No problem. [changes topic] Are you straightening things out with the Smithsonian?

    I remember when I first watched this episode in late 2001. It expressed to me a cogent and concise fact of press relations: someone may be out there trying to get attention for themselves on a topic related to you with some sophistic argument, but you should sometimes just ignore it.

    With that, I say: Dear readers of my blog, you may have heard some stuff about Edward Naughton again this week. I urge you to ignore it.

    I hope you'll all walk in the shoes of President Bartlet and respond with a “No problem” and change the topic. If you really want to follow this story, just read what I've said before on it; nothing has changed.

    Meanwhile, while Naughton seems to be happy to selectively quote me to support his sophistry, he still hasn't gotten in touch with me to help actually enforce the GPL. It's obvious he doesn't care in the least about the GPL; he just wants to use it inappropriately to attack Android/Linux and Google. There are criticisms that Google and Android/Linux deserve, but none of them relate to the topic of GPL violations.

    Posted on Sunday 13 November 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-11-11: Last Four FaiF Episodes

    Those of you that follow my blog have probably wondered where I've been. Quite frankly, there is just so much work going on at Conservancy that I have had almost no time to do anything but Conservancy work, eat and sleep. My output on this blog and on identi.ca surely shows that.

    The one thing that I've kept up with is the oggcast, Free as in Freedom, that I co-host with Karen Sandler and which is produced by Dan Lynch.

    Since I last made a blog post here, Karen, Dan and I released four oggcasts. I'll discuss them here in reverse chronological order:

    In Episode 0x1C, which was released today, we published Karen's interview with Adam Dingle of Yorba. IMO (which is undoubtedly biased), this episode is an important one since it relates to the issues of non-profit organizations in our community who are waiting in the 501(c)(3) application queue. This is a detailed and specific follow-up to the issues that Karen and I discussed on FaiF's Episode 0x13.

    In Episode 0x1B, Karen and I discuss in some detail the work that we've been up to. Both Karen and I are full-time Executive Directors, and the amount of work that job takes always seems insurmountable. After we recorded the episode, though, I somewhat embarrassingly remembered the Bush/Kerry debate where George W. Bush kept saying his job as president is hard work. It's certainly annoying when a chief executive goes on and on about how hard his job is, so I apologize if I did a little too much of that in Episode 0x1B.

    In Episode 0x1A, Karen and I discussed in detail Steve Jobs' death and the various news coverage about it. The subject is a bit old news now that I write this, but I'm glad we did that episode, since it gave me an opportunity to say everything I wanted to say about Steve Jobs' life and death.

    In Episode 0x19, we played Karen's interview with Jos Poortvliet, discussed the identi.ca upgrade, and Karen discussed GNOME 3.2.

    My plan is to at least keep the FaiF oggcast going, and I'm even bugging Fontana that he and I should start an oggcast too. Beyond that, I can't necessarily commit to any other activities outside of that (and my job at Conservancy and volunteer duties at FSF). BTW, I recently attended a few conferences (both LinuxCon Europe and the Summer of Code Mentor Summit). At both of them, multiple folks asked me why I haven't been blogging more. I appreciate people's interest in what I'm writing, but at the moment, my day-job at Conservancy and volunteer work at FSF have had to take absolute priority.

    Based on the ebb and flow (yes, that's the first time I've actually used that phrase on my ebb.org blog :) of the Free Software community that I've gotten used to over the last decade and a half, I usually find that things slow down in mid-December until mid-January. Since Conservancy's work is based on the needs of its Free Software projects, I'll likely be able to return to a “normal” 50-hour work week (instead of the 60-70 I've been doing lately) in December. Thus, I'll probably try to write some queued blog posts then, to slowly push out over the few months that follow.

    Finally, I want to mention that Conservancy has a donation appeal up on its website. I hope you'll give generously to support Conservancy's work. On that, I'll just briefly mention my “hard work” again, to assure you that donors to Conservancy definitely get their money's worth when I'm on the job. While I'm on the topic, I also thank everyone who has donated to FSF and Conservancy over the years. I've been fortunate to have worked full-time at both organizations, and I appreciate the community that has supported all that work over the years.

    Posted on Friday 11 November 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

September

August

  • 2011-08-21: Desktop Summit 2011

    I realize nearly ten days after the end of a conference is a bit late to blog about it. However, I needed some time to recover my usual workflow, having attended two conferences almost back-to-back: OSCON 2011 and Desktop Summit. (The strain of the back-to-back conferences, BTW, made it impossible for me to attend LinuxCon North America 2011, although I'll be at LinuxCon Europe. I hope next year's summer conference schedule is not so tight.)

    This was my first Desktop Summit, as I was unable to attend the first one in Gran Canaria two years ago. I must admit, while it might be a bit controversial to say so, that I felt the conference was still like two co-located conferences rather than one conference. I got a chance to speak to my KDE colleagues about various things, but I ended up mostly attending GNOME talks and therefore felt more like I was at GUADEC than at a Desktop Summit for most of the time.

    The big exception to that, however, was in fact the primary reason I was at Desktop Summit this year: to participate in a panel discussion with Mark Shuttleworth and Michael Meeks (who gave the panel a quick one-sentence summary on his blog). That was a plenary session, and the room was filled with KDE and GNOME developers alike, all of whom seemed very interested in the issue.

    [ Photo: The CAA/CLA panel discussion at Desktop Summit 2011. ]

    The panel format was slightly frustrating — primarily due to Mark's insistence that we all make very long opening statements — although Karen Sandler nevertheless did a good job moderating it and framing the discussion.

    I get the impression most of the audience was already pretty well informed about all of our positions, although I think I shocked some by finally saying clearly in a public forum (other than identi.ca) that I have been lobbying FSF to make copyright assignment for FSF-assigned projects optional rather than mandatory. Nevertheless, we were cast well into our three roles: Mark, who wants broad licensing control over projects his company sponsors so he can control the assets (and possibly sell them); Michael, who has faced so many troubles in the OpenOffice.org/LibreOffice debacle that he believes inbound=outbound can be The Only Way; and me, who believes that copyright assignment is useful for non-profits willing to promise to do the public good and enforce the GPL, but that it is otherwise a Bad Thing.

    Lydia tells me that the videos will be available eventually from Desktop Summit, and I'll update this blog post when they are so folks can watch the panel. I encourage everyone concerned about the issue of rights transfers from individual developers to entities (be they via copyright assignment or other broad CLA means) to watch the video once it's available. For the moment, Jake Edge's LWN article about the panel is a pretty good summary.

    My favorite moment of the panel, though, was when Shuttleworth claimed he was but a distant observer of Project Harmony. Karen, as moderator, quickly pointed out that he was billed as Project Harmony's originator in the panel materials. It's disturbing that Shuttleworth thinks he can get away with such a claim: it's a matter of public record that Amanda Brock (Canonical, Ltd.'s General Counsel) initiated Project Harmony and led it for most of its early drafts, and that Canonical, Ltd. then paid Mark Radcliffe (a lawyer who represents companies that violate the GPL) to finish the drafting. I suppose Shuttleworth's claim is narrowly true (if misleading), since his personal involvement as an individual was only tangential, but his money and his staff were clearly central: even now, it's led by his employee, Allison Randal. If you run the company that runs a project, it's your project: after all, doesn't that fit clearly with Shuttleworth's suppositions about why he should be entitled to be the recipient of copyright assignments and broad CLAs in the first place?

    The rest of my time at Desktop Summit was more as an attendee than a speaker. Since I'm not a desktop or GUI developer by any means, I mostly went to talks and learned what others had to teach. I was delighted, however, that no fewer than six people came up to me and said they really liked this blog. It's always good to be told that something you put a lot of volunteer work into is valuable to at least a few people, and fortunately everyone on the Internet is famous to at least six people. :)

    Sponsored by the GNOME Foundation!

    Meanwhile, I want to thank the GNOME Foundation for sponsoring my trip to Desktop Summit 2011, as they did last year for GUADEC 2010. Given my own work and background, I'm very appreciative of a non-profit with limited resources providing travel funding for conferences. It's a big expense, and I'm thankful that the GNOME Foundation has funded my trips to their annual conference.

    BTW, while we await the videos from Desktop Summit, there's some “proof” you can see that I attended Desktop Summit, as I appear in the group photo, although you'll need to view the hi-res version, scroll to the lower right of the image, and find me. I'm in the second/third (depending on how you count) row back, 2-3 from the right, and two to the left of Lydia Pintscher.

    Finally, I did my best to live-dent from Desktop Summit 2011. That might be of interest to some as well, for example, if you want to dig back and see what folks said in some of the talks I attended. There were also two threads after the panel that may be of interest.

    Posted on Sunday 21 August 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-08-18: Will Nokia Ever Realize Open Source Is Not a Panacea?

    I was pretty sure there was something wrong with the whole thing in fall of 2009, when they first asked me. A Nokia employee contacted me to ask if I'd be willing to be a director of the Symbian Foundation (or so I thought that's what they were asking — read on). I wrote them a thoughtful response explaining my then-current concerns about Symbian:

    • the poor choice of the Eclipse Public License for the eventual code,
    • the fact that Symbian couldn't be built using only Free Software tools, and
    • that the Symbian source code that had been released thus far didn't actually run on any existing phones.

    I nevertheless offered to serve as a director for one year, saying I would resign at that point if the problems I'd listed weren't resolved.

    I figured that was quite a laundry list. I also figured that they probably wouldn't be interested anyway once they saw my list. Amusingly, they still were. But then, I realized what was really going on.

    In response to my laundry list, I got back a rather disturbing response that showed a confusion in my understanding. I wasn't being invited to join the board of the Symbian Foundation. They had asked me instead to serve as a Director of a small USA entity (that they heralded as Symbian DevCo) that would then be permitted one Representative of the Symbian Foundation itself, which was, in turn, a trade association controlled by dozens of proprietary software companies.

    In fact, this Nokia employee said that they planned to channel all individual developers toward this Symbian DevCo in the USA, and that would be the only voice these developers would have in the direction of Symbian. It would be one tiny voice against the dozens of proprietary software companies that controlled the real Symbian Foundation, a trade association.

    Anyone who has worked in the non-profit sector, or even contributed to any real software freedom project, can see what's deeply wrong there. However, my response wasn't to refuse. I wrote back and said clearly why this was failing completely to create a software freedom community that could survive vibrantly. I pointed out the way the Linux community is structured: the Linux Foundation is a trade association for companies — and, while they do fund Linus' salary, they don't control his activities, nor those of any other developer. Meanwhile, the individual Linux developers have all the real authority: from community structure, to licensing, to holding copyrights, to technical decision-making. I pointed out that if they wanted Symbian to succeed, they should emulate Linux as much as they could. I suggested Nokia immediately change the whole structure to put developers in charge of the project, and create a path for Symbian DevCo to ultimately be the primary organization in charge of the codebase, while Symbian Foundation could remain the trade association, roughly akin to the Linux Foundation. I offered to help them do that.

    You might guess that I never got a reply to that email. It was thus no surprise to me in the least what happened to Symbian after that.

    So, within 17 months of Symbian Foundation's inquiry asking me to help run Symbian DevCo, the (Open Source) Symbian project was canceled entirely, the codebase was again proprietary (with a few of the old code dumps floating around on other sites), and the Symbian Foundation consists only of a single webpage filled with double-speak.

    Of course, even if Nokia had tried its hardest to build an actual software freedom community, Symbian still had a good chance of failing, as I pointed out in March 2010. But, if Nokia had actually tried to release control and let developers have some authority, Symbian might have had a fighting chance as Free Software. As it turned out, Nokia threw some code over the wall, gave all the power to decide what happens to a bunch of proprietary software companies, and then hung it all out to dry. It's a shining example of how to liberate software in a way that will guarantee its deprecation in short order.

    Of course, we now know that during all this time, Nokia was busy preparing a backroom deal that would end its always-burgeoning-but-never-complete affiliation with software freedom by making a deal with Microsoft to control the future of Nokia. It's a foolish decision for software freedom; whether it's a good business decision surely isn't for me to judge. (After all, I haven't worked in the for-profit sector for fifteen years for a reason.)

    It's true that I've always given a hard time to Maemo (and to MeeGo as well). Those involved from inside Nokia spent the last six months telling me that MeeGo is run by completely different people at Nokia, and Nokia did recently launch yet another MeeGo based product. I've meanwhile gotten the impression that Nokia is one of those companies whose executives are more like wealthy Romans who like to pit their champions against each other in the arena to see who wins; Nokia's various divisions appear to be in constant competition with each other. I imagine someone running the place has read too much Ayn Rand.

    Of course, it now seems that MeeGo hasn't, in Nokia's view, “survived as the fittest”. I learned today (thanks to jwildeboer) that, in Elop's words, there is no returning to MeeGo, even if the N9 turns out to be a hit. Nokia's commitment to Maemo/MeeGo, while it did last at least four years or so, is now gone too, as they begin their march to Microsoft's funeral dirge. Yet another FLOSS project Nokia got serious about, coordinated poorly, and ultimately gave up on.

    Considering Nokia's bad trajectory led me to think about how Open Source companies tend to succeed. I've noticed something interesting, which I've confirmed by talking to a lot of employees of successful Open Source companies. The successful ones — those that get something useful done for software freedom while also making some cash (i.e., the true promise of Open Source) — let the developers run the software projects themselves. Such companies don't relegate the developers into a small non-profit that has to lobby dozens of proprietary software companies to actually make an impact. They don't throw code over the wall — rather, they fund developers who make their own decisions about what to do in the software. Ultimately, smart Open Source companies treat software freedom development like R&D should be treated: fund it, see what comes out, and try to build a business model after something's already working. Companies like Nokia, by contrast, constantly put their carts in front of all the horses and wonder why those horses whinny loudly at them but don't write any code.

    Open Source slowly became a fad during the DotCom era, and it strangely remains such. A lot of companies follow fads, particularly when they can't figure out what else to do. The fad becomes a quick-fix solution. Of course, for those of us who started as volunteers and enthusiasts in 1991 or earlier, software freedom isn't some new attraction at P. T. Barnum's circus. It's a community where we belong and collaborate to improve society. Companies are welcome to join us for the ride, but only if they put developers and users in charge.

    Meanwhile, my personal postscript to my old conversation with Nokia arrived in my inbox late in May 2011. I received an extremely vague email from a lawyer at Nokia. She wanted really badly to figure out how to quickly dump some software project — and she wouldn't tell me what it was — into the Software Freedom Conservancy. Of course, I'm sure this lawyer knows nothing about the history of the Symbian project wooing me for directorship of Symbian DevCo, nor all the other history of why “throwing code over the wall” into a non-profit is rarely known to work, particularly for Nokia. I sent her a response explaining all the problems with her request, and, true to Nokia's style, she never even bothered to respond or thank me for my time.

    I can't wait to see what project Nokia dumps over the wall next, and then, in another 17 months (or if they really want to lead us on, four years), decides to proprietarize or abandon it because, they'll say, this open-sourcing thing just doesn't work. Yet, so many companies make money with it. The short answer is: Nokia, you keep doing it wrong!

    Update (2011-08-24): Boudewijn Rempt argued another side of this question. He says the Calligra suite is a counterexample of Nokia getting a FLOSS project right. I don't know enough about Calligra to agree or disagree.

    Posted on Thursday 18 August 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-08-15: If Only They'd Actually Help Enforce GPL

    Unfortunately, Edward Naughton is at it again, and everyone keeps emailing me about it, including Brian Proffitt, who quoted my email response to him in his article this morning.

    As I said in my response to Brian, I've written before on this issue and I have nothing much more to add. Naughton has not identified a GPL violation that actually occurred, at least with respect to Google's own distribution of Android, and he has completely ignored my public call for him to make such a formal report to the copyright holders of GPL violations for which he has evidence (if any).

    Jon Corbet of LWN has also picked up the story, mostly pontificating on what it would mean if the loss of distribution rights under GPLv2 § 4 were used nefariously instead of in the honorable way it has hitherto been used to defend software freedom. I commented on the LWN post.

    I think Jon's right to raise that specific concern, and that's a good reason for projects to upgrade to GPLv3. But, nevertheless, this whole thing is not even relevant until someone actually documents a real GPL violation that has occurred. As I previously mentioned, I'm aware of plenty of documented violations (thanks to Matthew Garrett), and I'd love it if more people picked up and acted on these violations to enforce the GPL. I again tell Naughton: if you are seriously concerned about enforcing the GPL, then volunteer your time as a lawyer to help. But we all know that's not really what interests you: rather, your job is to spread FUD.

    Posted on Monday 15 August 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-08-05: You're Living in the Past, Dude!

    At the 2000 Usenix Technical Conference (which was the primary “generalist” conference for Free Software developers in those days), I met Miguel De Icaza for the third time in my life. In those days, he'd just started Helix Code (anyone else remember what Ximian used to be called?) and was still president of the GNOME Foundation. To give you some context: Bonobo was a centerpiece of new and active GNOME development then.

    Out of curiosity and a little excitement about GNOME, I asked Miguel if he could show me how to get GNOME 1.2 running on my laptop. Miguel agreed to help, quickly taking control of the keyboard and frantically typing and editing my sources.list.

    Debian potato was the just-becoming-stable release in those days, and of course, I was still running potato (this was before my experiment with running things from testing began).

    After a few minutes of hacking on my keyboard, Miguel realized that I wasn't running woody, Debian's development release. Miguel looked at me, and said: You aren't running woody; I can't make GNOME run on this thing. There's nothing I can do for you. You're living in the past, dude! (Those who know Miguel IRL can easily imagine how he'd sound saying this.)

    So, I've told that story many times for the last eleven years. I usually tell it for laughs, as it seems an equal-opportunity humorous anecdote. It pokes some fun at Miguel, at me, at Debian for its release cycle, and also at GNOME (which has, since its inception, tried to never live in the past, dude).

    Fact is, though, I rather like living in the past, at least with regard to my computer setup. By way of desktop GUIs, I used twm well into the late 1990s, and used fvwm well into the early 2000s. I switched to sawfish (then sawmill) during the relatively brief period when GNOME used it as its default window manager. When Metacity became the default, I never switched because I'd configured sawfish so heavily.

    In fact, the only actual parts of GNOME 2 that I ever used on a daily basis have been (a) a small unobtrusive panel, (b) dbus (and its related services), and (c) the Network Manager applet. When GNOME 3 was released, I had no plans to switch to it, and frankly I still don't.

    I'm not embarrassed that I consistently live in the past; it's sort of the point. GNOME 3 isn't for me; it's for people who want their desktop to operate in new and interesting ways. Indeed, it's (in many ways) for the people who are tempted to run OSX because its desktop is different than the usual, traditional, “desktop metaphor” experience that had been standard since the mid-1990s.

    GNOME 3 just wasn't designed with old-school Unix hackers in mind. Those of us who don't believe a computer is any good until we see a command line aren't going to be the early adopters who embrace GNOME 3. For my part, I'll actually try to avoid it as long as possible, continuing to run my little GNOME 2 panel and sawfish until, slowly, GNOME 3 seeps into my workflow the way the GNOME 2 panel and sawfish did when they were current, state-of-the-art GNOME technologies.

    I hope that other old-school geeks will see this distinction: we're past the era when every Free Software project is targeted at us hackers specifically. Failing to notice this will cause us to ignore the deeper problem software freedom faces. GNOME Foundation's Executive Director (and my good friend), Karen Sandler, pointed out in her OSCON keynote something that's bothered her and me for years: the majority of computers at OSCON are Apple hardware running OSX. (In fact, I even noticed Simon Phipps has one now!) That's the world we're living in now. Users who actually know about “Open Source” are now regularly enticed to give up software freedom for shiny things.

    Yes, as you just read, I can snicker as quickly as any old-school command-line geek (just as Linus Torvalds did earlier this week) at the pointlessness of wobbly windows, desktop cubes, and zoom effects. I could also easily give a treatise on how I can get work done faster, better, and smarter because I have the technology of years ago that makes every keystroke matter.

    Notwithstanding that, I'd even love to have the same versatility with GNOME 3 that I have with sawfish. And, if it turns out GNOME 3's embedded Javascript engine will give me the same hackability I prefer with sawfish, I'll adopt GNOME 3 happily. But, no matter what, I'll always be living in the past, because like every other human, I hate changing anything, unless it's strictly necessary or it's my own creation and derivation. Humans are like that: no matter who you are, if it wasn't your idea, you're always slow to adopt something new and change old habits.

    Nevertheless, there's actually nothing wrong with living in the past — I quite like it myself. However, I'd suggest that care be taken not to admonish those who make a go at creating the future. (At the risk of a conclusion that sounds like a time-travel joke,) don't forget that their future will eventually become that very past where I and others would prefer to live.

    Posted on Friday 05 August 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

July

  • 2011-07-29: GNU Emacs Developers Will Fix It; Please Calm Down

    fabsh was the first to point me at a slashdot story that is (like most slashdot stories) sensationalized.

    The story, IMO, makes the usual mistake of treating a GPL violation as an earth-shattering disaster that threatens the future of software freedom. GPL violations vary in the degree of problems they create; most aren't earth-shattering.

    Specifically, the slashdot story points to a thread on the emacs-devel mailing list about a failure to include some needed bison grammar in the complete and corresponding sources for Emacs in a few Emacs releases in the last year or two. As you can see there, RMS quickly responded to call it a grave problem … [both] legally and ethically, and he's asked the Emacs developers to help clear up the problem quickly.

    I wrote nearly two years ago that one shouldn't jump to conclusions and start condemning those who violate the GPL without investigating further first. Most GPL violations are mistakes, as this situation clearly was, and I suspect it will be resolved within a few news cycles of this blog post.

    And please, while we all see the snicker-inducing irony of the FSF and its GNU project violating the GPL, keep in mind that this is what I've typically called a “community violation”. It's a non-profit volunteer project that made an honest mistake and is resolving it quickly. Meanwhile, I've a list of hundreds of companies who are actively violating the GPL, ignoring users who requested source, and apparently have no interest in doing the right thing until I open an enforcement action against them. So, please keep perspective on how bad any given violation is. Not all GPL violations are of equal gravity, but all should be resolved, of course. The Emacs developers are on it.

    Posted on Friday 29 July 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-07-07: Project Harmony (and “Next Generation Contributor Agreements”) Considered Harmful

    Update on 2014-06-10: While this article is about a specific series of attempts to “unify” CLAs and ©AAs into a single set of documents, the issues raised below cover the gamut of problems that are encountered in many CLAs and ©AAs in common use today in FLOSS projects. Even though it appears that Project Harmony and its reincarnation, the Next Generation Contributor Agreements, have both failed, CLAs and ©AAs are increasing in popularity among FLOSS projects, and developers should take action to oppose these agreements for their projects.

    Update on 2013-09-05: Project Harmony was recently relaunched under the name the Next Generation of Contributor Agreements. AFAICT, it's been publicly identified as the same initiative, and its funding comes from the same person. I've verified that everything I say below still applies to their current drafts available from the Contributor Agreements project. I also emailed these comments to the leaders of that project before it started, but they wouldn't respond to my policy questions.


    Much advertising is designed to convince us to buy or use something that we don't need. When I hear someone droning on about some new, wonderful thing, I have to worry that these folks are actually trying to market something to me.

    Very soon, you're likely to see a marketing blitz for this thing called Project Harmony (which just released its 1.0 version of document templates). Even the name itself is marketing: it's not actually descriptive, but is so named to market a “good feeling” about the project before even knowing what it is. (It's also got serious namespace collision, including with a project already in the software freedom community.)

    Project Harmony markets itself as fixing something that our community doesn't really consider broken. Project Harmony is a set of document templates, primarily promulgated and mostly drafted by corporate lawyers, that entice developers to give control of their software work over to companies.

    My analysis below is primarily about how these agreements are problematic for individual developers. An analysis of the agreements as used between companies or organizations may reach the same or different conclusions; I just haven't done that analysis in detail, so I don't know what the outcome would be.

    [ BTW, I'm aware that I've failed to provide a TL;DR version of this article. I tried twice to write one and ultimately decided that I can't. Simply put, these issues are complex, and I had to draw on a decade of software freedom licensing, policy, and organizational knowledge to fully articulate what's wrong with the Project Harmony agreements. I realize that sounds like a It was hard to write — it should be hard to read justification, but I just don't know how to summarize these Gordian problems in a pithy way. I nevertheless hope developers will take the time to read this before they sign a Project Harmony agreement, or — indeed — any CLA or ©AA. ]

    Copyright Assignment That Lacks Real Assurances

    First of all, about half of Project Harmony is copyright assignment agreements (©AAs). Assigning copyright completely gives the work over to someone else. Once the ©AA is signed, the work ceases to belong to the assignor. It's as if that work was done by the assignee. There is admittedly some value to copyright assignment, particularly if developers want to ensure that the GPL or other copyleft is enforced on their work and they don't have time to do it themselves. (Although developers can also designate an enforcement agent to do that on their behalf even if they don't assign copyright, so even that necessity is limited.)

    One must immensely trust an assignee organization. Personally, I've only ever assigned some of my copyrights to one organization in my life: the Free Software Foundation, because FSF is the only organization I ever encountered that is institutionally committed to DTRT'ing with copyrights in a manner similar to my personal moral beliefs.

    First of all, as I've written about before, FSF's ©AA makes all sorts of promises back to the assignor. Second, FSF is institutionally committed to the GPL and to enforcing the GPL in a way that advances FSF's non-profit advocacy mission for software freedom. All of this activity fits my moral principles, so I've been willing to sign FSF's ©AAs.

    Yet, I've nevertheless met many developers who refuse to sign FSF's ©AAs. While many such developers like the GPL, they don't necessarily agree with the FSF's moral positions. Indeed, in many cases, developers are completely opposed to assigning copyright to anyone, FSF or otherwise. For example, Linus Torvalds, founder of Linux, has often stated on record that he never wanted to do copyright assignments, for several reasons: [he] think[s] they are nasty and wrong personally, and [he]'d hate all the paperwork, and [he] thinks it would actually detract from the development model.

    Obviously, my position is not as radical as Linus'; I do think ©AAs can sometimes be appropriate. But, I also believe that developers should never assign copyright to a company or to an organization whose moral philosophy doesn't fit well with their own.

    FSF, for its part, spells out its moral position in its ©AA itself. As I've mentioned elsewhere, and as Groklaw recently covered in detail, FSF's ©AA makes various legally binding promises to developers who sign it. Meanwhile, Project Harmony's ©AAs, while they put forward a few options that look vaguely acceptable (although they have problems of their own, discussed below), make no such promises mandatory. I have often pointed Harmony's drafters to the terms that FSF has proposed should be mandatory in any for-profit company's ©AA, but Harmony's drafters have refused to incorporate these assurances as a required part of Harmony's agreements. (Note that such assurances would still be required for the CLA options as well; see below for details on why.)

    Regarding ©AAs, I'd like to note finally that FSF does not require ©AAs for all GNU packages. This confusion is so common that I'd like to draw attention to it, even though it's only a tangential point in this context. FSF's ©AA is only mandatory, to my knowledge, on those GNU packages where either (a) FSF employees developed the first versions or (b) the original developers themselves asked to assign copyright to FSF upon their project joining GNU. In all other cases, FSF assignment is optional. Some GNU projects, such as GNOME, have their own positions regarding ©AAs that differ radically from FSF's. I seriously doubt that companies who adopt Project Harmony's agreements will ever be as flexible on copyright assignment as FSF, nor will any of the possible Project Harmony options be acceptable to GNOME's existing policy.

    Giving Away Rights to Give Companies Warm Fuzzies?

    Project Harmony, however, claims that the important part isn't its ©AA, but its Contributor License Agreement (CLA). To briefly consider the history of Free Software CLAs, note that the Apache CLA was likely the first CLA used in the Free Software community. Apache Software Foundation has always been heavily influenced by IBM and other companies, and such companies have generally sought the “warm fuzzies” of getting every contributor to formally assent to a complex legal document that asserts various assurances about the code and gives certain powers to the company.

    The main point of a CLA (and a somewhat valid one) is to ensure that the developers have verified their right to contribute the code under the specified copyright license. Both the Apache CLA and Project Harmony's CLA go to great lengths, and great verbosity, to require developers to agree that they know the contribution is theirs. In fact, if a developer signs one of these CLAs, the developer makes a formal contract with the entity (usually a for-profit company) attesting that the developer knows for sure the contribution is licensed under the specified license. The developer then takes on all liability if that fact is in any way incorrect or in dispute!

    Of course, shifting away all liability about the origins of the code is a great big “warm fuzzy” for the company's lawyers. Those lawyers know that they can now easily sue an individual developer for breach of contract if the developer was wrong about the code. If the company redistributes some developer's code and ends up in an infringement suit where the company has to pay millions of dollars, they can easily come back and sue the developer0. The company would argue in court that the developer breached the CLA. If this possible outcome doesn't immediately worry you as an individual developer signing a Project Harmony CLA for your FLOSS contribution, it should.

    “Choice of Law” & Contractual Arrangement Muddies Copyright Claims

    Apache's CLA doesn't have a choice of law clause, which is preferable in my opinion. Most lawyers just love a “choice of law” clause for various reasons. The biggest reason is that it means the rules that apply to the agreement are the ones with which the lawyers are most familiar, and the jurisdiction for disputes will be the local jurisdiction of the company, not of the developer. In addition, lawyers often pick particular jurisdictions that are very favorable to their client and not as favorable to the other signers.

    Unfortunately, all of Project Harmony's drafts include a “choice of law” clause1. I expect that the drafters will argue in response that the jurisdiction is a configuration variable. However, the problem is that the company decides the binding of that variable, which almost always won't be the binding that an individual developer prefers. The term will likely be non-negotiable at that point, even though it was configurable in the template.

    Not only that, but imagine a much more likely scenario about the CLA: the company fails to use the outbound license they promised. For example, suppose they promised the developers it'd be AGPL'd forever (although, no such option actually exists in Project Harmony, as described below!), but then the company releases proprietarized versions. The developers who signed the CLA are still copyright holders, so they can enforce under copyright law, which, by itself, would allow the developers to enforce under the laws in whatever jurisdiction suits them (assuming the infringement is happening in that jurisdiction, of course).

    However, by signing a CLA with a “choice of law” clause, the developers agreed to whatever jurisdiction is stated in that CLA. The CLA has now turned what would otherwise be a mundane copyright enforcement action operating purely under the developer's local copyright law into a contract dispute between the developers and the company under the chosen jurisdiction's laws. Obviously that agreement might include AGPL and/or GPL by reference, but the claim of copyright infringement due to violation of GPL is now muddied by the CLA contract that the developers signed, wherein the developers granted some rights and permission beyond GPL to the company.

    Even worse, if the developer does bring action in their own jurisdiction, their local court is forced to interpret the laws of another place. This leads to highly variable and confusing results.

    Problems for Individual Copyright Enforcement Against Third-Parties

    Furthermore, even though individual developers still hold the copyrights, the Project Harmony CLAs grant many transferable rights and permissions to the CLA recipient (again, usually a company). Even if the reasons for requiring that were noble, it introduces a bundle of extra permissions that can be passed along to other entities.

    Suddenly, what was once a simple copyright enforcement action for a developer discovering a copyleft violation becomes a question: Did this violating entity somehow receive special permissions from the CLA-collecting entity? Violators will quickly become aware of this defense. While the defense may not have merit (i.e., the CLA recipient may not even know the violator), it introduces confusion. Most legal proceedings involving software are already confusing enough for courts due to the complex technology involved. Adding something like this will just cause trouble and delays, further taxing our already minimally funded community copyleft enforcement efforts.

    Inbound=Outbound Is All You Need

    Meanwhile, the whole CLA question actually is but one fundamental consideration: Do we need this? Project Harmony's answer is clear: its proponents claim that there is mass confusion about CLAs and no standardization, and therefore Project Harmony must give a standard set of agreements that embody all the options that are typically used.

    Yet, Project Harmony has purposely refused to offer the simplest and most popular option of all, which my colleague Richard Fontana (a lawyer at Red Hat who also opposes Project Harmony) last year dubbed inbound=outbound. Specifically, the default agreement in the overwhelming majority of FLOSS projects is simply this: each contributor agrees to license each contribution using the project's specified copyright license (or a license compatible with the project's license).
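
    As a concrete illustration, a project might implement inbound=outbound with a single sentence in its contribution guidelines (the file name and wording below are hypothetical, not drawn from any particular project):

    # A hypothetical inbound=outbound statement in a CONTRIBUTING file:
    #
    #   By submitting a patch to this project, you agree that your
    #   contribution is licensed under the project's license,
    #   GPLv2-or-later.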

    No matter what way you dice Project Harmony, the other contractual problems described above make true inbound=outbound impossible because the CLA recipient is never actually bound formally by the project's license itself. Meanwhile, even under its best configuration, Project Harmony can't adequately approximate inbound=outbound. Specifically, Project Harmony attempts to limit outbound licensing with its § 2.3 (called Outbound License). However, all the copyleft versions of this template include a clause that says: We [the CLA recipient] agree to license the Contribution … under terms of the … licenses which We are using on the Submission Date for the Material. Yet, there is no way for the contributor to reliably verify what licenses are in use privately by the entity receiving the CLA. If the entity is already engaged in, for example, a proprietary relicensing business model on the Submission Date, then the contributor grants permission for such relicensing on the new contribution, even if the rest of § 2.3 promises copyleft. This is not a hypothetical: there have been many cases where it was unclear whether a company was engaged in proprietary relicensing, and then later it was discovered that they had been privately doing so for years. As written, therefore, every configuration of Project Harmony's § 2.3 is useless to prevent proprietarization.

    Even if that bug were fixed, the closest Project Harmony gets to inbound=outbound is restricting the CLA version to “FSF's list of ‘recommended copyleft licenses’”. However, this category makes no distinction between the AGPL and GPL, and furthermore ultimately grants FSF power over relicensing (as FSF can change its list of recommended copylefts at will). If the contributors are serious about the AGPL, then Project Harmony cannot assure their changes stay AGPL'd. Furthermore, contributors must trust the FSF for perpetuity, even more than already needed in the -or-later options in the existing FSF-authored licenses. I'm all for trusting the FSF myself in most cases. However, because I prefer plain AGPLv3-or-later for my code, Project Harmony is completely unable to accommodate my licensing preferences to even approximate an AGPL version of inbound=outbound (even if I ignored the numerous problems already discussed).

    Meanwhile, the normal, mundane, and already widely used inbound=outbound practice is simple, effective, and doesn't mix in complicated contract disputes and control structures with the project's governance. In essence, for most FLOSS projects, the copyright license of the project serves as the Constitution of the project, and doesn't mix in any other complications. Project Harmony seeks to give warm fuzzies to lawyers at the expense of offloading liability, annoyance, and extra hoop-jumping onto developers.

    Linux Hackers Ingeniously Trailblazed inbound=outbound

    Almost exactly 10 years ago today, I recall distinctly attending the USENIX 2001 Linux BoF session. At that session, Ted Ts'o and I had a rather lively debate; I claimed that FSF's ©AA assured legal certainty of the GNU codebase, but that Linux had no such assurance. (BTW, even I was confused in those days and thought all GNU packages required FSF's ©AA.) Ted explained, in his usual clear and bright manner, that such heavy-handed methods shouldn't be needed to give legal certainty to the GPL and that the Linux community wanted to find an alternative.

    I walked away skeptically shaking my head. I remember thinking: Ted just doesn't get it. But I was wrong; he did get it. In fact, many of the core Linux developers did. Three years to the month after that public conversation with Ted, the Developer's Certificate of Origin (DCO) became the official required way to handle the “CLA issue” for Linux and it remains the policy of Linux today. (See item 12 in Linux's Documentation/SubmittingPatches file.)

    The DCO, in fact, is the only CLA any FLOSS project ever needs! It implements inbound=outbound in a simple and straightforward way, without giving special powers over to any particular company or entity. Developers keep their own copyright and they unilaterally attest to their right to contribute and the license of the contribution. (Developers can even sign a ©AA with some other entity, such as the FSF, if they wish.) The DCO also gives a simple methodology (i.e., the Signed-off-by: tag) for developers to so attest.
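
    For those who've never contributed to a project that uses the DCO, the mechanics are minimal: just a one-line trailer on each commit. A brief sketch with git (the commit message and identity shown are hypothetical):

    # git appends a Signed-off-by: trailer, taken from your configured
    # identity, which records your attestation to the DCO:
    git commit -s -m "Fix off-by-one error in input parsing"

    # The resulting commit message ends with a line like:
    #     Signed-off-by: Jane Hacker <jane@example.org>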

    I admit that I once scoffed at the (what I then considered naïve) simplicity of the DCO when compared to FSF's ©AA. Yet, I've been since convinced that the Linux DCO clearly accomplishes the primary job and simultaneously fits how most developers like to work. ©AA's have their place, particularly when the developers find a trusted organization that aligns with their personal moral code and will enforce copyleft for them. However, for CLAs, the Linux DCO gets the important job done and tosses aside the pointless and pro-corporate stuff.

    Frankly, if I have to choose between making things easy for developers and making them easy for corporate lawyers, I'm going to choose the former every time: developers actually write the code; while, most of the time, companies' legal departments just get in our way. The FLOSS community needs just enough CYA stuff to get by; the DCO shows what's actually necessary, as opposed to what corporate attorneys wish they could get developers to do.

    What about Relicensing?

    Admittedly, Linux's DCO does not allow for relicensing wholesale of the code by some single entity; it's indeed the reason a Linux switch to GPLv3 will be an arduous task of public processes to ensure permission to make the change. However, it's important to note that the Linux culture believes in GPLv2-only as a moral foundation and principle of their community. It's not a principle I espouse; most of my readers know that my preferred software license is AGPLv3-or-later. However, that's the point here: inbound=outbound is the way a FLOSS community implements their morality; Project Harmony seeks to remove community license decision-making from most projects.

    Meanwhile, I'm all for the “-or-later” brand of relicensing permission; GPL, LGPL and AGPL have left this as an option for community choice since GPLv1 was published in the late 1980s. Projects declare themselves GPLv2-or-later or LGPLv3-or-later, or even (GPLv1-or-later|Artistic) (ala Perl 5) to identify their culture and relicensing permissions. While it would sometimes be nice to have broad post-hoc relicensing authority, the price for it is steep: abandoning community clarity regarding what terms define their software development culture.

    An Anti-Strong-Copyleft Bias?

    Even worse, Project Harmony remains biased against some of the more fine-grained versions of copyleft culture. For example, Allison Randal, who is heavily involved with Project Harmony, argued on Linux Outlaws Episode 204 that Most developers who contribute under a copyleft license — they'd be happy with any copyleft license — AGPL, GPL, LGPL. Yet there are well stated reasons why developers might pick GPL rather than LGPL. Thus, giving a for-profit company (or non-profit that doesn't necessarily share the developers' values) unilateral decision-making power to relicense GPL'd works under LGPL or other weak copyleft licenses is ludicrous.

    In its 1.0 release, Project Harmony attempted to add a “strong copyleft only” option. It doesn't actually work, of course, for the various reasons discussed in detail above. But even so, this solution is just one option among many, and is not required as a default when a project is otherwise copylefted.

    Finally, it's important to realize that the GPLv3, AGPLv3, and LGPLv3 already offer a “proxy option”; projects can name someone to decide the -or-later question at a later time. So, for those projects that use any of the set { LGPLv3-only, AGPLv3-only, GPLv3-only, GPLv2-or-later, GPLv1-or-later, or LGPLv2.1-or-later }, the developers already have mechanisms to move to later versions of the license with ease — by specifying a proxy. There is no need for a CLA to accomplish that task in the GPL family of licenses, unless the goal is to erode stronger copylefts into weaker copylefts.
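
    To make the proxy mechanism concrete, here's a sketch of how a project might designate one in its license notices (the project name and exact wording are hypothetical; GPLv3 § 14 contains the governing language):

    # A hypothetical per-file license notice designating a GPL-version proxy:
    #
    #   This file is part of ExampleProject, released under the GNU
    #   General Public License, version 3 only.  The ExampleProject
    #   Steering Committee is designated as a proxy (per GPLv3 § 14)
    #   to decide whether any later version of the GPL may be applied.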

    This is No Creative Commons, But Even If It Were, Is It Worth Emulation?

    Project Harmony's proponents love to compare the project to Creative Commons, but the comparison isn't particularly apt. Furthermore, I'm not convinced the FLOSS community should emulate the CC license suite wholesale, as some of the aspects of the CC structure are problematic when imported back into FLOSS licensing.

    First of all, Larry Lessig (who is widely considered a visionary) started the CC licensing suite to bootstrap a Free Culture movement modeled on the software freedom movement (which he had spent a decade studying). However, Lessig made some moral compromises in an attempt to build a bridge to the “some rights reserved” mentality. As such, many of the CC licenses — notably those that include the non-commercial (NC) or no-derivatives (ND) terms — are considered overly restrictive of freedom and are therefore shunned by Free Culture activists and software freedom advocates alike.

    Over nearly a decade, such advocates have slowly begun to convince copyright holders to avoid CC's NC and ND options, but CC's own continued promulgation of those options lends them undue legitimacy. Thus, CC and Project Harmony make the same mistake: they act amorally in an attempt to build a structure of licenses/agreements that tries to bridge a gulf in understanding between a FaiF community and those only barely dipping their toe in that community. I chose the word amoral, as I often do, to note a situation where important moral principles exist, but the primary actors involved seek to remove morality from the considerations under the guise of leaving decision-making to the “magic of the marketplace”. Project Harmony is repeating the mistake of the CC license suite that the Free Culture community has spent a decade (and counting) cleaning up.

    Conclusions

    Please note that IANAL and TINLA. I'm just a community- and individual-developer-focused software freedom policy wonk who has some grave concerns about how these Project Harmony Agreements operate. I can't give you a fine-grained legal analysis, because I'm frankly only an amateur when it comes to the law, but I am an expert in software freedom project policy. In that vein — corporate attorney endorsements notwithstanding — my opinion is that Project Harmony should be abandoned entirely.

    In fact, the distinction between policy and legal expertise actually shows the root of the problem with Project Harmony. It's a system of documents designed by a committee primarily comprised of corporate attorneys, yet it's offered up as if it's a FLOSS developer consensus. Indeed, Project Harmony itself was initiated by Amanda Brock, a for-profit corporate attorney for Canonical, Ltd., who remains involved in its drafting. Canonical, Ltd. later hired Mark Radcliffe (a big law firm attorney, who has defended GPL violators) to draft the alpha revisions of the document, and Radcliffe remains involved in the process. Furthermore, the primary drafting process was done secretly in closed meetings dominated by corporate attorneys until the documents were almost complete; the process was not made publicly open to the FLOSS community until April 2011. The 1.0 documents differ little from the drafts that were released in April 2011, and thus remain to this day primarily documents drafted in secrecy by corporate attorneys who have only a passing familiarity with software freedom culture.

    Meanwhile, I've asked Project Harmony's advocates many times who is in charge of Project Harmony now, and no one can give me a straight answer. One is left to wonder who decides final draft approval and what process exists to prevent or permit text in the drafts. The process, which once was conducted in secrecy, now appears to be in chaos because it was opened up too late for fundamental problems to be resolved.

    A few developers are indeed actively involved in Project Harmony. But Project Harmony is not something that most developers requested; it was initiated by companies who would like to convince developers to passively adopt overreaching CLAs and ©AAs. To me, the whole Project Harmony process feels like a war of attrition to convince developers to accept something that they don't necessarily want with minimal dissent. In short, the need for Project Harmony has not been fully articulated to developers.

    Finally, I ask, what's really broken here? The industry has been steadily and widely adopting GNU and Linux for years. GNU, for its part, has FSF assignments in place for many of its earlier projects, but the later projects (GNOME, in particular) have either been against both ©AA's and CLA's entirely, or are mostly indifferent to them and use inbound=outbound. Linux, for its part, uses the DCO, which does the job of handling the urgent and important parts of a CLA without getting in developers' way, without forcing extra liabilities onto the developers, and without handing over important licensing decisions (including copyleft-weakening ones) to a single (usually for-profit) entity.

    In short, Project Harmony is a design-flawed solution looking for a problem.

    Further Reading


    0Project Harmony advocates will likely claim that their § 5, “Consequential Damage Waiver”, protects developers adequately. I note that it explicitly leaves out, for example, statutory damages for copyright infringement. Also, some types of damages cannot be waived (which is why that section shouts at the reader TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW). Note my discussion of jurisdictions in the main text of this article, and consider the fact that the CLA recipient will obviously select a jurisdiction where the fewest possible damages can be waived. Finally, note that the OR US part of that § 5 is optionally available, and surely corporate attorneys will use it, which means that if they violate the agreement, there's basically no way for you to get any damages from them, even if they break their promise to keep the code copylefted.

    1Note: Earlier versions of this blog post conflated slightly “choice of venue” with “choice of law”. The wording has been cleared up to address this problem. Please comment or email me if you believe it's not adequately corrected.

    Posted on Thursday 07 July 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-07-04: Identi.ca Weekly Summary

    Identi.ca Summary, 2011-06-26 through 2011-07-04

    Posted on Monday 04 July 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

June

May

  • 2011-05-31: Should a Power-User Key Mapping Change Be This Difficult?

    It's been some time since X made me hate computing, but it happened again today (well, yesterday into the early hours of today, actually).

    I got the stupid idea to upgrade to squeeze from lenny yesterday. I was at work, but it was actually a holiday in the USA, and I figured it would be a good time to do some sysadmin work instead of my usual work.

    I admittedly had some things to fix that were my fault: I had backports and other mess installed, but once I removed them, the upgrade itself was more-or-less smooth. I faced only a minor problem with my MD device for /boot not starting properly, but the upgrade warned me that I needed to switch to properly using the UUIDs for my RAID arrays, and once I corrected that, everything booted fine, even with GRUB2 on my old hardware.
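
    (For anyone hitting the same warning: the change amounts to referring to filesystems by UUID rather than by device name. A sketch of the sort of /etc/fstab line involved; the UUID, device name, and filesystem type here are made up, and blkid reports the real UUID:)

    # /etc/fstab: refer to the /boot filesystem by UUID rather than
    # by a device name like /dev/md0 (hypothetical values throughout):
    UUID=3e6be9de-8139-4a4c-9106-a43f08d823a6  /boot  ext3  defaults  0  2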

    Once I was in X, things got weird, keyboard-wise. My meta and alt keys weren't working. BTW, I separate Alt from Meta, making my actual Alt key into a meta key, while my lower control is set to an Alt (ala Mod2), since I throw away caps lock and make it a control. (This is for when I'm on the laptop keyboard rather than the HHKB.)

    I've used the same xmodmap for two decades to get this done:

    keycode 22 = BackSpace

    ! wipe the modifier mappings that are rebuilt below
    clear Mod1
    clear Mod2
    clear Lock
    clear Control

    ! the caps lock position becomes a Control key
    keycode 66  = Control_L

    ! the physical Alt keys become Meta; the lower Control becomes Alt
    keycode 64 = Meta_L
    keycode 113 = Meta_R
    keycode 37 = Alt_L
    keycode 109 = Alt_R

    add Control = Control_L

    ! Meta lives on Mod1, Alt on Mod2
    add Mod1 = Meta_L
    add Mod1 = Meta_R

    add Mod2 = Alt_L
    add Mod2 = Alt_R


    This just “doesn't work” in squeeze (or, presumably, on any Xorg 7.5 system). Instead, it gives this error message:

    X Error of failed request:  BadValue (integer parameter out of range for operation)
      Major opcode of failed request:  118 (X_SetModifierMapping)
      Value in failed request:  0x17
      Serial number of failed request:  21
      Current serial number in output stream:  21
    
    … and while my Control key ends up fine, it leaves me with no Mod1 nor Mod2 key.

    There appear to be at least two Debian bugs (564327 and 432011), which were filed against squeeze before it was released. In retrospect, I sure wish they'd been release-critical! (There's also an Ubuntu bug, which of course just punts to the upstream Debian bug.) There are also two further upstream bugs at freedesktop.org (20145 and 11822), although Daniel Stone thinks the main problem might be fixed upstream.

    I gather that many people “in the know” believe xmodmap to be deprecated, and that we all should have switched to xkb years ago. I even got snarky comments to that effect. (Update:) However, after I made this first post, quite angry after 8 hours of just trying to make my Alt key DTRT, I was elated to see Daniel Stone indicate that xmodmap should be backwards compatible. Almost every time I get pissed off about some Free Software not working, a developer shows up and tells me they want to fix it. This is in some ways just as valuable as the thing being fixed: knowing that the developer doesn't want the bug to be there — it means it'll be fixed eventually and only patience is required.

    However, the bigger problem really is that xkb appears to lack good documentation. If any exists, I can't find it. madduck did this useful blog post (and, later, vinc17 showed me some docs he was working on too). These are basically the only things I could find that were real help on the issue, and they were sparse. I was able to learn, after hours, that this should be the rough equivalent to my old modmap:

    partial modifier_keys
    xkb_symbols "thinkpad" {
        replace key <CAPS>  {  [ Control_L, Control_L ] };
        modifier_map  Control { <CAPS> };
        replace key <LALT>  {  [ Meta_L ] };
        modifier_map Mod1   { Meta_L, Meta_R };
        key <LCTL> { [ Alt_L ] };
        modifier_map Mod2 { Alt_L };
    };
    

    But, you can't just load that with a program! No, it must be placed in a file called /path/symbols/bkuhn, which is then loaded with an incantation like this:

    xkb_keymap {
            xkb_keycodes  { include "evdev+aliases(qwerty)" };
            xkb_types     { include "complete"      };
            xkb_compat    { include "complete"      };
            xkb_symbols   { include "pc+us+inet(evdev)+bkuhn(thinkpad)"     };
            xkb_geometry  { include "pc(pc105)"     };
    };
    

    …which, in turn, must be fed as stdin into: xkbcomp -I/path - $DISPLAY. Oh, and did I mention you have to get the majority of that stuff above by running setxkbmap -print, then modify it to add the bkuhn(thinkpad) part? I'm impressed that madduck figured this all out. I mean, I know xmodmap was arcane incantations and all, but this is supposed to be clearer and better for users wanting to change key mappings? WTF!?!
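
    Pulling the whole incantation together, the load step ends up roughly like this (a sketch, assuming the xkb_symbols include line that setxkbmap -print emits matches the keymap shown above):

    # Dump the current keymap, splice the custom symbols into the
    # xkb_symbols include line, and compile it into the running server:
    setxkbmap -print \
        | sed 's/+inet(evdev)"/+inet(evdev)+bkuhn(thinkpad)"/' \
        | xkbcomp -I/path - $DISPLAY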

    Oh, so, BTW, my code in /path/symbols/bkuhn didn't work. I tried every incantation I could think of, but I couldn't get it to think about Alt and Meta as separate Mod2 and Mod1 keys. I think it's actually a bug, because weird things happened when I added lines like:

        modifier_map Mod5 { <META> };
    
    Namely, when I added the above line to my /path/symbols/bkuhn, the Mod2 was then picked up correctly (magically!), but then both LCTL and LALT acted like a Mod2, and I still had no Mod1! Frankly, I was too desperate to get back to my 20 years of keystroke memory to try to document what was going on well enough for a coherent bug report. (Remember, I was doing all this on a laptop where my control key kept MAKING ME SHOUT INSTEAD OF DOING ITS JOB.)

    I finally got the idea to give up entirely on Mod2 and see if I could force the literal LCTL key to be a Mod3, hopefully allowing Emacs to again see my usual Mod1 Meta expectations for LALT. So, I saw what some of the code in /usr/share/X11/xkb/symbols/altwin did to handle Mod3, and I got this working (it required a sawfish change to expect Mod3 instead of Mod2, of course, but that part was 5 seconds of search and replace). Here's what finally worked as the contents of /path/symbols/bkuhn:

    partial modifier_keys
    xkb_symbols "thinkpad" {
        // the caps lock position stays a Control modifier
        modifier_map  Control { <CAPS> };
        // left Alt produces Meta, bound to Mod1 for Emacs
        replace key <LALT>  {  [ Meta_L ] };
        modifier_map Mod1   { Meta_L };
        // left Control produces Super, bound to Mod3
        // (sawfish was changed to expect Mod3 instead of Mod2)
        key <LCTL> { type[Group1] = "ONE_LEVEL",
                     symbols[Group1] = [ Super_L ] };
        modifier_map Mod3 { Super_L };
    };
    

    So, is all this really less arcane than xmodmap? Were the eight hours of my life spent learning xkb somehow worth it, because now I know a better tool than xmodmap? I realize I'm a power user, but I'm not convinced that it should be this hard even for power users. It felt reminiscent of the days when I had to use Eric Raymond's mode timings howto to get X working. That was actually easier than this!

    Even though spot claimed this is somehow Debian's fault, I don't believe him. I bet I would run into the same problem on any system using Xorg 7.5. There are clearly known bugs in xmodmap, and I think there is probably a subtle bug I uncovered that exists in xkb, but I am not sure I can coherently report it without revisiting this horrible computing evening again. Clearly, that first thing I tried should not have made two keys be a Mod2, but only when I moved META into Mod5, right?

    BTW, if you're looking for me online early tomorrow, you hopefully know where I am. I'm going to bed two hours before my usual waketime. Ugh. (Update: tekk later typo'ed xmodmap as ‘xmodnap’ on identi.ca. Quite fitting; after working on that all night, I surely needed an xmodnap!)

    Update on 2013-04-03: I want to note that the X11 (and now Wayland) developer Daniel Stone took an interest in this bug and actually followed up with me two years later to give me a report. It is apparently really hard to fix without a lot of effort; I've switched to xkb (which I think is even more arcane), and it mostly works, except when I'm in Xnest. But my main point is that Daniel stuck with the problem, and while he didn't get resolution, he kept me posted. That's a dedicated Free Software developer; I'm just a random user, after all!

    Posted on Tuesday 31 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-05-26: Choosing A License

    Brett Smith of the FSF has announced a new tutorial available on the GNU website that gives advice about picking a license for your project.

    I'm glad that Brett wrote this tutorial. My typical answer when someone asks me which license to choose is to say: Use AGPLv3-or-later unless you can think of a good reason not to. That's a glib answer that is rarely helpful to the questioner. Brett's article is much better and more useful.

    For me, the particularly interesting outcome of the tutorial is how it finishes the turbulent trajectory of the FSF's relationship with Apache's license. Initially, there was substantial acrimony between the Apache Software Foundation and the FSF because version 2.0 of the Apache License is incompatible with GPLv2, a point on which the Apache Software Foundation has long disagreed with the FSF. You can even find cases where I was opining in the press about this back when I was Executive Director of the FSF.

    An important component of GPLv3 drafting was to reach out and mend relationships with the communities behind other useful software freedom licenses that had been drafted in the time since GPLv2 was released. Brett's article, published yesterday, shows the culmination of that fence-mending: Apache-2.0 is now not only compatible with the GPLv3 and AGPLv3, but is also the FSF's recommended permissive license!

    Posted on Thursday 26 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-05-19: Clarification on Android, its (Lack of) Copyleft-ness, and GPL Enforcement

    I'm grateful to Brian Proffitt for clarifying some of these confusions about Android licensing. In particular, I'm glad I'm not the only one who has cleared up the confusions that Edward J. Naughton keeps spreading regarding the GPL.

    I noted that Naughton even commented on Proffitt's article; the comment spreads even more confusion about the GPL. In particular, Naughton claims that most BusyBox GPL violations involve unmodified versions of BusyBox. That's just absolutely false, if for no other reason than that a binary is a modified version of the source code in the first place, and nearly all BusyBox GPL violations involve a binary-only version distributed without any source (nor an offer therefor).

    Mixed in with Naughton's constant confusions about what the GPL and LGPL actually require, he does have a possibly valid point lurking: there are a few components in Android/Linux that are under copyleft licenses, namely Linux (GPL) and WebKit (LGPL). Yet, in all of Naughton's screeching about this issue, I haven't seen any clear GPL or LGPL violation reports — all I see is speculation about what may or may not be a violation, without any actual facts presented.

    I'm pretty sure that I've spent more time reading and assessing the veracity of GPL violation reports than anyone on the planet. I don't talk about this part of it much: but there are, in fact, a lot of false alarms. I get emails every week from users who are confused about what the GPL and LGPL actually require, and I typically must send them back to collect more details before I can say with any certainty a GPL or LGPL violation has occurred.

    Of course, as a software freedom advocate, I'm deeply dismayed that Google, Motorola and others haven't seen fit to share a lot of the Android code in a meaningful way with the community; failure to share software is an affront to what the software freedom movement seeks to accomplish. However, every reliable report that I've seen indicates that there are no GPL nor LGPL violations present. Of course, if someone has evidence to the contrary, they should send it to those of us who do GPL enforcement. Meanwhile, despite Naughton's public claims that there are GPL and LGPL violations occurring, I've received no contact from him. Don't you think if he was really worried about getting a GPL or LGPL violation resolved, he'd contact the guy in the world most known for doing GPL enforcement and see if I could help?

    Of course, Naughton hasn't contacted me because he isn't really interested in software freedom. He's interested in getting press for himself, and writing vague reports about Android copyrights and licensing is a way to get lots of press. I put out now a public call to anyone who believes they haven't received source code that they were required to get under GPL or LGPL to get in touch with me and I'll try to help, or at the very least put you in touch with a copyright holder who can help do some enforcement with you. I don't, however, expect to see a message in my inbox from Naughton any time soon, nor do I expect him to actually write about the wide-spread GPL violations related to Android/Linux that Matthew Garrett has been finding. Garrett's findings are the real story about Android/Linux compliance, but it's presumably not headline-getting enough for Naughton to even care.

    Finally, Naughton is a lawyer. He has the skills at hand to actually help resolve GPL violations. If he really cared about GPL violations, he'd offer his pro bono help to copyright holders to assist in the overwhelming onslaught of GPL violations. I've written and spoken frequently about how I and others who enforce the GPL are really lacking in talented person-power to do more enforcement. Yet, again, I haven't received an offer from Naughton or these other lawyers who are opining about GPL non-compliance to help me get some actual GPL compliance done. I await their offers, but I'm certainly not expecting they'll be forthcoming.

    (BTW, you'll notice that I don't link to Naughton's actual article myself; I don't want to give him any more linkage than he's already gotten. I'm pretty aghast at the Huffington Post for giving a far-reaching soapbox to such shoddy commentary, but I suppose that I shouldn't expect better from a company owned by AOL.)

    Posted on Thursday 19 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-05-18: Germany Trip: Samba XP Keynote and LinuxTag Keynote

    I just returned a few days ago to the USA after one week in Germany. I visited Göttingen for my keynote at Samba XP (which I already blogged about). Attending Samba XP was an excellent experience, and I thank SerNet for sponsoring my trip there. Since going full-time at Conservancy last year, I have been trying to visit the conferences of each of Conservancy's member projects. It will probably take me years to do this, but given that Samba is one of Conservancy's charter members, it's good that I have finally visited Samba's annual conference. It was even better that they asked me to give a keynote talk at Samba XP.

    I must admit that I didn't follow the details of many of the talks other than Tridge's Samba 4 Status Report talk and Jeremy's The Death of File Protocols. This time I really mean it! talk. The rest, unsurprisingly, were highly specific and detailed about Samba, and since I haven't been a regular Samba user myself since 1996, I didn't have the background information required to grok the talks fully. But I did see a lot of excited developers, and it was absolutely wonderful to meet the entire Samba Team for the first time after exchanging email with them for so many years.

    It's funny to see how different communities tend to standardize around the same kinds of practices with minor tweaks. Having visited a lot of project-specific conferences for Conservancy's members, I'm seeing how each community does their conference, and one key thing all projects have in common is the same final conference session: a panel discussion with all the core developers.

    The Samba Team has their own little tweak on this. First, John Terpstra asks all speakers at the conference (which included me this year) to join the Samba Team and stand up in front of the audience. Then, the audience can ask any final questions of all speakers (this year, the attendees had none). Then, the Samba Team stands up in front of the crowd and takes questions.

    The Samba tweak on this model is that the Samba Team is not permitted to sit down during the Q&A. This year, it didn't last that long, but it was still rather amusing. I've never seen a developers' panel before where the developers couldn't sit down!

    After Samba XP, I headed “back” to Berlin (my flight had landed there on Saturday and I'd taken the Deutsche Bahn ICE train to Göttingen for Samba XP), and arrived just in time to attend LinuxNacht, the LinuxTag annual party. (WARNING: name dropping follows!) It was excellent to see Vincent Untz, Lennart Poettering, Michael Meeks and Stefano Zacchiroli at the party (listed in order I saw them at the party).

    The next day I attended Vincent's talk, which was about cross-distribution collaboration. It was a good talk, although I think Vincent glossed over the fact that many distributions (Fedora, Ubuntu, and OpenSUSE, specifically) are controlled by companies, and that cross-distribution collaboration is complicated by this corporate influence. I talked with Vincent in more detail about this later, and he argued that the developers at the companies in question have a lot of freedom to operate, but I maintain there are subtle (and sometimes not so subtle) influences that cause problems for cross-distribution collaboration. I also encouraged Vincent to listen to Richard Fontana's talk, Open Source Projects and Corporate Entanglement, that Karen and I released as an episode of the FaiF oggcast.

    I also attended Martin Michlmayr's talk on SPDX. I kibitzed more than I should have from the audience, pointing out that while SPDX is a good “first start”, it's a bit of a “too little, too late” attempt to address and prevent the flood of GPL violations that are now all too common. I believe SPDX is a great tool for those who already are generally in compliance, but it isn't very likely to impact the more common violations, wherein the companies just ignore their GPL obligations. A lively debate ensued on this topic. I frankly hope to be proved wrong on this; if SPDX actually ends or reduces GPL violations, I'll be happy to work on something else instead.

    On Friday afternoon, I gave my second keynote of the week, which was an updated version of my talk, 12 Years of GPL Compliance: A Historical Perspective. It went well, although I misunderstood and thought I had a full hour slot, but only actually had a 50 minute slot, so I had to rush a bit at the end. I really do hate rushing at the end when speaking primarily to a non-native-English-speaking audience, as I know I'm capable of speaking English way too fast (a problem that I am constantly vigilant about under normal public speaking circumstances).

    The talk was nevertheless pretty well received, and afterward, I was surrounded by a gaggle of interested copyleft enthusiasts, who, as always, were asking what more can be done to enforce the GPL. My talks on enforcement always tend to elicit this reaction, since my final slides are a bit depressing with regard to the volume of GPL enforcement that's currently occurring.

    Meanwhile, I decided I should start putting up slides from my talks in a more accessible fashion. Since I use S5 (although I hope to switch to jQuery S5 RSN), my slides are trivially web-publishable anyway. While I've generally published the source code to my slides, it makes sense to also make compiled, quickly viewable versions of them available on my website. Finally, I realized I should also put my upcoming public speaking events on my frontpage, and have done so.

    After a late lunch on Friday, I saw only the very end of Lennart's talk on systemd, and then I visited for a while with Claudia Rauch, Business Manager of KDE e.V., in the KDE booth. Claudia kindly helped me practice my German a bit by speaking slowly enough that I could actually parse the words.

    I must admit I was pretty frustrated all week that my German is now so poor. I studied German for two years in high school and one semester in college. I even participated in a three-week student exchange trip to a Gymnasium (the German term for college-prep high school) in Munich in 1990. Yet, my German speaking skills are just a degraded version of what they once were.

    Meanwhile, I did rather like Berlin's Tegel airport (TXL). It's a pretty small airport, but I really like its layout. Because of its small size, each check-in area is attached to a security checkpoint, which is then directly connected to the gate. While this might seem a bit tight, it makes it very easy to check in, go through security, and then be right at your gate. I can understand why an airport this small would have to be closed (it's slated for closure in 2012), but I am glad that I got a chance to travel to it (and probably again, for the Desktop Summit) before it closes.

    Posted on Wednesday 18 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-05-10: Samba XP Keynote, Jeremy's GPLv3 talk, & GPLv2/LGPLv3

    This morning, I gave the keynote talk at Samba XP. I was really honored to be invited to speak at Samba XP (the Samba Developers and Users Conference).

    My talk, entitled Samba, GPL Enforcement, and the GPLv3, was about GPL enforcement and how it relates to the Samba project and embedded devices. I've pushed my slides to my gitorious “talks” project. That's of course just the source code of the slides. Previously, some folks complained that they had trouble building the slides because they didn't have pandoc or other such dependencies installed. (I do, BTW, believe that my Installation Information is adequate, even though the talk isn't GPLv3'd, but it does have some dependencies :). Anyway, I've put up an installed version of my Samba XP slides as well.
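
    (If you want to try such a build yourself, here's a minimal sketch of the sort of invocation involved, assuming a pandoc-markdown source file for an S5 slide show; slides.md is a made-up file name, not one from my repository:)

     $ # Render a pandoc-markdown source into a standalone S5 slide show.
     $ pandoc -t s5 -s slides.md -o slides.html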

    Some have asked if there's a recording of the talk. I see video cameras and the like here at Samba XP, and I will try to get the audio for a future FaiFCast.

    Speaking of FaiFCast, Karen and I timed it (mostly by luck) so that, while I'm at Samba XP, we'd release FaiF 0x0F, which includes audio from Jeremy's Linux Collaboration Summit talk about why Samba chose to switch to GPLv3. BTW, I'm sorry there are no detailed show notes this week; being at Samba XP the last few days left me no time to write them. However, the main thing you need is Jeremy's slides, which are linked from the show notes section.

    Later this week, I'm giving the Friday keynote at LinuxTag, also on GPL enforcement (it's at 13:00 on Friday 2011-05-13). I hope those of you who can come to Berlin will come see my talk!

    Finally, Ivo de Decker in the audience at Samba XP asked about LGPLv3/GPLv2 incompatibility. In my answer to the question, I noted the GPL Compatibility Matrix on the GNU site. Also, regarding the specific LGPLv3 compatibility issue, I mentioned a post I made last year on the GNOME desktop-devel-list about the LGPLv3/GPLv2 issue. I promised that I'd also quote that post here in my blog, so that there was a stable URL that discussed the issue. I therefore quote the relevant parts of that email here:

    The most important point [about GPLv2-only/LGPLv3-or-later incompatibility] I'd like to make is to suggest a possible compromise. Specifically, I suggest disjunctive licensing, (GPLv2|LGPLv3-or-later), which could be implemented like this:

    This program's license gives you software freedom; you can copy, modify, convey, propagate, and/or redistribute this software under the terms of either:

    • the GNU Lesser General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
    • OR
    • the GNU General Public License, version 2 only, as published by the Free Software Foundation.

    In addition, when you convey, distribute, and/or propagate this software and/or modified versions thereof, you may also preserve this notice so that recipients of such distributions will also have both licensing options described above.

    A good moniker for this license is (GPLv2|LGPLv3-or-later). It actually gives 3+ licensing options to downstream: they can continue under the full (GPLv2|LGPLv3-or-later), or they can use GPLv2-only, or they can use LGPLv3 (or any later version of the LGPL).

    Some folks will probably note this isn't that different from LGPLv2.1-or-later. The key difference, though, is that it removes LGPLv2.1 from the mix. If you've read the LGPLv2.1 lately, you've seen that it really shows its age. LGPLv3 is a much better implementation of the weak copyleft idea. If any license needs deprecation, it's LGPLv2.1. I thus personally believe an upgrade to (GPLv2|LGPLv3-or-later) is worth doing right away.

    I note, BTW, that existing code licensed LGPLv2.1-or-later has already given permission to migrate to the license (GPLv2|LGPLv3-or-later). Specifically, LGPLv2.1 permits you to license the work under GPLv2 if you want to. Furthermore, LGPLv2.1-or-later permits you to license under LGPLv3-or-later. Therefore, LGPLv2.1-or-later can, at anyone's option, be upgraded to (GPLv2|LGPLv3-or-later).

    Note the incompatibility exists on both [GPLv2-only and LGPLv3] sides (it proverbially takes two to tango), but the incompatibility centers primarily around the strong copyleft on the GPLv2 side, not the weak copyleft on the LGPLv3 side. Specifically, GPLv2 requires that:

    You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License.
    and
    You may not impose any further restrictions on the recipients' exercise of the rights granted herein.

    This is part of the text that creates copyleft: making sure that other terms can't be imposed.

    The problem occurs in interaction with another copyleft license (even a weak one). No two copyleft implementations are isomorphic, so the details of their requirements usually differ. LGPLv3, for its part, doesn't care much about additional restrictions imposed by another license (hence its weak copyleft nature). However, from the point of view of the GPLv2-side observer, any additional requirements, even minor ones imposed by LGPLv3, are merely “further restrictions”.

    This is why copyleft licenses, when they want compatibility, have to explicitly permit relicensing (as LGPLv2 does for GPLv2/GPLv3 and as LGPLv3 does for GPLv3), by allowing you to “upgrade” to another copyleft from the current copyleft. To be clear, from the point of view of the LGPLv3 observer, it has no qualms about “upgrading” from LGPLv3 to GPLv2. The problem occurs from the GPLv2 side, specifically because the (relatively) minor things that LGPLv3 requires are written differently from the similar things asked for in GPLv2.

    It's a common misconception that LGPL has no licensing requirements whatsoever on “works that use the library” (LGPLv2) or the “Application” (LGPLv3). That's not completely true; for example, in LGPLv3 § 4+5 (and LGPLv2.1 § 6+7), you find various requirements regarding licensing of such works. Those requirements aren't strict and are actually very easy to comply with. However, from GPLv2's point of view, they are “further restrictions” since they are not written exactly in the same fashion in GPLv2.

    (BTW, note that LGPLv2.1's compatibility with GPLv2 and/or GPLv3 comes explicitly from LGPLv2.1's Section 3, which allows direct upgrade to GPLv2 or GPLv3, or to any later version published by FSF).

    I hope the above helps some to clarify the GPLv2/LGPLv3 incompatibility.

    Posted on Tuesday 10 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-05-03: Mono Developers Losing Jobs Isn't Good

    Both RMS and I have been critical of Mono, which is an implementation of Microsoft's C# language infrastructure for GNU/Linux systems. (Until recently, at Novell, Miguel de Icaza has led a team of developers working on Mono.)

    Most have probably heard that the Attachmate acquisition of Novell completed last week, and that reports of who will be fired because of the acquisition have begun to trickle in. This evening, it's been reported that the developers working on Mono will be among those losing their jobs.

    In the last few hours, I've seen some folks indicating that this is a good outcome. I worry that this sort of response is somehow inspired by the criticisms and concerns about Mono that software freedom advocates like myself raised. I thus seek to clarify the concerns regarding Mono, and point out why it's unfortunate that these developers won't work on Mono anymore.

    First of all, note that the concerns about Mono are that many Microsoft software patents likely read on any C# implementation, and Microsoft's so-called “patent promise” is not adequate to defend the software freedom community. Anyone who uses Mono faces software patent danger from Microsoft. This is precisely why using Mono to write new applications, targeted for GNU/Linux and other software freedom systems, should be avoided.

    Nevertheless, Mono should exist, for at least one important reason: some developers write lots and lots of new code on Microsoft systems in C#. If those developers decide they want to abandon Microsoft platforms tomorrow and switch to GNU/Linux, we don't want them to change their minds and decide to stay with Microsoft merely because GNU/Linux lacks a C# implementation. Obviously, I'd support convincing those developers to learn another language system so they won't write more code in C#, but initially, the lack of a Free Software C# implementation might impede their switch to Free Software.

    This is a really subtle point that has been lost in the anti-Mono rhetoric. I am not aware of any software freedom advocate who wants Mono to cease to exist. The problem that I and others point out is this: it's dangerous to write new code that relies on technology that's likely patented by Microsoft — a company that's known to shake down or even sue Free-Software-using companies over patents. But the value of Mono (while much more limited than its strongest proponents claim) is still apparent and real: it has a good chance to entice developers living in a purely Microsoft environment to switch to a software freedom environment. It was therefore valuable that Novell was funding developers to work on Mono; it's a bad outcome for software freedom that those developers will lose their jobs. Finally, while perhaps some of those developers might get jobs working on more urgent Free Software tasks, many will likely end up in jobs doing proprietary software development. And developers switching from Free Software work to proprietary software work is surely always a loss for software freedom.

    Update (2011-05-04): ciarang pointed out to me that Mono for Android is proprietary software. As such, it's certainly better if no one is working on that proprietary project anymore. However, I would make an educated guess that most of the employed Mono developers at Novell were working on the Free Software components, so the above analysis in the main blog post still likely applies in most cases.

    Posted on Tuesday 03 May 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

April

  • 2011-04-29: Hopefully My Voice Will Hold Out

    Those of you that follow me on identi.ca already know that I caught a rhinovirus, and was very sick while at the 2011 Linux Collaboration Summit (LCS). Unfortunately, the illness got worse since I “worked through” it while at LCS, and I was too sick to work the entire week afterward (the week of 2011-04-11).

    I realized thereafter that, before the conference, I forgot to even mention online that I was speaking and chairing the legal track at LCS. I can't blame that on the illness, since I should have noted it on my blog the week before.

    So, just barely, I'm posting ahead of time about my appearances this weekend at LinuxFest Northwest (LFNW). I have been asked to give four (!) talks in two days, and unfortunately three are scheduled almost right in a row on one day. (I begged the organizers to fix it so that I was giving two each day, but they'd already locked in the schedule, and even though I told them within hours of the schedule going up, they weren't able to change it.)

    It's a rather amusing story how I ended up giving four talks. Most of you that go to many conferences (and particularly those that speak at them) know that the hardest part of speaking is preparing a new talk. I learned in graduate school that you must practice talks to keep the quality high, and if a talk is new, I usually try to practice twice. That's a pretty large time investment, not to mention the research that has to go into a talk.

    So, what I typically do is have between three and five talks that are “active” on my playlist. I'll keep a talk in rotation for about ten to eighteen months and then discontinue it (unless there's at least 40% new material that I can cycle in, which I sort of consider more-or-less a new talk).

    Often, I'll submit up to four active talks to a given conference. I do this for a couple of reasons. The first and foremost reason is to give choice to the program chairs. If I'm prepared to speak on an array of topics, I'd rather offer up what I can to the chairs so that they can pick the best fit for the track they wish to construct. The second reason, quite frankly, is for when I really want to go to a conference. My employer only funds my travel if I am speaking at a conference, so sometimes, if I really want to go, I have to increase my odds as much as possible that a talk will be accepted. Multiple submissions usually help in this regard (although I can imagine it may hurt one's chances in some rare cases).

    Now, something happened with LFNW that's never happened to me before: the organizers accepted three of my four talk submissions, and wait-listed one of them! I wrote to them immediately telling them I was honored they wanted so many of my talks, and that I was of course happy to give all of them if they really wanted me to. Then, I happened to be working on my talks last weekend when the LFNW organizers were updating the schedule, and suddenly, I reloaded the page and saw they'd added the fourth talk as well!

    So, in the next two days, I'm giving four talks at LFNW! Most of them are talks I've given before (or at least, given substantially similar talks), so I am not worried about preparation (although I may have to skip any social events on Saturday night to practice the three-in-row for Sunday). What I'm worried about is that my voice has just recovered in the last few days from that long-lasting illness, and I am a bit afraid it won't hold out through all four. So, if you're at LFNW and notice I'm more quiet than usual in the hallway conversations (I'm not known for my silence, after all ;), it's because I'm saving my voice for my talks!

    Anyway, here's the rundown of my LFNW talks:

    If you're not able to attend LFNW, I'll try to live-dent as much as I can (when I'm not speaking, which will actually be almost half the conference ;). Watch my identi.ca stream for the #lfnw tag. In particular, I'm really looking forward to Tom “spot” Callaway's talk. I really want to understand his reasoning for not signing the Chromium CLA, since, as Fontana suggests, it might illuminate why developers oppose CLAs for permissively licensed projects.

    By way of previews of what conferences I'll be at soon (I'll try to blog more fully about them a week before they start), I'll be giving keynotes at both Samba XP and LinuxTag in a few weeks (both about GPL compliance). I'll also be speaking about GPL compliance at OSCON in late July, and I might be on a panel at the Desktop Summit. I hope to see many of you at one of these events.

    I should also apologize to the excellent folks who run RMLL (aka the Libre Software Meeting) in France each year. When I came back so ill from LCS and lost that whole week of work because of it, I took a hard look at my 2011 travel schedule and I just had to cut something. I'm sorry it had to be RMLL, but I hope to make it up to them in a future year. (I actually had to do something similar to the LFNW guys in 2010, which I'm about to make up for this weekend!)

    Posted on Friday 29 April 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

March

  • 2011-03-18: Questioning The Original Analysis On The Bionic Debate

    I was hoping to avoid having to comment further on this problematic story. I figured a brief identi.ca statement was enough when it was just a story on the Register. But, it's now hit a major tech news outlet, and given that I'm typically the first person people in the Free Software world come to ask whether something is a GPL violation, I suspect I'll be asked about this soon, so I might as well preempt the questions with a blog post and answer them with this URL.

    In short, the question is: Does Bionic (the Android/Linux default C library developed by Google) violate the GPL by importing “scrubbed” headers from Linux? For those of you seeking the TL;DR version: You can stop now if you expect me to answer this question; I'm not going to. I'm just going to show that the apparent original analysis that started this brouhaha is a speculative hypothesis which would require much more research to amount to anything of note.

    Indeed, the kind of work needed to answer these questions typically requires the painstaking work of a talented developer working very closely with legal counsel. I've done analysis like this before for other projects. The only one I can easily talk about publicly is the ath5k situation. (If you want to hear more on that, you can listen to an old oggcast where I discussed this with Karen Sandler or read papers that were written on the subject back where I used to work.)

    Anyway, most of what's been written about this subject of the Linux headers in Bionic has been poorly drafted speculation. I suppose some will say this blog post is no better, since I am not answering any questions, but my primary goal here is to draw attention that absolutely no one, as near as I can tell, has done the incredibly time consuming work to figure out anything approaching a definitive answer! Furthermore, the original article that launched this debate (Naughton's paper, The Bionic Library: Did Google Work Around the GPL?) is merely a position paper for a research project yet to be done.

    Naughton's full paper gives some examples that would make a good starting point for a complete analysis. It's disturbing, however, that his paper is presented as if it's a complete analysis. At best, his paper is a position statement of a hypothesis that then needs the actual experiment to figure things out. That rigorous research (as I keep reiterating) is still undone.

    To his credit, Naughton does admit that only the kind of analysis I'm talking about would yield a definitive answer. You have to get almost all the way through his paper to get to:

    Determining copyrightability is thus a fact-specific, case-by-case exercise. … Certainly, sorting out what is and isn’t subject to GPLv2 in Bionic would require at least a file-by-file, and most likely line-by-line, analysis of Bionic — a daunting task[.]
    Of course, in that statement, Naughton makes the mistake of subtly including an assumption in the hypothesis: he fails to acknowledge clearly that it's entirely possible the set of GPLv2-covered work found in Bionic could be the empty set; he hasn't shown it's not the empty set (even notwithstanding his very cursory analysis of a few files).

    Yet, even though Naughton admits full analysis (that he hasn't done) is necessary, he nevertheless later makes sweeping conclusions:

    The 750 Linux kernel header files … define a complex overarching structure, an application programming interface, that is thoughtfully and cleverly designed, and almost assuredly protected by copyright.
    Again, this is a hypothesis that would have to be tested and proved with evidence generated by the careful line-by-line analysis Naughton himself admits is necessary. Yet, he doesn't acknowledge that fact in his conclusions, leaving his readers (and IMO he's expecting to dupe lots of readers unsophisticated on these issues) with the impression he's shown something he hasn't. For example, one of my first questions would be whether or not Bionic uses only parts of Linux headers that are required by specification to write POSIX programs, a question that Naughton doesn't even consider.

    Finally, Naughton moves from the merely shoddy analysis to completely alarmist speculation with:

    But if Google is right, if it has succeeded in removing all copyrightable material from the Linux kernel headers, then it has unlocked the Linux kernel from the restrictions of GPLv2. Google can now use the “clean” Bionic headers to create a non-GPL’d fork of the Linux kernel, one that can be extended under proprietary license terms. Even if Google does not do this itself, it has enabled others to do so. It also has provided a useful roadmap for those who might want to do the same thing with other GPLv2-licensed [sic] programs, such as databases.

    If it turns out that Google has succeeded in making sure that the GPLv2 does not apply to Bionic, then Google's success is substantially more narrow than Naughton suggests. The success would merely be the extraction of the non-copyrightable facts that any C library needs to know about Linux to make a binary run when Linux happens to be the kernel underneath. Now, it should be duly noted that two libraries under the LGPL have already implemented that (namely, glibc and uClibc — the latter of which Naughton's cursory research apparently didn't even turn up). As it stands, anyone who wants to write user-space applications on a Linux-based system already can; there are multiple C library choices available under the weak copyleft license, LGPL. What Google, for its part, believes it has succeeded at is making a permissively licensed third alternative, an outcome that would be no surprise to those of us who have seen something like it done twice before.

    In short, everyone opining here seems to be conflating a lot of issues. There are many ways to interface with Linux. Many people, including me, believe quite strongly that there is no way to make a subprogram in kernel space (such as a device driver) without the terms of the GPLv2 applying to it. But writing a device driver is a specialized task that's very different from what most Linux users do. Most developers who “use Linux” — by which they typically mean write a user-space program that runs on a GNU/Linux operating system — have (at most) weak copyleft (LGPL) terms to follow due to glibc or uClibc. I admit that I sometimes feel chagrin that proprietary applications can be written for GNU/Linux (and other Linux-based) systems, but that was a strategic decision that RMS made (correctly) at the start of the GNU project, one that the Linux project, for its part, has also always sought.

    I'm quite sure no one — including hard-core copyleft advocates like me — expects or seeks the GPLv2 terms to apply to programs that interface with Linux solely as user-space programs running on an operating system that uses Linux as its kernel. Thus, I'd guess that even if it turned out that Google made some mistakes in this regard for Bionic, we'd all work together to rectify those mistakes so that the outcome everyone intended could occur.

    Moreover, to compare the specifics of this situation to other types of so-called “copyleft circumvention techniques” is just link-baiting that borders on trolling. Google wasn't seeking to circumvent the GPL at all; they were seeking to write and/or adapt a permissively licensed library that replaced an LGPL'd one. I'm of course against that task on principle (I think Google should have just used glibc and/or uClibc and required LGPL compliance by applications). But, to deny that it's possible to rewrite a C library for Linux under a license that isn't GPLv2 would also immediately imply the (incorrect) conclusion that uClibc and glibc are covered by the GPLv2, and we are all quite sure they aren't; even Naughton himself admits that (regarding glibc).

    Google may have erred; no one actually knows for sure at this time. But the task they sought to do has been done before and everyone intended it to be permitted. The worst mistake of which we might ultimately accuse Google is inadvertently taking a copyright-infringing short-cut. If someone actually does all the research to prove that Google did so, I'd easily offer a 1,000-to-1 bet to anyone that such a copyright infringement could be cleared up easily, that Bionic would still work as a permissively licensed C library for Linux, and the implications of the whole thing wouldn't go beyond: “It's possible to write your own C library for Linux that isn't covered by the GPLv2” — a fact which we've all known for a decade and a half anyway.

    Update (2011-03-20): Many people, including slashdot, have been linking to this comment by RMS on LKML about .h files. It's important to look carefully at what RMS is saying. Specifically, RMS says that sometimes #include'ing a .h file creates a copyright derivative work, and sometimes it doesn't; it depends on the details. Then, RMS goes on to discuss some rules of thumb that can help determine the outcome of the question. The details are what matters; and those are, as I explain in the main post above, what requires careful analysis done jointly and in close collaboration between a developer and a lawyer. There is no general rule of thumb that always immediately leads one to the right answer on this question.

    Posted on Friday 18 March 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-03-11: Thoughts On GPL Compliance of Red Hat's Linux Distribution

    Today, I was interviewed by Sam Varghese about whether Red Hat's current distribution policies for the kernel named Linux are GPL-compliant. You can read there that, AFAICT, they are; I have been presented with no evidence to the contrary.

    Last week, when the original story broke, I happened to be at the Linux Foundation's End User Summit, and I had a rather extensive discussion with attendees there about this issue, including Jon Corbet, who wrote an article about it. In my mind, the issue was settled after that discussion, and I had actually put it out of my mind until I realized (when Varghese contacted me for an interview) that people had conflated my previous blog post from last weekend as being a comment specifically on the kernel distribution issue. (I'd been otherwise busy this week, and thus hadn't yet seen Jake Edge's follow-up article on LWN, to which I respond in detail below.)

    (BTW, on this issue please note that my analysis below is purely a GPLv2 analysis. GPLv3 analysis may be slightly different here, but since, for the moment, the issue relates to the kernel named Linux, which is currently licensed GPLv2-only, discussing GPLv3 in this context is a bit off-topic.)

    Preferred Form For Modification

    I have been a bit amazed to watch that so much debate on this has happened around the words preferred form of the work for making modifications to it from GPLv2§3. In particular, I can't help chuckling at the esoteric level to which many people believe they can read these words. I laugh to myself and think: not one of these people commenting on this has ever in their life tried to actually enforce the GPL.

    To be a bit less sardonic, I agree with those who are saying that the preferred form for modification should be the exact organization of the bytes as we would all like to have them, to make our further work on the software as easy as possible. But I always look at the GPL with an enforcer's eye, and I have to say this wish is one that won't be fulfilled all the time.

    The way preferred form for modification ends up working out in GPLv2 enforcement is something more like: you must provide complete sources that a sufficiently skilled software developer can actually make use of without any reverse engineering. Thus, it does clearly prohibit things like the source-on-a-cuneiform-tablet scenario that Branden mentions. (BTW, I wonder if Branden knows we GPL geeks started using that as an example circa 2001.) GPLv2 also certainly prohibits the source obfuscation tools that Jake Edge mentions. But, suppose you give me a nice .tar.bz2 file with all the sources organized neatly in mundane ASCII files, which I can open up with tar xvf, cd in, type make and get a binary out of those sources that's functional and feature-equivalent to your binaries, and then I can type make install and that binary is put into the right place on the device where your binary runs. I reboot the device, and I'm up and running with my newly compiled version rather than the binary you gave me. I'd call that scenario easily GPLv2-compliant.
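
    (To make that test concrete, here's a minimal sketch of the workflow I have in mind; sources.tar.bz2 is a made-up file name standing in for whatever source release a distributor provides:)

     $ tar xvf sources.tar.bz2    # unpack the complete source release
     $ cd sources/
     $ make                       # should yield a binary feature-equivalent to the shipped one
     $ make install               # should place that binary where the device runs it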

    Specifically, ease of upstream contribution has almost nothing to do with GPL compliance. Whether you get some software in a form the upstream likes (or can easily use) is more or less irrelevant to the letter of the license. The compliance question always is: did their distribution meet the terms required by the GPL?

    Now, I'm talking above about the letter of the license. The spirit of the license is something different. GPL exists (in part) to promote collaboration, and if you make it difficult for those receiving your distributions to easily share and improve the work with a larger community, it's still a fail (in a moral sense), but not a failure to comply with the GPL. It's a failure to treat the community well. Frankly, no software license can effectively prevent annoying and uncooperative behavior from those who seek to only follow the exact letter of the rules.

    Prominent Notices of Changes

    Meanwhile, what people are actually complaining about is that Red Hat's RHEL customers have access to better meta-information about why various patches were applied. Some have argued (quite reasonably) that this information is required under GPLv2§2(a), but usually that section has been interpreted to allow a very terse changelog. Corbet's original article mentioned that the Red Hat distribution of the kernel named Linux contains no changelog. I see why he said that, because it took me some time to find it myself (and an earlier version of this very blog post was therefore incorrect on that point), but the src.rpm file does have what appears to be a changelog embedded in the kernel.spec file. There's also a simple summary in the release notes, found in a separate src.rpm (in a file called kernel.xml). This material seems sufficient to me to meet letter-of-the-license compliance with GPLv2§2(a)'s requirements. I, too, wish the log were a bit more readable and organized, but, again, the debate isn't about whether there's optimal community cooperation going on, but rather whether this distribution complies with the GPL.
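
    (If you want to check for yourself, something like the following should work; the src.rpm file name here is a stand-in, since it varies by release:)

     $ # Unpack the source RPM into the current directory.
     $ rpm2cpio kernel-VERSION.src.rpm | cpio -idmv
     $ less kernel.spec           # the embedded changelog lives in the %changelog section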

    Relating This to the RHEL Model

    My previous blog post, while focused on answering the question of whether or not Fedora is somehow inappropriately exploited (via, say, proprietary relicensing) to build the RHEL business model, also addressed the issue of whether RHEL's business model is GPL-compliant. I didn't think about that blog post in connection with the kernel distribution issue, but even considering that now, I still have no reason to believe RHEL's business model is non-compliant. (I continue to believe it's unfriendly, of course.)

    Varghese directly asked me if I felt the if you exercise GPL rights, then your money's no good here business model is an additional restriction under GPLv2. I don't think it is, and said so. Meanwhile, I was a bit troubled by the conclusions Jake Edge came to regarding this. First of all, I haven't forgotten about Sveasoft (geez, who could?), but that situation came up years after the RHEL business model started, so Jake's implication that Sveasoft “tried this model first” would be wrong even if Sveasoft had an identical business model.

    However, the bigger difficulty in trying to use the Sveasoft scenario as precedent (as Jake hints we should) is not only because of the “link rot” Jake referenced, but also because Sveasoft frequently modified their business model over a period of years. There's no way to coherently use them as an example for anything but erratic behavior.

    The RHEL model, by contrast, AFAICT, has been consistent for nearly a decade. (It was once called the “Red Hat Advanced Server”, but the business model seems to be the same). Notwithstanding Red Hat employees themselves, I've never talked to anyone who particularly likes the RHEL business model or thinks it is community-friendly, but I've also never received a report from someone that showed a GPL violation there. Even the “report” that first made me aware of the RHEL model, wherein someone told me: I hired a guy to call Red Hat for service all day every day for eight hours a day and those jerks at Red Hat said they were going to cancel my contract didn't sound like a GPL violation to me. I'd cancel the guy's contract, too, if his employee was calling me for eight hours a day straight!

    More importantly, though, I'm troubled that Jake indicates the RHEL model requires people to trade their GPL rights for service, because I don't think that's accurate. He goes further to say that terminat[ing] … support contract for users that run their own kernel … is another restriction on exercising GPL rights; that's very inaccurate. Refusing to support software that users have modified is completely different from restricting their right to modify. Given that the GPL was designed by a software developer (RMS), I find it particularly unlikely that he would have intended GPL to require distributors to provide support for any conceivable modification. What software developer wants a license that puts that obligation hanging over their head?

    The likely confusion here is using the word “restriction” instead of “consequence”. It's undeniable that your support contractors may throw up their hands in disgust and quit if you modify the software in some strange way and still expect support. It might even be legitimately called a consequence of choosing to modify your software. But, you weren't restricted from making those modifications — far from it.

    As I've written about before, I think most work should always be paid by the hour anyway, which is for me somewhat a matter of personal principle. I therefore always remain skeptical of any software business model that isn't structured around the idea of a group of people getting paid for the hours that they actually worked. But, it's also clear to me that the GPL doesn't mandate that “hourly work contracts” are the only possible compliant business model; there are clearly others that are GPL-compliant, too. Meanwhile, it's also trivial to invent a business model that isn't GPL-compliant — I see such every day, on my ever-growing list of GPL-violating companies who sell binary software with no source (nor offer therefor) included. I do find myself wishing that the people debating whether the exact right number of angels are dancing on the head of this particular GPL pin would instead spend some time helping to end the flagrant, constant, and obvious GPL violations with which I spend much time dealing each week.

    On that note, if you ever think that someone is violating the GPL, (either for an esoteric reason or a mundane one), I hope that you will attempt to get it resolved, and report the violation to a copyright holder or enforcement agent if you can't. The part of this debate I find particularly useful here is that people are considering carefully whether or not various activities are GPL compliant. To quote the signs all over New York City subways, If you see something, say something. Always report suspicious activity around GPL software so we find out together as a community if there's really a GPL violation going on, and correct it if there is.

    Posted on Friday 11 March 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-03-05: The Slur “Open Core”: Toward More Diligent Analysis

    I certainly deserve some of the blame, and for that I apologize: the phrase “Open Core” has apparently become a slur word, used by those who wish to discredit the position of someone else without presenting facts. I've done my best when using the term to also give facts that backed up the claim, but even so, I finally abandoned the term back in November 2010, and I hope you will too.

    The story, from my point of view, began seventeen months ago, when I felt that “Open Core” was a definable term and that the behavior it described was a dangerous practice. I gave it the clear definition that I felt reflected problematic behavior, as I wrote at the time:

    Like most buzzwords, Open Core has no real agreed-upon meaning. I'm using it to describe a business model whereby some middleware-ish system is released by a single, for-profit entity copyright holder, who requires copyright-assigned changes back to the company, and that company sells proprietary add-ons and applications that use the framework.

    Later — shortly after I pointed out Mark Shuttleworth's fascination with and leanings towards this practice — I realized that it was better to use the preexisting, tried-and-true term for the practice: “proprietary relicensing”. I've been pretty consistent in avoiding the term “Open Core” since then. I called on Shuttleworth to adopt the FSF's recommendations to show Canonical, Ltd. isn't seeking proprietary relicensing and left the whole thing at that. (Shuttleworth, of course, has refused to even respond, BTW.)

    Sadly, it was too late: I'd helped create a monster. A few weeks later, Alexandre Oliva (whose positions on the issue of proprietary software inside the kernel named Linux I definitely agree with) took it a step too far and called the kernel named Linux an “Open Core” project. Obviously, Linux developers don't and can't engage in proprietary relicensing; some just engage in a “look the other way” mentality with regard to proprietary components inside Linux. At the time, I said that the term “Open Core” was clearly just too confusing to analyze a real-world licensing situation.

    So, I just stopped calling things “Open Core”. My concerns currently are regarding the practice of collecting copyright assignments to copyleft software and engaging in proprietary relicensing activity, and I've focused on advocating against that specific practice. That's what I've criticized Canonical, Ltd. for doing — both with their existing copyright assignment policies and with their effort to extend those policies community-wide with the manipulatively named “Project Harmony”.

    Shuttleworth, for his part, is now making use of the slur phrase I'd inadvertently helped create. Specifically, a few days ago, Shuttleworth accused Fedora of being an “Open Core” product.

    I've often said that Fedora is primarily a Red Hat corporate project (and it's among the reasons that I run Debian rather than Fedora). However, since “Open Core” clearly still has no agreed-upon meaning, when I read what Shuttleworth said, I considered the question of whether his claim had any merit (using the “Open Core” definition I used myself before I abandoned the term). Put simply, I asked myself the question: Does Red Hat engage in “proprietary relicensing of copyleft software with mandatory copyright assignment or non-copyleft CLA” with Fedora?

    Fact is, despite having serious reservations about how the RHEL business model works, I have no evidence to show that Red Hat requires copyright assignment or a mandatory non-copyleft CLA on any copyleft project other than Cygwin. So, if Shuttleworth had said: Cygwin is Red Hat's Open Core product, I would still encourage him that we should all now drop the term “Open Core”, but I would also agree with him that Cygwin is a proprietary-relicensed product and that we should urge Red Hat to abandon that practice. (Update: It's also been noted by Fontana on identi.ca (although the statement was subsequently deleted by the user) that some JBoss projects require permissive CLAs but license back out under LGPL, so that would be another example.)

    But does Fedora require contributors to assign copyright or do non-copyleft licensing? I can't find the evidence, but there are some confusing facts. Fedora has a Contributor Licensing Agreement (CLA), which, in §1(D), clearly allows contributors to choose their own license. If the contributor accepts all the defaults on the existing Fedora CLA, the contributor gives a permissive license to the contribution (even for copyleft projects). Fortunately, though, the author can easily copyleft a work under the agreement, and it is still accepted by Fedora. (Contrast this with Canonical, Ltd.'s mandatory copyright assignment form, which explicitly demands Canonical, Ltd.'s power for proprietary relicensing.)

    While Fedora's current CLA does push people toward permissive licensing of copylefted works, the new draft of the Fedora CLA is much clearer on this point (in §2). In other words, the proposed replacement closes this bug. It thus seems to me Red Hat is looking to make things better, while Canonical, Ltd. hoodwinks us and is manufacturing consent in Project “Harmony” around a proprietary copyright-grab by for-profit corporations. When I line up the two trajectories, Red Hat is slowly getting better, and Canonical, Ltd. is quickly getting worse. Thus, Shuttleworth, sitting in his black pot, clearly has no right to say that the slightly brown kettle sitting next to him is black, too.

    It could be that Shuttleworth is actually thinking of the RHEL business model itself, which is actually quite different from proprietary relicensing. I do have strong, negative opinions about the RHEL business model; I have long called it the if you like copyleft, your money is no good here business model. It's a GPL-compliant business model merely because the GPL is silent on whether or not you must keep someone as your customer. Red Hat tells RHEL customers that if they choose to exercise their rights under GPL, then their support contract will be canceled. I've often pointed out (although this may be the first time publicly on the Internet) that Red Hat found a bright line of GPL compliance, walked right up to it, and was the first to stake out a business model right on the line. (I've been told, though, that Cygnus experimented with this business model before being acquired by Red Hat.) This practice is, frankly, barely legitimate.

    Ironically, RMS and I used to say that Canonical, Ltd.'s new business model of interest — proprietary relicensing (once trailblazed by MySQL AB) — was also barely legitimate. In one literal sense, that's still true: it's legitimate in the sense that it doesn't violate GPL. In the sense of software freedom morality, I think proprietary relicensing harms the Free Software community too much, and that it was therefore a mistake to ever tolerate it.

    As for RHEL's business model, I've never liked it, but I'm still unsure (even ten years after its inception) about its software freedom morality. It doesn't seem as harmful as proprietary relicensing. In proprietary relicensing, those mistreated under the model are the small business and individual developers who are pressured to give up their copyleft rights lest their patches be rejected or rewritten. The small entities are left to choose between maintaining a fork or giving over proprietary corporate control of the codebase. In RHEL's business model, by contrast, the mistreated entities are large corporations that are forced to choose between exercising their GPL rights and losing access to the expensive RHEL support. It seems to me that the RHEL model is not immoral, but I definitely find it unfriendly and inappropriate, since it says: if you exercise software freedom, you can't be our customer.

    However, when we analyze these models that occupy the zone between license legitimacy and software freedom morality, I think I've learned from the mistake of using slur phrases like “Open Core”. From my point of view, most of these “edge” business models have ill effects on software freedom and community building, and we have to examine their nuances mindfully and gauge carefully the level of harm caused. Sometimes, over time, that harm shows itself to be unbearable (as with proprietary relicensing). We must stand against such models and meanwhile continue to question the rest with precise analysis.

    Posted on Saturday 05 March 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-03-01: Software Freedom Is Elementary, My Dear Watson.

    I've watched the game show, Jeopardy!, regularly since its Trebek-hosted relaunch on 1984-09-10. I even remember distinctly the Final Jeopardy question that night as This date is the first day of the new millennium. At the age of 11, I got the answer wrong, falling for the incorrect What is 2000-01-01?, but I recalled this memory eleven years ago during the debates regarding when the millennium turnover happened.

    I had periods of life where I watched Jeopardy! only rarely, but in recent years (as I've become more of a student of games (in part, because of poker)), I've watched Jeopardy! almost nightly over dinner with my wife. I've learned that I'm unlikely to excel as a Jeopardy! player myself because (a) I read slow and (b) my recall of facts, while reasonably strong, is not instantaneous. I thus haven't tried out for the show, but I'm nevertheless a fan of strong players.

    Jeopardy! isn't my only spectator game. Right after college, even though I'm a worse-than-mediocre chess player, I watched with excitement as Deep Blue played and defeated Kasparov. Kasparov has disputed the results and how much humans were actually involved, but even so, such interference was minimal (between matches) and the demonstration still showed computer algorithmic mastery of chess.

    Of course, the core algorithms that Deep Blue used were well known and often implemented. I learned α-β pruning in my undergraduate AI course and it was clear that a sufficiently fast computer, given a few strong heuristics, could beat most any full information game with a reasonable branching factor. And, computers typically do these days.

    I suppose I never really thought about the issues of Deep Blue being released as Free Software. First, because I was not as involved with Free Software then as I am now, and also, as near as anyone could tell, Deep Blue's software was probably not useful for anything other than playing chess, and its primary power was in its ability to go very deep (hence the name, I guess) in the search tree. In short, Deep Blue was primarily a hardware, not a software, success story.

    It was, nevertheless, impressive, and last month I saw the next installment in this IBM story. I watched with interest as IBM's Watson defeated two champion Jeopardy! players. Ken Jennings, for one, even welcomed our new computer overlords.

    Watson beating Jeopardy! is, frankly, a lot more innovative than Deep Blue beating chess. Most don't know this about me, but I came very close to focusing my career on PhD work in Natural Language Processing; I believe fundamentally it's the area of AI most in need of attention and research. Watson is a shining example of success in modern NLP, and I actually believe some of the IBM hype about how Watson's technology can be applied elsewhere, such as medical information systems. Indeed, IBM has announced a deal with Columbia University Medical Center to adapt the system for medical diagnostics. (Perhaps Watson's next TV appearance will be on House.)

    This all sounds great to most people, but my real concern is the freedom of the software. We've shown in the software freedom community that to advance software and improve it, sharing the software is essential. Technology locked up in a vaulted cave doesn't allow all the great minds to collaborate. Just as we don't lock up libraries so that only the gilded overlords have access, nor should the best software technology be locked up in proprietariness.

    Indeed, Eric Brown, at his Linux Foundation End User Linux Summit talk, told us that Watson relied heavily on publicly available software freedom codebases, such as GNU/Linux, Hadoop, and other FLOSS components. They clearly couldn't have done their work without building upon the work we shared with IBM, yet IBM apparently ignores its moral obligation to reciprocate.

    So, I just point-blank asked Brown why Watson is proprietary. Of course, I long ago learned never to ask a confrontational question from the crowd at a technical talk without knowing what the answer is likely to be. Brown answered in the way I expected: We're working with Universities to provide a framework for their research. I followed up, asking when he would actually release the sources and what the license would be. He dodged the question, and instead speculated about what licenses IBM sometimes likes to use when it does choose to release code; he did not indicate whether Watson's sources will ever be released. In short, the answer from IBM is clear: Watson's general ideas will be shared with academics, but the source code won't be.

    This point is precisely one of the reasons I didn't pursue a career in academic Computer Science. Since most jobs — including professorships at Universities — for PhDs in Computer Science require that any code written be kept proprietary, most Computer Science researchers have convinced themselves that code doesn't matter; only publishing ideas does. This belief is so pervasive that I knew something like this would be Brown's response to my query. (I was even so sure that I wrote almost this entire blog post before I asked the question.)

    I'd easily agree that publishing papers is better than the technology being only a trade secret. At least we can learn a little bit about the work. But in all but the pure theoretical areas of Computer Science, code is written to exemplify, test, and exercise the ideas. Merely publishing papers and not the code is akin to a chemist publishing final results but nothing about the methodologies or raw data. Science, in such cases, is unverifiable and unreproducible. If we accepted such in fields other than CS, we'd have accepted the idea that cold fusion was discovered in 1989.

    I don't think I'm going to convince IBM to release Watson's sources as Free Software. What I do hope is that perhaps this blog post convinces a few more people that we just shouldn't accept that Computer Science is advanced by researchers who give us flashy demos and code-less research papers. I, for one, welcome our computer overlords…but only if I can study and modify their source code.

    Posted on Tuesday 01 March 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

February

  • 2011-02-15: Everyone in USA: Comment against ACTA today!

    In the USA, the deadline for comments on ACTA is today (Tuesday 15 February 2011) at 17:00 US/Eastern. It's absolutely imperative that every USA citizen submit a comment on this. The Free Software Foundation has details on how to do so.

    ACTA is a dangerous international agreement that would establish additional criminal penalties, promulgate DMCA/EUCD-like legislation around the world, and otherwise extend copyright law into places it should not go. Copyright law is already much stronger than anyone needs.

    On a meta-point, it's extremely important that USA citizens participate in comment processes like this. The reason that things like ACTA can happen in the USA is because most of the citizens don't pay attention. By way of hyperbolic fantasy, imagine if every citizen of the USA wrote a letter today to Mr. McCoy about ACTA. It'd be a news story on all the major news networks tonight, and would probably be in the headlines in print/online news stories tomorrow. Our whole country would suddenly be debating whether or not we should have criminal penalties for copying TV shows, and whether breaking a DVD's DRM should be illegal.

    Obviously, that fantasy won't happen, but getting from where we are to that wonderful fantasy is actually linear; each person who writes to Mr. McCoy today makes a difference! Please take 15 minutes out of your day today and do so. It's the least you can do on this issue.

    The Free Software Foundation has a sample letter you can use if you don't have time to write your own. I wrote my own, giving some of my unique perspective, which I include below.

    The automated system on regulations.gov assigned the comment below the tracking number 80bef9a1 (cool, it's in hex! :)

    Stanford K. McCoy
    Assistant U.S. Trade Representative for Intellectual Property and Innovation
    Office of the United States Trade Representative
    600 17th St NW
    Washington, DC 20006

    Re: ACTA Public Comments (Docket no. USTR-2010-0014)

    Dear Mr. McCoy:

    I am a USA citizen writing to urge that the USA not sign ACTA. Copyright law already reaches too far. ACTA would extend problematic, overly-broad copyright rules around the world and would increase the already inappropriate criminal penalties for copyright infringement here in the USA.

    Both individually and as an agent of my employer, I am regularly involved in copyright enforcement efforts to defend the Free Software license called the GNU General Public License (GPL). I therefore think my perspective can be uniquely contrasted with other copyright holders who support ACTA.

    Specifically, when engaging in copyright enforcement for the GPL, we treat it as purely a civil issue, not a criminal one. We have been successful in defending the rights of software authors in this regard without the need for criminal penalties for the rampant copyright infringement that we often encounter.

    I realize that many powerful corporate copyright holders wish to see criminal penalties for copyright infringement expanded. As someone who has worked in the area of copyright enforcement regularly for 12 years, I see absolutely no reason that any copyright infringement of any kind ever should be considered a criminal matter. Copyright holders who believe their rights have been infringed have the full power of civil law to defend their rights. Using the power of government to impose criminal penalties for copyright infringement is an inappropriate use of government to interfere in civil disputes between its citizens.

    Finally, ACTA would introduce new barriers for those of us trying to change our copyright law here in the USA. The USA should neither impose its desired copyright regime on other countries, nor should the USA bind itself in international agreements on an issue where its citizens are in great disagreement about correct policy.

    Thank you for considering my opinion, and please do not allow the USA to sign ACTA.

    Sincerely,
    Bradley M. Kuhn

    Posted on Tuesday 15 February 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

January

  • 2011-01-23: A Brief Tutorial on a Shared Git Repository

    A while ago, I set up Git for a group privately sharing the same central repository. Specifically, this is a tutorial for those who want a Git setup that is a little bit like an SVN repository: a central repository where all the branches that matter are published in one place. I found this file today floating in a directory of “things I should publish at some point”, so I decided just to put it up; every time I came across the file, it reminded me that it's really morally wrong (IMO) to keep generally useful technical information private, even when it's only laziness causing it.

    Before you read this, note that most developers don't use Git this way, particularly with the advent of shared hosting facilities like Gitorious, since systems like Gitorious solve the sorts of problems that this tutorial addresses. When I originally wrote this (more than a year ago), the only well-known project I found using a system like this was Samba; I haven't seen a lot of other projects that do this. Indeed, this process is not really what Git is designed to do, but sometimes groups that are used to SVN expect a “canonical repository” that has all the contents of the shared work under one proverbial roof, and so they set up a “one true Git repository” for the project from which everyone clones.

    Thus, this tutorial is primarily targeted at a user mostly familiar with an SVN workflow who has ssh access to host.example.org, where a Git repository (usually writable by multiple people) lives in the directory /git/REPOSITORY.git/.
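
    (This tutorial assumes that shared repository already exists on the server. In case you need to create one, here is a minimal sketch, assuming a reasonably recent Git client and a group-writable directory on the server; this is an illustrative aside rather than part of my original notes:)

      $ ssh host.example.org
      $ mkdir -p /git/REPOSITORY.git && cd /git/REPOSITORY.git
      $ git init --bare --shared=group   # bare: no working checkout; shared: group-writable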

    Ultimately, the stuff that I've documented herein basically fills in the gaps that I found when reading the following tutorials:

    So, here's my tutorial, FWIW. (I apologize that I make the mortal sin of tutorial writing: I drift wildly between second-person-singular, first-person-plural, and passive-voice third-person. If someone sends me a patch to the HTML file that fixes this, I'll fix it. :)

    Initial Setup

    Before you start using git, you should run these commands to let it know who you are so your info appears correctly in commit logs:

     $ git config --global user.email Your.Email@example.com
     $ git config --global user.name "Your Real Name"
    

    Examining Your First Clone

    To get started, first we clone the repository:

      $ git clone ssh://host.example.org/git/REPOSITORY.git/
    

    Now, note that Git almost always operates in terms of branches. Unlike Subversion, Git's branches are first-class citizens and most operations in Git operate around a branch. The default branch is often called “master”, although I tend to avoid using the master branch for much, mainly because everyone who uses git has a different perception of what the master branch should embody. Therefore, giving all your branches more descriptive names is helpful. But, when you first import something into git (for example, from existing Subversion trees), everything from Subversion's trunk is thrown on the master branch.

    So, we take a look at the result of that clone command. We have a new directory, called REPOSITORY, that contains a “working checkout” of the repository, and under that there is one special directory, REPOSITORY/.git/, which is a full copy of the repository. Note that this is not like Subversion, where what you have on your local machine is merely one view of the repository. With Git, you have a full copy of everything. However, an interesting thing has been done on your copy with the branches. You can take a look with these commands:

      $ git branch
      * master
      $ git branch -r
      origin/HEAD
      origin/master
    

    The first list shows the branches that are personal and local to you. (By default, git branch uses the -l option, which shows you only “local” branches; -r means “remote” branches. You can also use -a to see all of them.) Unless you take action to publish your local branches in some way, they will be your private area to work in and live only on your computer. (And be aware: they are not backed up unless you back them up!) The remote ones, which all start with “origin/”, track the progress on the shared repository.

    (Note the term “origin” is a standard way of referring to “the repository from whence you cloned”, and origin/BRANCH refers to “BRANCH as it looks in the repository from whence you cloned”. However, there is nothing magical about the name “origin”. It's set up to DTRT in your WORKING-DIRECTORY/.git/config file, and the clone command set it all up for you, which is why you have them now.)
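
    (To make that concrete: here is roughly what the clone command wrote into WORKING-DIRECTORY/.git/config for the setup above. The exact contents vary a bit between Git versions, so treat this as an illustrative sketch rather than gospel:)

      [remote "origin"]
          url = ssh://host.example.org/git/REPOSITORY.git/
          fetch = +refs/heads/*:refs/remotes/origin/*
      [branch "master"]
          remote = origin
          merge = refs/heads/master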

    Get to Work

    The canonical way to “get moving” with a new task in Git is to somehow create a branch for it. Branches are designed to be cheap and quick to create so that users will not be shy about creating a new one. Naming conventions are your own, but generally I like to call a branch USERNAME/TASK when I'm still not sure exactly what I'll be doing with it (i.e., who I will publish it to, etc.) You can always merge it back into another branch, or copy it to another branch (perhaps using a more formal name) later.

    Where do you Start Your Branch From?

    Once a repository exists, each branch in the repository comes from somewhere — it has a parent. These relationships help Git know how to easily merge branches together. So, the most typical procedure for starting a new branch of your own is to begin with an existing branch. The git checkout command is the easiest way to do this:

       $ git checkout -b USERNAME/feature origin/master
    

    In this example, we've created our own local branch, called USERNAME/feature, and it's started from the current state of origin/master. When you are getting started, you will usually want to base your new branches off of ones that exist on the origin. This isn't a rule; it's just less confusing for a newbie if all your branches have a parent revision that lives on the server.

    Now, it's important to note here that no branch stands still. It's best to think about a branch as a “moving pointer” to a linked list of some set of revisions in the repository.

    Every revision stored in Git, local or remote, has a SHA1 identifier, which is computed from the revisions before it plus the new patch that the revision applied.

    Meanwhile, the only two substantive differences between one of these SHA1 identifiers and an actual branch are that (a) Git keeps changing which identifier the branch refers to as new commits come in (aka it moves the branch's HEAD), and (b) Git keeps track of the history of identifiers the branch previously referred to.

    So, above, when we asked git checkout to create a new branch called USERNAME/feature based on origin/master, the two important things to realize are that (a) your new branch has its HEAD pointing at the same revision that is currently the HEAD of origin/master, and (b) you got a new list on which to start adding revisions in the new branch.

    We didn't have to use a branch for that. We could have simply started our branch from any old SHA1 of any revision. We happened to want to declare a relationship with the master branch on the server in this case, but we could have easily picked any SHA1 from our git log and used that one.
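
    (For example, with a hypothetical history, where the SHA1s and log messages below are made up purely for illustration:)

      $ git log --oneline
      9b1c0c9 Fix a bug in the new feature
      ded2fb3 First cut at the new feature
      ...
      $ git checkout -b USERNAME/experiment ded2fb3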

    Do Not Fear the checkout

    Every time you run a git checkout SOMETHING command, your entire working directory changes. This normally scares Subversion users; it certainly scared me the first time I used git checkout SOMETHING. But, the only reason it is scary is because svn switch, which is the roughly analogous command in the Subversion world, so often doesn't do something sane with your working copy. By contrast, switching branches and changing your whole working directory is a common occurrence with git.

    Note, however, that you cannot do git checkout with uncommitted changes in your directory (which, BTW, also makes it safer than svn switch). However, don't be too Subversion-user-like and therefore afraid to commit things. Remember, with Git (and unlike with Subversion), committing and publishing are two different operations. You can commit to your heart's content on local branches and merge or push into public branches later. (There are even commands to squash many commits into one before putting it on a public branch, in case you don't want people to see all the intermediate goofiness you might have done. This is why, BTW, many Git users commit as often as an SVN user would save in their editors.)
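
    (In case you're curious, here's one hedged sketch of such squashing; git merge --squash stages everything from the branch so that a single new commit holds all the intermediate work:)

      $ git checkout master
      $ git merge --squash USERNAME/feature
      $ git commit   # one commit now holds all the intermediate goofiness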

    However, if you must switch checkouts but really do fear making commits, there is a tool for you: look into git stash.
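
    (A minimal sketch of that stash workflow, in case it's not obvious:)

      $ git stash            # squirrels away your uncommitted changes
      $ git checkout master
      # ... do whatever you need on master ...
      $ git checkout USERNAME/feature
      $ git stash pop        # reapplies the stashed changes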

    Share with the Group

    Once you've been doing some work, you'll end up with some useful work finished on a USERNAME/feature branch. As noted before, this is your own private branch. You probably want to use the shared repository to make your work available to others.

    When using a shared Git repository, there are two ways to share your branches with your colleagues. The first procedure is when you simply want to publish directly on an existing branch. The second is when you wish to create your own branch.

    Publishing to Existing Branch

    You may choose to merge your work directly into a known branch on the remote repository. That's a viable option, certainly, but often you want to make it available on a separate branch for others to examine, even before you merge it into something like the master branch. We discuss the slightly more complicated new branch publication next, but for the moment, we can consider the quicker process of publishing to an existing branch.

    Let's consider when we have work on USERNAME/feature and we would like to make it available on the master branch. Make sure your USERNAME/feature branch is clean (i.e., all your changes are committed).

    The first thing you should verify is that you have what I call a “local tracking branch” (this is my own term that I made up, I think; you won't likely see it in other documentation) that is tied directly, with the same name, to the origin. This is not completely necessary, but it makes it much more convenient to keep track of what you are doing. To check, do a:

       $ git branch -a
       * USERNAME/feature
         master
         origin/master
    

    In the list, you should see both master and origin/master. If you don't have that, you should create it with:

       $ git checkout -b master origin/master
    

    So, either way, you want to be on the master branch. To get there, if it already existed, you can run:

       $ git checkout master
    

    And you should be able to verify that you are now on master with:

       $ git branch
       * master
       ...
    

    Now, we're ready to merge in our changes:

       $ git merge USERNAME/feature
       Updating ded2fb3..9b1c0c9
       Fast forward
       FILE ...
       N files changed, X insertions(+), Y deletions(-)
    

    If you don't get any message about conflicts, everything is fine. Your changes from USERNAME/feature are now on master. Next, we publish it to the shared repository:

      $ git push
      Counting objects: N, done.
      Compressing objects: 100% (A/A), done.
      Writing objects: 100% (A/A), XXX bytes, done.
      Total G (delta T), reused 0 (delta 0)
      refs/heads/master: IDENTIFIER_X -> IDENTIFIER_Y
      To ssh://host.example.org/git/REPOSITORY.git
       X..Y  master -> master
    

    Your changes can now be seen by others when they git pull (See below for details).

    Publishing to a New Branch

    Suppose that, instead of immediately putting the feature on the master branch, you wanted to simply mirror your personal feature branch to the rest of your colleagues so they can try it out before it officially becomes part of master. To do that, you first need to tell Git that you want to make a new branch on the shared repository. In this case, you do have to use the git push command as well. (It is a catch-all command for any operations you want to perform on the remote repository without actually logging into the server where the shared Git repository is hosted. Thus, not surprisingly, nearly any git push command you can think of will require you to be net.connected.)

    So, first let's create a local branch that has the actual name we want to use publicly. To do this, we'll just use the checkout command, because it's the most convenient and quick way to create a local branch from an already existing local branch:

      $ git branch -l
      * USERNAME/feature
        master
        ...
      $ git checkout -b proposed-feature USERNAME/feature
      Switched to a new branch “proposed-feature”
      $ git branch -l
      * proposed-feature
        USERNAME/feature
        master
        ...
    

    Now, again, we've only created this branch locally. We need an equivalent branch on the server, too. This is where git push comes in:

      $ git push origin proposed-feature:refs/heads/proposed-feature
    

    Let's break that command down. The first argument for push is always “the place you are pushing to”. That can be any sort of git URL, including ssh://, http://, or git://. However, remember that the original clone operation set up this shorthand “origin” to refer to the place from whence we cloned. We'll use that shorthand here so we don't have to type out that big long URL.

    The second argument is a colon-separated item. The left hand side is the local branch we're pushing from on our local repository, and the right hand side is the branch we are pushing to on the remote repository.

    (BTW, I have no idea why refs/heads/ is necessary. It seems you should be able to say proposed-feature:proposed-feature and git would figure out what you mean. But, in the setups I've worked with, it doesn't usually work if you don't put in refs/heads/.)

    That operation will take a bit to run, but when it is done we see something like:

      Counting objects: 35, done.
      Compressing objects: 100% (31/31), done.
      Writing objects: 100% (33/33), 9.44 MiB | 262 KiB/s, done.
      Total 33 (delta 1), reused 27 (delta 0)
      refs/heads/proposed-feature: 0000000000000000000000000000000000000000
                                     -> CURRENT_HEAD_SHA1_SUM
      To ssh://host.example.org/git/REPOSITORY.git/
       * [new branch]      proposed-feature -> proposed-feature
    

    In older Git clients, you may not see that last line, and you won't get the origin/proposed-feature branch until you do a subsequent pull. I believe newer git clients do the pull automatically for you.
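
    (Incidentally, my best guess on the refs/heads/ question above is that, since the destination branch doesn't exist on the remote yet, Git cannot tell on its own whether the unqualified name means a branch or a tag. Also, the refspec syntax does more than create branches; as an illustrative aside, pushing an empty left-hand side deletes the remote branch:)

      $ git push origin :refs/heads/proposed-feature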

    Reconfiguring Your Client to see the New Remote Branch

    Annoyingly, as the creator of the branch, we have some extra config work to do to officially tell our repository copy that these two branches should be linked. Git didn't know from our single git push command that our repository's relationship with that remote branch was going to be a long term thing. To marry our local proposed-feature branch to origin/proposed-feature, we must use the commands:

      $ git config branch.proposed-feature.remote origin
      $ git config branch.proposed-feature.merge refs/heads/proposed-feature
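    
    (If your Git client is recent enough, I believe you can skip this manual configuration entirely by asking push to set up the tracking relationship when you first publish the branch; treat this as a hedged aside, since older clients lack the -u option:)

      $ git push -u origin proposed-feature   # -u records origin/proposed-feature as this branch's upstream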
    

    We can see that this branch now exists because we find:

      $ git branch -a
      * proposed-feature
        USERNAME/feature
        master
        origin/HEAD
        origin/proposed-feature
        origin/master
     

    After this is done, the remote repository has a proposed-feature branch and, locally, we have a proposed-feature branch that is a “local tracking branch” of origin/proposed-feature. Note that our USERNAME/feature, where all this stuff started from, is still around too, but can be deleted with:

      $ git branch -d USERNAME/feature
    

    Finding It Elsewhere

    Meanwhile, someone else who separately cloned the repository before we did this won't see these changes automatically, but a simple git pull command can fetch them:

      $ git pull
      remote: Generating pack...
      remote: Done counting 35 objects.
      remote: Result has 33 objects.
      remote: Deltifying 33 objects...
      remote:  100% (33/33) done
      remote: Total 33 (delta 1), reused 27 (delta 0)
      Unpacking objects: 100% (33/33), done.
      From ssh://host.example.org/git/REPOSITORY.git
       * [new branch]      proposed-feature -> origin/proposed-feature
      Already up-to-date.
      $ git branch -a
      * master
        origin/HEAD
        origin/proposed-feature
        origin/master
    

    However, their checkout directory won't be updated until they make a local “mirror” branch that shows them the changes. Usually, this would be done with:

      $ git checkout -b proposed-feature origin/proposed-feature
    

    Then they'll have a working copy with all the data and a local branch to work on.

    BTW, if you want to try this yourself just to see how it works, you can always make another clone in some other directory just to play with, by doing something like:

      $ git clone ssh://host.example.org/git/SOME-REPOSITORY.git/ \
        extra-clone-for-git-didactic-purposes
    

    Now on this secondary checkout (which makes you just like the user who is not the creator of the new branch), work can be pushed and pulled on that branch easily. Namely, anything you merge into or commit on your local proposed-feature branch will automatically be pushed to origin/proposed-feature on the server when you git push. And, anything that shows up from other users on the origin/proposed-feature branch will show up when you do a git pull. These two branches were paired together from the start.

    Irrational Rebased Fears

    When using a shared repository like this, it's generally the case that git rebase will screw something up. When Git is used in the “normal way”, rebase is one of the amazing things about Git. The rebase idea is: you unwind all the work you've done on one of your local branches, bring in changes that other people have made in the meantime, and then reapply your changes on top of them.

    It works out great when you use Git the way the Linux project does. However, if you use a single, shared repository in a work group, rebase can be dangerous.

    Generally speaking, though, with a shared repository, you can use git merge and won't need rebasing. My usual work flow is that I get started on a feature with:

      $ git checkout -b bkuhn/new-feature starting-branch
    

    I work, work, work away on it. Then, when it's ready, I send around to a mailing list a patch that I generate with:

      $ git diff $(git merge-base starting-branch bkuhn/new-feature) bkuhn/new-feature
    

    Note that the thing in the $() returns a single identifier for a version: namely, the fork point between starting-branch and bkuhn/new-feature. Therefore, the diff output is just the stuff I've actually changed: all the differences between the place where I forked and my current work.
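
    (As an aside, I believe newer Git clients provide a shorthand for this exact merge-base-then-diff dance, the “three-dot” syntax; consider this a hedged equivalent rather than my usual habit:)

      $ git diff starting-branch...bkuhn/new-feature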

    Once I have discussed and decided with my co-developers that we like what I've done, I do this:

      $ git checkout starting-branch
      $ git merge bkuhn/new-feature
    

    If all went well, this should automatically commit my feature into starting-branch. Usually, there is also an origin/starting-branch, which I've probably set up for automatic push/pull with my local starting-branch, so I then can make the change officially by running:

      $ git push
    

    My avoidance of rebase is probably merely FUD; if I learned more, I could probably use it safely even with a shared repository. But I have no advice on how to make it work. In particular, this Git FAQ entry shows quite clearly that my work sequence ceases to work all that well when you do a rebase — namely, doing a git push becomes more complicated.

    I am sure a rebase would easily become necessary if I lived on bkuhn/new-feature for a long time and there had been tons of changes underneath me, but I generally try not to dive too deep into a fork, although many people love DVCSes because they can do just that. YMMV, etc.

    Posted on Sunday 23 January 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-01-18: Free as in Freedom, Episode 0x07

    I realized that I should start regularly noting here on my blog when the oggcast that I co-host with Karen Sandler is released. There are perhaps folks who want content from my blog but haven't subscribed to the RSS feed of the show, and thus might want to know when new episodes come out. If this annoys people reading this blog, please let me know via email or identica.

    In particular, perhaps readers won't like that, in these posts (which are going to be written after the show), I'm likely to drift off into topics beyond what was talked about on the show, and there may be “spoilers” for the oggcast in them. Again, if this annoys you (or if you like it) please let me know.

    Today's FaiF episode is entitled Revoked?. The main issue of discussion is some recent confusions about the GPLv2 release of WinMTR. I was quoted in an article about the topic as well, and in the oggcast we discuss this issue at length.

    To summarize my primary point in the oggcast: I'm often troubled when these issues come up, because I've seen these types of confusions so many times before in the last decade. (I've seen this particular one, almost exactly like this, at least five times.) I believe that those of us who focus on policy issues in software freedom need to do a better job documenting these sorts of issues.

    Meanwhile, after we recorded the show, I was thinking again about how Karen points out in the oggcast that the primary issues are legal ones. I don't really agree with that. These are policy questions, perhaps informed by legal analysis, and it's policy folks (and, specifically, Free Software project leaders) who should be guiding the discussion, not necessarily lawyers.

    That's not to say that lawyers can't be policy folks as well; I actually think Karen and a few other lawyers I know are both. The problem is that if we simply take things like GPL on their face — as if they are unchanging laws of nature that simply need to be interpreted — we miss out on the fact that licenses, too, can have bugs and can fail to work the way that they should. A lawyer's job is typically to look at a license, or a law, or something more or less fixed in its existence and explain how it works, and perhaps argue for a particular position of how it should be understood.

    In our community, activists and project leaders who set (or influence) policy should take such interpretations as input, and output plans to either change the licenses and interpretation to make sure they properly match the goals of software freedom, or to build up standards and practices that work within the existing licensing and legal structure to advance the goal of building a world where all published software is Free Software.

    So, those are a few thoughts I had after recording; be sure to listen to FaiF 0x07 available in ogg and mp3 formats.

    Posted on Tuesday 18 January 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2011-01-02: Conservancy Activity Summary, 2010-10-01 to 2010-12-31

    [ Crossposted from Conservancy's blog. ]

    I had hoped to blog more regularly about my work at Conservancy, and hopefully I'll do better in the coming year. But now seems a good time to summarize what has happened with Conservancy since I started my full-time volunteer stint as Executive Director from 2010-10-01 until 2010-12-31.

    New Members

    In the last few months, we excitedly announced two new Conservancy member projects: PyPy and Git. Thinking of PyPy connects me back to my roots in Computer Science: in graduate school, I focused on research about programming language infrastructure and, in particular, virtual machines and language runtimes. PyPy is a project that connects Conservancy to lots of exciting programming language research work of that nature, and I'm glad they've joined.

    For its part, Git rounds out a group of three DVCS projects that are now Conservancy members; Conservancy is now the home of Darcs, Git, and Mercurial. Amusingly, when the Git developers applied and I reminded them that their “competition” were already members, they told me that they were inspired to apply precisely because these other DVCS projects had been happy in Conservancy. That's a reminder that the software freedom community remains a place where projects — even those that might seem on the surface to be competitors — seek to get along and work together whenever possible. I'm glad Conservancy now hosts all these projects together.

    Meanwhile, I remain in active discussions with five projects that have been offered membership in Conservancy. As I always tell new projects, joining Conservancy is a big step for a project, so it often takes time for communities to discuss the details of Conservancy's Fiscal Sponsorship Agreement. It may be some time before these five projects join, and perhaps they'll decide not to join at all. However, I'll continue to help them make the right decision for their project, even if joining a different fiscal sponsor (or not joining one at all) is ultimately the right choice.

    Also, about once every two weeks, another inquiry about joining Conservancy comes in. We won't be able to accept all the projects that are interested, but hopefully many can become members of Conservancy.

    Annual Filings

    In the late fall, I finished up Conservancy's 2010 filings. Annual filings for a non-profit can be an administrative rat-hole at times, but the level of transparency they create for an organization makes them worth it. Conservancy's FY 2009 Federal Form 990 and FY 2009 New York CHAR-500 are up on Conservancy's filing page. I always make the filings available on our own website; I wish other non-profits would do this. It's so annoying to have to go to a third-party source to grab these documents. (Although New York State, to its credit, makes all the NY NPO filings available on its website.)

    Conservancy filed a Form 990-EZ in FY 2009. If you take a look, I'd encourage you to direct the most attention to Part III (which is on the top of page 2) to see most of Conservancy's program activities from 2008-03-01 to 2009-02-28.

    In FY 2010, Conservancy will move from the New York State requirement of “limited financial review” to “full audit” (see page 4 of the CHAR-500 for the level requirements). Conservancy had so little funding in FY 2007 that it wasn't required to file a Form 990 at all. Now, just three years later, there is enough revenue to warrant a full audit, and I've already begun preparing myself for all the administrative work that will entail.

    Project Growth and Funding

    Those increases in revenue are related to growth in many of Conservancy's projects. 2010 marked the beginning of the first full-time funding of a developer by Conservancy. Specifically, since June, Matt Mackall has been funded through directed donations to Conservancy to work full-time on Mercurial. Matt blogs once a month (under the topic Mercurial Fellowship Update) about his work, but, more directly, the hundreds of changesets that Matt's committed really show the advantages of funding projects through Conservancy.

    Conservancy is also collecting donations and managing funding for various part-time development initiatives by many developers. Developers of jQuery, Sugar Labs, and Twisted have all recently received regular development funding through Conservancy. An important part of my job is making sure these developers receive funding and report the work clearly and fully to the community of donors (and the general public) that fund this work.

    But, as usual with Conservancy, it's the handling of the “many little things” for projects that makes a big difference and sometimes takes the most time. In late 2010, Conservancy handled funding for Code Sprints and conferences for the Mercurial, Darcs, and jQuery projects. In addition, jQuery held a conference in Boston in October, for which Conservancy handled all the financial details. I was fortunate to be able to attend the conference and meet many of the jQuery developers in person for the first time. Wine also held their annual conference in November 2010, and Conservancy handled the venue details and reimbursements for many of the travelers to the conference.

    Also, as always, Conservancy project contributors regularly attend other conferences related to their projects. At least a few times a month, Conservancy reimburses developers for travel to speak and attend important conferences related to their projects.

    Google Summer of Code

    Since its inception, Google's Summer of Code (SoC) program has been one of the most important philanthropy programs for Open Source and Free Software projects. In 2010, eight Conservancy projects (and 5% of the entire SoC program) participated in SoC. The SoC program funds college students for the summer to contribute to the projects, and an experienced contributor to the project mentors each student. A $500 stipend is paid to the project's non-profit organization for each project contributor who mentors a student.

    Furthermore, there's an annual conference of all the mentors in October, with travel funded by Google. This is a really valuable conference, since it's one of the few places where very disparate Free Software projects that usually wouldn't interact can meet up in one place. I attended this year's SoC Mentor Summit and hope to attend again next year.

    I'm really going to be urging all Conservancy's projects to take advantage of the SoC program in 2011. The level of funding given out by Google for this program is higher than that of any other open-application funding program for FLOSS. While Google's selfish motives are clear (the program presumably helps them recruit young programmers to hire), the benefit of the program to the Free Software community nevertheless cannot be ignored.

    GPL Enforcement

    GPL Enforcement, primarily for our BusyBox member project, remains an active focus of Conservancy. Work regarding the lawsuit continues. It's been more than a year since Conservancy filed a lawsuit against fourteen defendants who manufacture embedded devices that included BusyBox without source or an offer for source. Some of those have come into compliance with the GPL and settled, but a number remain out of compliance; our litigation efforts continue. Usually, our lawyers encourage us not to comment on ongoing litigation, but we did put up a news item in August when the Court granted Conservancy a default judgment against one of the defendants, Westinghouse.

    Meanwhile, in the coming year, Conservancy hopes to expand efforts to enforce the GPL. New violation reports on BusyBox arrive almost daily that need attention.

    More Frequent Blogging

    As noted at the start of this post, my hope is to update Conservancy's blog more regularly with information about our activities.

    This blog post was covered on LWN and on lxnews.org.

    Posted on Sunday 02 January 2011 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

2010

November

  • 2010-11-16: In Defense of Bacon

    Jono Bacon is currently being criticized for the manner in which he launched an initiative called OpenRespect.Org. Much of this criticism is unfair, and I decided to write briefly here in support of Jono, because he's a victim of a type of mistreatment that I've experienced myself, and thus I have particularly strong empathy for his situation.

    To be clear, I'm not even a supporter of Jono's OpenRespect.Org initiative myself. I think there are others who are doing good work in this area already (for example, various efforts around getting women involved in Free Software have long recognized and worked on the issue, since mutual respect is an essential part of having a more diverse community). Also, I felt that Jono's initiative was slanted toward encouraging people to respect all actions by companies, some of which don't advance Free Software. I commented on Jono's blog to share my criticisms of the initiative when he was still formulating it. In short, I think the wording of the current statement on OpenRespect.org seems to indicate that people should accept anyone else's choice as equally moral. As someone who believes software freedom is a moral issue, and thus views the development and distribution of proprietary software as an immoral act, I have a problem with such a mandate, although I nevertheless strive to be respectful in pursuit of that view. I would hate to be declared disrespectful merely because I believe in the morality of software freedom.

    Yet, despite the fact that I disagree with some of the details of Jono's initiative, I believe most of the criticisms have been unfair. First and foremost, we should take Jono at his word that this initiative is his own and not one undertaken on behalf of Canonical, Ltd. I doubt Jono would dispute that his work at Canonical, Ltd. inspired him to think about these issues, but that doesn't mean that everything he does on his own time on his own website is a Canonical, Ltd. activity.

    Indeed, I've personally been similarly attacked for things I've said on this blog of my own, which of course does not represent the views of any of my employers (past or present) nor any organizations with which I have volunteer affiliations. When I have things to say on those topics, I have other fora in which to post officially, as does Jono.

    So, I've experienced first-hand what Jono is currently experiencing: namely, that people ignore disclaimers precisely to attack someone who has an opinion that they don't like. By conflating your personal opinions with those of your employer, people subtly discredit you — for example, by using your employment relationship to put inappropriate pressure on you to change your positions. I'm very sad to see that the same thing I've been a victim of is now happening to Jono, too. I couldn't just watch it happen without making a statement of solidarity and pointing out that such treatment is unfair.

    Even if we don't agree with the OpenRespect.org initiative (and I don't, for reasons stated above), there is no one to blame but Jono himself, as he's told us clearly this isn't a Canonical initiative, and I've seen no evidence that shows the situation is otherwise.

    I do note that there are other criticisms raised, such as whether or not Jono reached out in the best possible way to others during the launch, or whether others thought they'd be involved when it turned out to be a unilateral initiative. All of that, of course, is something that's reparable (as is my primary complaint above, too), so on those fronts, we should just give our criticism and ask Jono to change it. That's what I did on my issue. He chose not to take my advice, which is his prerogative. My response thereafter was simply to not support the initiative.

    To the extent we don't have enough respect in the FLOSS community, here's an easy place to improve: we should take people at their word until we have evidence to believe otherwise. Jono says OpenRespect.org is his own thing; we should believe him. We shouldn't insist that everything someone says is on behalf of their employer, even if they have a spokesperson role. People have a right to be something more than automatons for their bosses.

    Disclosure: I did not tell Jono I was going to write this post, but after it was completely written, I gave him the chance to make a binary decision about whether I posted it publicly or not. Since you're reading this, he obviously answered 1.

    Posted on Tuesday 16 November 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-11-15: Comments on Perens' Comments on Software Patents

    Bruce Perens and I often disagree about lots of things. However, I urge everyone to read what Bruce wrote this weekend about software patents. I'm very glad he's looking deep into recent events surrounding this issue; I haven't had the time to do so myself because I've been so busy with the launch of my full-time work at Conservancy this fall.

    Despite my current focus on getting Conservancy ramped up with staff, so it can do more of its work, I nevertheless still remain frightfully concerned about the impact of software patents on the future of software freedom, and I support any activities that seek to make sure that software patent threats do not stand in the way of software freedom. Bruce and I have always agreed about this issue: software patents should end, and while individuals with limited means can't easily make that happen themselves, we must all work to raise awareness and public opinion against all patenting of software.

    Specifically, I'm really glad that Bruce has mentioned the issue of lobbying against software patents. Post-Bilski, it's become obvious that software patents can only be ended with legislative change. In the USA, sadly, the only way to do this effectively is through lobbying. Therefore, I've called on businesses (such as Google and Red Hat) that have been targets of software patent litigation to fund lobbying efforts to end software patents; such funding would simultaneously help themselves as well as software freedom. Unfortunately, as far as I'm aware, no companies have stepped forward to fund such an effort; they instead seem to spend their patent-related resources on getting more software patents of their own. Meanwhile, individual, not-for-profit Free Software developers simply don't have the resources to do this lobbying work ourselves.

    Nevertheless, there are still a few things individual developers can do in the meantime against software patents. I wrote a complete list of suggestions after Bilski; I just reread it and confirmed all of the suggestions listed there are still useful.

    Posted on Monday 15 November 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

October

  • 2010-10-20: Open Letter: Adopt RMS' CAA/CLA Suggested Texts

    I was glad to read today that Sam Varghese is reporting that Mark Shuttleworth doesn't want Canonical, Ltd. to engage in business models that abuse proprietary relicensing powers in a negative way. I wrote below a brief open letter to Mark for him to read when he returns from UDS (since the article said he would handle this in detail upon his return from there). Fortunately, there is a simple test to see whether Mark's words are a genuine commitment to change by Canonical, Ltd.; there is a simple action he can take to show he means to follow through on his statement:

    Dear Mark,

    I was glad to read today that you have no plans to abuse the powers of proprietary relicensing that Canonical, Ltd.'s CAAs/CLAs give you. As you are hopefully already aware, Richard Stallman published a few suggested texts to use if you are attempting to consider only benign business models as part of your CAA/CLA process. Since you've committed to that, I would expect you'd be ready, willing and able to adopt those immediately for Canonical, Ltd.'s CLAs and CAAs. When will you do so?

    Thanks very much for taking my criticisms seriously and I look forward to seeing this change soon in Canonical, Ltd.'s CAAs and/or CLAs.

    Posted on Wednesday 20 October 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-10-19: Does “Open Core” Actually Differ from Proprietary Relicensing?

    I've been criticized — quite a bit this week, but before that too — for using the term “Open Core” as a shortcut for the phrase “proprietary relicensing0 that harms software freedom”. Meanwhile, Matt Aslett points to Andrew Lampitt's “Open Core” definition as canonical. I admit I wasn't aware of Lampitt's definition before, but I dutifully read it when Aslett linked to it, and I quote it here:

    [Lampitt] propose[s] the following for the Open Core Licensing business model:
    • core is GPL: if you embed the GPL in closed source, you pay a fee
    • technical support of GPL product may be offered for a fee (up for debate as to whether it must be offered)
    • annual commercial subscription includes: indemnity, technical support, and additional features and/or platform support. (Additional commercial features having viewable or closed source, becoming GPL after timebomb period are both up for debate).
    • professional services and training are for a fee.

    The amusing fact about this definition is that half the things on it (i.e., technical support, services/training, and indemnity) can be part of any FLOSS business model and do not require the offering company to hold the exclusive right of proprietary relicensing. Meanwhile, the rest of the items on the list are definitely part of what was traditionally called the “proprietary relicensing business” dating back to the late 1990s: namely, customers can buy their way out of GPL obligations, and a single company can exclusively offer proprietary add-ons. For example, this is precisely what Ximian did with their Microsoft Exchange Connector for Evolution, which predated the first use of the term “Open Core” by nearly a decade. Cygnus also used this model for Cygwin, a practice that has unfortunately continued at Red Hat (although Richard Fontana of Red Hat wants to end the copyright assignment of Cygwin).

    In my opinion, mass terminology confusion exists on this point simply because there is a spectrum1 of behaviors that are all under the banner of “proprietary relicensing”. Moreover, these behaviors get progressively worse for software freedom as you continue down the spectrum. Nearly the entire spectrum consists of activities that are harmful to software freedom (to varying degrees), but the spectrum does begin with a practice that is barely legitimate.

    That practice is one that RMS himself began calling barely legitimate in the early 2000s. RMS specifically and carefully coined his own term for it: selling exceptions to the GPL. This practice is a form of proprietary relicensing that never permits the seller to create their own proprietary fork of the code and always releases to the general public all improvements made by the sole proprietary licensee itself. If this practice is barely legitimate, it stands to reason that anything that goes even just a little bit further crosses the line into illegitimacy.

    From that perspective, I view this spectrum of proprietary relicensing thusly: on the narrow, benign end of the spectrum we find what RMS calls “exception selling”, and on the other end, we find GPL'd demoware that is merely functional enough to convince customers to call up the company to ask to buy more. Everything beyond “selling exceptions” is harmful to software freedom, getting progressively more harmful as you move further down the spectrum. Also, notwithstanding Lampitt's purportedly canonical definition, “Open Core” doesn't really have a well-defined meaning. The best we can say is that “Open Core” must be something beyond “selling exceptions” and therefore lives somewhere outside of the benign areas of “proprietary relicensing”. So, from my point of view, it's not a question of whether or not “Open Core” is a benign use of GPL: it clearly isn't. The only question to be asked is: how bad is it for software freedom, a little or a lot? Furthermore, I don't really care how far a company gets into “proprietary relicensing”, because I believe it's already likely to be harmful to software freedom. Thus, focusing debate only on how bad is it? seems to be missing the primary point: we should shun nearly all proprietary relicensing models entirely.

    Furthermore, I believe that once a company starts down the path of this proprietary relicensing spectrum, it becomes a slippery slope. I have never seen the benign “exception selling” last for very long in practice. Perhaps a truly ethical company might stick to the principle, and would thus use an additional promise-back, as RMS suggests, to prove to the community it will never veer from it. RMS' suggested texts have only been available for less than a month, so more time is needed to see if they are actually adopted. Of course, I call on any company asking for a CLA and/or CAA to adopt RMS' texts, and I will laud any company that does.

    But, pragmatically, I admit I'll be (pleasantly) surprised if most CAA/CLA-requesting companies come forward to adopt RMS' suggested texts. We have a long historical list of examples of for-profit corporate CAAs and CLAs being used for more nefarious purposes than selling exceptions, even when that wasn't the original intent. For example2, when MySQL AB switched to GPL, they started benignly selling exceptions, but, by the end of their reign, part of their marketing was telling potential “customers” that they'd violated the GPL even when they hadn't — merely to manipulate the customer into buying a proprietary license. Ximian initially had no plans to make proprietary add-ons to Evolution, but nevertheless made use of their copyright assignment to make the Microsoft Exchange Connector. Sourceforge, Inc. (named VA Linux at the time) even went so far as to demand copyright assignments on the Sourceforge code after the fact (writing out changes by developers who refused) so they could move to an “Open Core”-style business model. (Ultimately, Sourceforge.net became merely demoware for a proprietary product.)

    In short, handing over copyright assignment to a company gives that company a lot of power, and it's naïve to believe a for-profit company won't use every ounce of that power to make a buck when it's not turning a profit otherwise. Non-profit assignees, for their part, mitigate the situation by making firm promises back regarding what will and won't be done with the code, and also (usually) have well-defined non-profit missions that prevent them from moving in troubling directions. For-profit companies usually have neither.

    Without strong assurances in the agreement, like the ones RMS suggests, individual developers simply must assume the worst when assigning copyright and/or giving a broad CLA to a for-profit company. Whether or not we can ever determine what is or is not “Open Core”, history shows us that for-profit companies with exclusive proprietary relicensing power eventually move away from the (extremely narrow) benign end of the proprietary relicensing spectrum.


    0Most pundits will prefer the term “dual licensing” for what I call “proprietary relicensing”. I urge avoidance of the term “dual licensing”. “Dual licensing” also has a completely orthogonal denotative usage: a Free Software license that has two branches, like jQuery's license of (GPLv2-or-later|MIT). That terminology usage was quite common before even the first “proprietary relicensing” business model was dreamed of, and therefore it only creates confusion to overload that term further.

    1BTW, Lampitt does deserve some credit here. His August 2008 post hints at this spectrum idea of proprietary licensing models. His post doesn't consider the software-freedom implications of the various types, but it seems to me the post was ahead of its time two years ago, and I wish I'd seen it sooner.

    2I give here just a few of the many examples, which actually name names. Although he doesn't name names, Michael Meeks, in his Some Thoughts on Copyright Assignment, gives quite a good laundry list of all the software-freedom-unfriendly things that have historically happened in situations where CAAs/CLAs without adequate promises back were used.

    Posted on Tuesday 19 October 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-10-17: Canonical, Ltd. Finally On Record: Seeking Open Core

    I've written before about my deep skepticism regarding the true motives of Canonical, Ltd.'s advocacy and demand of for-profit corporate copyright assignment without promises to adhere to copyleft. I've often asked Canonical employees, including Jono Bacon, Amanda Brock, Jane Silber, Mark Shuttleworth himself, and — in the comments of this very blog post — Matt Asay, to explain (a) why exactly they demand copyright assignment on their projects, rather than merely having contributors agree to the GNU GPL formally (like projects such as Linux do), and (b) why, having received a contributor's copyright assignment, Canonical, Ltd. refuses to promise to keep the software copylefted and never proprietarize it (FSF, for example, has always done the latter in its assignments). When I ask these questions of Canonical, Ltd. employees, they invariably artfully change the subject.

    I've actually been asking these questions for at least a year and a half, but I really began to get worried earlier this year when Mark Shuttleworth falsely claimed that Canonical, Ltd.'s copyright assignment was no different from the FSF's copyright assignment. That event made it clear to me that there was a job of salesmanship going on: Canonical, Ltd. was trying to sell something to the community that the community doesn't want nor need, and trying to reuse the good name of other people and organizations to do it.

    Since that interview in February, Canonical, Ltd. has launched a manipulatively named product called “Project Harmony”. They market this product as a “summit” of sorts — purported to have no determined agenda other than to discuss the issue of contributor agreements and copyright assignment, and come to a community consensus on this. Their goal, however, was merely to get community members to lend their good names to the process. Indeed, Canonical, Ltd. has oft attempted to use the involvement of good people to make it seem as if Canonical, Ltd.'s agenda is endorsed by many. In fact, FSF recently distanced itself from the process because of Canonical, Ltd.'s actions in this regard. Simon Phipps had similarly distanced himself before that.

    Nevertheless, it seems Canonical, Ltd. now believes that they've succeeded in their sales job, because they've now confessed their true motive. In an IRC Q&A session last Thursday0, Shuttleworth finally admits that his goal is to increase the amount of “Open Core” activity. Specifically, Shuttleworth says at 15:21 (and following):

    [C]ompare Qt and Gtk, Qt has a contribution agreement, Gtk doesn't, for a while, back in the bubble, Sun, Red Hat, Ximian and many other companies threw money at Gtk and it grew and improved very quickly but, then they lost interest, and it has stagnated. Qt was owned by Trolltech it was open source (GPL) but because of the contribution agreement they had many options including proprietary licensing, which is just fine with me alongside the GPL and later, because they owned Qt completely, they were an attractive acquisition for Nokia, All in all, the Qt ecosystem has benefitted and the Gtk ecosystem hasn't.

    It takes some careful analysis to parse what's going on here. First of all, Shuttleworth is glossing over a lot of complicated Qt history. Qt started with a non-FaiF license (QPL), which later became a GPL-incompatible Free Software license. After a few years of this oddball, license-proliferation-style software freedom license, Trolltech stumbled upon the “Open Core” model (likely inspired by MySQL AB), and switched to GPL. When Nokia bought Trolltech, Nokia itself discovered that full-on “Open Core” was bad for the code base, and (as I heralded at the time) relicensed the codebase to LGPL (the same license used by Gtk). A few months after that, Nokia abandoned copyright assignment completely for Qt as well! (I.e., Shuttleworth is just wrong on this point entirely.) In fact, Shuttleworth, rather than supporting his pro-Open-Core argument, actually gave the prime example of Nokia/TrollTech's lesson learned: “don't do an Open-Core-style contributor agreement, you'll regret it”. (RMS also recently published a good essay on this subject).

    Furthermore, Shuttleworth completely ignores plenty of historical angst in communities that rely on Qt, which often had difficulty getting bugfixes upstream and faced other such challenges when dealing with a for-profit-controlled “Open Core” library. (These were, in fact, among the reasons Nokia gave in May 2009 for the change in policy.) Indeed, if the proprietary relicensing business is what made Trolltech such a lucrative acquisition for Nokia, why did they abandon the business model entirely within four months of the acquisition?

    Shuttleworth's “lucrative acquisition” point does have some validity, though. Namely, “Open Core” makes wealthy, profit-driven types (e.g., VCs) drool. Meanwhile, people like me, Simon Phipps, NASA's Chris Kemp, John Mark Walker, Tarus Balog and many others are either very skeptical about “Open Core”, or dead-set against it. The reason it's meeting with so much opposition is that “Open Core” is a VC-friendly way to control all the copyright “assets” while pretending to actually have the goal of building an Open Source community. The real goal of “Open Core”, of course, is a bait-and-switch move. (Details on that are beyond the scope of this post and well covered in the links I've given.)

    As to Shuttleworth's argument of Gtk stagnation, after my trip this past summer to GUADEC, I'm quite convinced that the GNOME community is extremely healthy. Indeed, as Dave Neary's GNOME Census shows, the GNOME codebases are well-contributed to by various corporate entities and (more importantly) volunteers. For-profit corporate folks like Shuttleworth and his executives tend not to like communities where a non-profit (in this case, the GNOME Foundation) shepherds a project and keeps the multiple for-profit interests at bay. In fact, he dislikes this so much that when GNOME was recently documenting its long-standing copyright policies, he sent Silber to the GNOME Advisory Board (the first and only time Canonical, Ltd. sent such a high-profile person to the Advisory Board) to argue against the long-standing GNOME community preference for no copyright assignment on its projects1. Silber's primary argument was that it was unreasonable for individual contributors to even ask to keep their own copyrights, since Canonical, Ltd. puts in the bulk of the work on the projects that require copyright assignment. Her argument was, in other words, an anti-software-freedom equality argument: a for-profit company is more valuable to the community than the individual contributor. Fortunately, the GNOME Foundation didn't fall for this, and continued its work with Intel to get the Clutter codebase free of copyright assignment (work that has since succeeded). It's also particularly ironic that, a few months later, Neary showed that the very company making that argument contributes 22% less to the GNOME codebase than the volunteers Silber once argued don't contribute enough to warrant keeping their copyrights.

    So, why have Shuttleworth and his staff been on a year-long campaign to convince everyone to embrace “Open Core” and give up all their rights that copyleft provides? Well, in the same IRC log (at 15:15) I quoted above, Shuttleworth admits that he has some work left to do to make Canonical, Ltd. profitable. And therein lies the connection: Shuttleworth admits Canonical, Ltd.'s profitability is a major goal (which is probably obvious). Then, in his next answer, he explains at great length how lucrative and important “Open Core” is. We should accept “Open Core”, Shuttleworth argues, merely because it's so important that Canonical, Ltd. be profitable.

    Shuttleworth's argument reminds me of a story that Michael Moore (who famously made the documentary Roger and Me, and has since made other documentaries) told at a book-signing in the mid-1990s. Moore said (I'm paraphrasing from memory here, BTW):

    Inevitably, I end up on planes next to some corporate executive. They look at me a few times, and then say: Hey, I know you, you're Roger Moore [audience laughs]. What I want to know, is what the hell have you got against profit? What's wrong with profit, anyway? The answer I give is simple: There's nothing wrong with profit at all. The question I'm raising is: What lengths are acceptable to achieve profit? We all agree that we can't exploit child labor and other such things, even if that helps profitability. Yet, once upon a time, these sorts of horrible policies were acceptable for corporations. So, my point is that we still need more changes to balance the push for profit with what's right for workers.

    I quote this at length to make it abundantly clear: I'm not opposed to Canonical, Ltd. making a profit by supporting software freedom. I'm glad that Shuttleworth has contributed a non-trivial part of his personal wealth to start a company that employs many excellent FLOSS developers (and even sometimes lets those developers work on upstream projects). But the question really is: Are the values of software freedom worth giving up merely to make Canonical, Ltd. profitable? Should we just accept proprietary network services like UbuntuOne, integrated into nearly every menu of the desktop, as reasonable merely because they might help Canonical, Ltd. make a few bucks? Do we think we should abandon copyleft's assurances of fair treatment to all, and hand over full proprietarization powers on GPL'd software to for-profit companies, merely so they can employ a few FLOSS developers to work primarily on non-upstream projects?

    I don't think so. I'm often critical of Red Hat, but one thing they do get right in this regard is a healthy encouragement of their developers to start, contribute to, and maintain upstream projects that live in the community rather than inside Red Hat. Red Hat currently allows its engineers to keep their own copyrights and license them under whatever license the upstream project uses, binding them to the terms of the copyleft licenses (when the upstream project is copylefted). For projects generated inside Red Hat, after experimenting with the sorts of CLAs that I'm complaining about, they learned from the mistake and corrected it (although unfortunately, Red Hat hasn't universally corrected the problem). For the most part, Red Hat encourages outside contributors to contribute under their own copyright, using the outbound license Red Hat chose for its projects (some of which are also copylefted). Red Hat's newer policies have some flaws (details of which are beyond the scope of this post), but they're orders of magnitude better than the copyright assignment intimidation tactics that other companies, like Canonical, Ltd., now employ.

    So, don't let a friendly name like “Harmony” fool you. Our community has some key infrastructure, such as the copyleft itself, that actually keeps us harmonious. Contributor agreements aren't created equal, and therefore we should oppose the idea that contributor and assignment agreements should be set to the lowest common denominator to enable a for-profit corporate land-grab that Shuttleworth and other “Open Core” proponents seek. I also strongly advise the organizations and individuals who are assisting Canonical, Ltd. in this goal to stop immediately, particularly now that Shuttleworth has announced his “Open Core” plans.


    Update (2010-10-18): In comments, many people have, quite correctly, argued that I have not proved that Canonical, Ltd. has plans to go “Open Core” with their copyright-assigned copyleft products. Such comments are correct; I intended this article to be an opinion piece, not a logical proof. I further agree that without absolute proof, the title of this blog post is an exaggeration. (I didn't change it, as that seemed disingenuous after the fact).

    Anyway, to be clear, the only thing the chain of events described above prove is that Canonical, Ltd. wants “Open Core” as a possibility for the future. That part is trivially true: if they didn't want to reserve the possibility, they'd simply make a promise-back to keep the software as Free Software in their assignment. The only reason not to make an FSF-style promise-back is that you want to reserve the possibility of proprietary relicensing.

    Meanwhile, even though I cannot construct a logical proof of it, I still believe the only possible explanation for this 1+ year marketing campaign described above is that Canonical, Ltd. is moving toward “Open Core” for those projects on which they are the sole copyright holder. I have asked others to offer alternative explanations of why Canonical, Ltd. is carrying out this campaign: I agree that there could exist a logical explanation other than the one I've presented. If someone can come up with one, then I would be happy to link to it here.

    Finally, if Canonical, Ltd. comes out with a statement that they'll switch to using FSF's promise-back in their assignments, I will be very happy to admit I was wrong. The outcome I want is for individual developers to be treated right by corporations in control of particular codebases; I would much rather that happen than be correct in my opinions.


    0I originally credited OMG Ubuntu as publishing Shuttleworth's comments as an interview. Their reformatting of his comments temporarily confused me, and I thought they'd done an interview. Thanks to @gotunandan, who pointed this out.

    1Ironically, the debate had nothing to do with a Canonical, Ltd. codebase, since their contributions amount to so little (1%) of the GNOME codebase anyway. The debate was about the Clutter/Intel situation, which has since been resolved.


    Responses Not In the Identica Thread:

    • Alex Hudson's blog post
    • Discussion on Hacker News
    • LWN comments
    • Matt Aslett's response and my response to him
    • Ingolf Schaefer's blog post, which only allows comments with a Google Account, so I comment below instead (to be clear, I'm not criticizing Ingolf's choice of Google-account-to-comment, especially since I make everyone who wants to comment here sign up for identi.ca ;):

      Ingolf, you noted that you'd rather I not try to read between the lines to deduce that proprietary relicensing and/or “Open Core” is where Canonical, Ltd.'s marketing is leading. I disagree; I think it's useful to consider what seems a likely end-outcome here. My primary goal is to draw attention to it now in hopes of preventing it from happening. My best possible outcome is that I get proved wrong, and Canonical makes a promise-back in their assignment and/or CLA.

      Meanwhile, I don't think they can go “Open Core” and/or proprietary relicensing for all of Ubuntu, as you are saying. They aren't the sole copyright holder in most of Ubuntu. The places where they can pursue these options are Launchpad, pbuilder, upstart, and the other projects that require a CLA and/or assignment.

      I don't know for sure that they'll do this, as I say above, but I can deduce no other explanation. As I keep saying, if someone else has another possible explanation for Canonical, Ltd.'s behavior that I list above, I'm happy to link to it here. I can't see any other reason; they'd surely have made an FSF-style promise-back in their CLA by now if they didn't want to hold proprietarization open as a possibility.

    Posted on Sunday 17 October 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-10-04: Conservancy's First Blog Post

    [ Crossposted from Conservancy's blog. ]

    As can be seen in today's announcement, today is my first day as full-time Executive Director at the Software Freedom Conservancy. For four years, I have worked part-time on nights, weekends, and lunch times to keep Conservancy running and to implement and administer the services that Conservancy provides to its member projects. It's actually quite a relief to now have full-time attention available to carry out this important work.

    From the start, one of my goals with Conservancy has been to run the non-profit organization as transparently as possible. I've found that when time is limited, keeping the public informed about your work is often the first item to fall too far down the action-item list. Now that Conservancy is my primary, daily focus, I hope to increase its transparency as much as possible.

    Specifically, I plan to keep a regular blog about activities of the Conservancy. I've found that a public blog is a particularly convenient and non-onerous way to report to the public about the activities of an organization. Indeed, we usually ask those developers whose work is funded through Conservancy to keep a blog about their activities, so that the project's community and the public at large can get regular updates about the work. I should hold myself to no less a standard!

    I encourage everyone to subscribe to the full Conservancy site RSS feed, where you'll receive both news items and blog posts from the Conservancy. There are also separate feeds available for just news and just blog posts. Also, if you're a subscriber to my personal blog, I will cross-post these blog posts there, although my posts on Conservancy's blog will certainly be a proper subset of my entire personal blog.

    Posted on Monday 04 October 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

September

  • 2010-09-11: Two Thank-Yous

    I'm well known for being critical when necessary about what happens in the software freedom community, but occasionally, there's nothing to do but thank someone, particularly when they've done something I asked for. :)

    First, I'd like to thank Matthew Garrett for engaging in some GPL enforcement (as covered on lwn.net). He's taking an interesting tack of filing a complaint with US Customs. I've thought about this method in the past, but never really felt I wanted to go that route (mainly because I'm more familiar with the traditional GPL enforcement processes). However, it's really important that we try lots of different strategies for GPL enforcement; the path to success is often many methods in parallel. It looks like Matthew has already got the attention of the violator. In the end, every GPL enforcement strategy is primarily about getting the violator's attention so they take the issue seriously and come into compliance with the license.

    I've written before about how GPL enforcement can be a lonely place, and when I see someone get serious about doing some enforcement — as Matthew has in the last year or so — it makes GPL enforcement a lot less lonely. I still think I can count on my hands all the people active regularly in GPL enforcement efforts, but I am glad to see that's changing. The license stands for a principle, and we should defend it, despite the great lengths the corporate powers in the software freedom world go to in trying to stop GPL enforcement.

    Secondly, I need to thank my colleague Chris DiBona. Two years ago, I gave him quite a hard time because Google prohibited hosting of AGPLv3'd projects on its FLOSS Project Hosting site. The interesting part of our debate was that Chris argued that license proliferation was the reason to prohibit AGPLv3. I argued at the time that Google simply opposed AGPLv3 because many parts of Google's business model rely on the fact that the GPL behaves in practice somewhat like permissive licenses when deployed in a web services environment.

    Honestly, I never had definitive proof of Google's “real reasons” for holding the policy it did for two years, but it doesn't matter now, because yesterday Chris announced that Google Code Hosting now accepts AGPLv3'd projects0. I really appreciate Chris' friendly words on AGPLv3, noting that he didn't like turning away projects under licenses that serve a truly new function, like the AGPL.

    Google will now accept projects under any license that is on OSI's approved list. I think this is a reasonable outcome. I firmly believe that acceptable license lists must be the purview of not-for-profit organizations, not for-profit ones. Personally, I tend to avoid and distrust any license that fails to appear on both OSI's list and the FSF Free Software License List. While I obviously favor the FSF list myself (having helped originate it), I generally want to see a license on both lists before I'm ready to say for sure there are no worries about it.

    There are two other entities that maintain license lists, namely the Debian Project and Red Hat's Fedora Project. I wouldn't say that I find Debian's list definitive, mainly because, despite Debian's generally democratic slant, the ftp-masters hold a bit too much power in interpreting the DFSG.

    As for Fedora, that's ultimately a project controlled by a for-profit corporation (Red Hat), and therefore I have some trepidation about trusting their list, just as I had concerns that Google attempted to set licensing policy by defining an acceptable license list. As it stands at the moment, I trust Fedora's list because I know that Spot and Fontana currently have the ultimate say on what does or does not go onto Fedora's list. Nevertheless, Red Hat is ultimately in control of Fedora, so I think its license list can't be relied on indefinitely (e.g., in case Spot and/or Fontana ever leave Red Hat at some point).

    Anyway, I think the best outcome for the community is for the logical conjunction of the OSI's list and the FSF's list to be considered the accepted list of licenses. While I often disagree with the OSI, I think it's in the best interest of the community to require that two distinct non-profits with different missions both approve a license before it's considered acceptable. (I suppose I'd have a different view if OSI had not accepted the AGPLv3, though. ;)
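    To make that conjunction concrete, here is a minimal sketch, in Python, of the rule I have in mind. The list contents below are hypothetical placeholders, not the actual lists; the authoritative versions are published by OSI and FSF and change over time:

        # Illustrative placeholder subsets of the two license lists; the
        # real, authoritative lists live on OSI's and FSF's websites.
        OSI_APPROVED = {"GPL-3.0", "LGPL-3.0", "AGPL-3.0", "Apache-2.0", "MIT"}
        FSF_FREE = {"GPL-3.0", "LGPL-3.0", "AGPL-3.0", "Apache-2.0", "MIT"}

        def acceptable(license_id):
            """Accept a license only when both non-profits approve it,
            i.e., when it lies in the intersection of the two lists."""
            return license_id in OSI_APPROVED and license_id in FSF_FREE

    The point of requiring the conjunction is that a license blessed by only one of the two organizations fails the test; no single organization can unilaterally set licensing policy for the community.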


    0I must point out that Chris has an error in his blog post: namely, FSF's code hosting site, Savannah, accepts not just GPL'd projects, but any project under a license listed as “GPL-Compatible” on FSF's Free Software License List.

    Posted on Saturday 11 September 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

August

  • 2010-08-27: The Saga of Sun RPC

    I first became aware of the Sun RPC license in mid-2001, but my email archives from the time indicate the issue predated my involvement with it; it'd been under consideration since 1994. I later had my first large email thread “free-for-all” on the issue in April 2002, which was the first of too many that I'd have before it was all done. In December 2002, the Debian bug was filed, and then it became a very public debate. Late last week, it was finally resolved. It now ranks as the longest-standing Free Software licensing problem of my career. A cast of dozens deserves credit for getting it resolved.

    Tom “spot” Callaway does a good job summarizing the recent occurrences on this issue (and by recent, I mean since 2005 — it's been going on long enough that five years ago is “recent”), and its final resolution. So, I won't cover that recent history, but I encourage people to read Spot's summary. Simon Phipps, who worked on this issue during his time as the Chief Open Source Officer of Sun, also wrote about his work on the issue. For my part, I'll try to cover the “middle” part of the story, from 2001-2005.

    So, the funny thing about this license is that everyone knew it was Sun's intention to make it Free Software. The code is so old that it dates back to a time when the drafting of Free Software licenses wasn't well understood (old-schoolers will, for example, remember the annoying advertising clause in early BSD licenses). Thus, by our modern standards, the Sun RPC license does appear on its face as trivially non-Free, but in its historical context, the intent was actually clear, in my opinion.

    Nevertheless, by 2002, we knew how to look at licenses objectively and critically, and it was clear to many people that the license had problems. Competing legal theories existed, but the concerns of Debian were enough to get everyone moving toward a solution.

    For my part, I checked in regularly during 2002-2004 with Danese Cooper (who was, effectively, Simon Phipps' predecessor at Sun), until I was practically begging her to pay attention to the issue. While I could frequently get verbal assurances from Danese and other Sun officials that it was their clear intention that glibc be permitted to include the code under the LGPL, I could never get anything in writing. I had a hundred other things to worry about, and eventually, I stopped worrying about it. I remember thinking at the time: well, I've got notes on all these calls and discussions I've had with Sun people about the license. Worst-case scenario: I'll have to testify to this when Sun sues some Free Software project, and there will be a good estoppel defense.

    Meanwhile, around early 2004, my friend and colleague at FSF, David “Novalis” Turner, took up the cause in earnest. I think he spent a year or two as I did: desperately trying to get others to pay attention and solve the problem. Eventually, he left FSF for other work, and others took up the cause, including Brett Smith (who took over Novalis' FSF job), and, by that time, Spot was also paying attention to this. Both Brett and Spot worked hard to get Simon Phipps' attention on it, which finally happened. But around then began that long waiting period while Oracle was preparing to buy Sun. It stopped almost anything anyone wanted to get done with Sun, so everyone just waited (again). It was around that time that I decided I was pretty sure I never wanted to hear the phrase “Sun RPC license” again in my life.

    Meanwhile, Richard Fontana had gone to work for Red Hat, and his self-proclaimed pathological obsession with Free Software (which can only be rivaled by my own) led him to begin discussing the Sun RPC issue again. He and Spot were also doing their best negotiating with Oracle to get it fixed. They carried us through the last miles of this marathon, and now the job is done.

    I admit that I feel some shame that, in recent years, I've had such fatigue about this issue — a simple one that should've been solved a decade and a half ago — that, since 2008, I've done nothing but kibitz about the issue when people complained. I also didn't believe that a company as disturbing and anti-Free-Software as Oracle could ever be convinced to change a license to be more FaiF. Spot and Fontana proved me wrong, and I'm glad.

    Thanks to everyone in this great cast of characters that made this ultimately beneficial production of licensing theater possible. I've been honored that I shared the stage in the first few acts, and sorry that I hid backstage for the last few. It was right to keep working on it until the job was done. As Fontana said: Estoppel may be relevant but never enough; software freedom principle[s] should matter as much as legal risk. … [the] standard for FaiF can't simply be ‘good defense to copyright infringement likely’. Thanks to everyone; I'm so glad I no longer have to wait in fear of a subpoena from Oracle in a lawsuit claiming infringement of their Sun RPC copyrights.

    Posted on Friday 27 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-16: Considerations For FLOSS Hackers About Oracle vs. Google

    Many have already opined about the Oracle v. Google lawsuit filed last week. As you might expect, I'm not that worried about what company sues what company for some heap of cash; those sorts of for-profit wranglings just aren't what concerns me. Rather, I'm focused on what this event means for the future of software freedom. And, I think even at this early stage of the lawsuit, there are already a few lessons for the Free Software community to learn.

    Avoid Single-Company-Controlled Language Infrastructure

    Fourteen months ago, before the Oracle purchase of Sun, I wrote about the specific danger of language infrastructure developed by a single for-profit patent-holding entity (when such infrastructure is less than 20 years old). In that blog post, I wrote:

    [Some] might argue that with all those patents consolidated [in a single company], patent trolls will have a tough time acquiring patents and attacking FaiF implementations. However, while this can sometimes be temporarily true, one cannot rely on this safety. Java, for example, is in a precarious situation now. Oracle is not a friend to Free Software, and soon will hold all Sun's Java patents — a looming threat to FaiF Java implementations … [A]n Oracle attack on FaiF Java is a possibility.

    I'm sorry that I was right about this, but we should now finally learn the lesson: languages like Java and C# are dangerous. Single companies developed them, and there are live, unexpired patents that can easily be used in a group to attack FaiF implementations. Of course, that doesn't mean other language infrastructures are completely safe from patents, but I believe there is greater relative risk of a system with patent consolidation at a single company.

    It also bears repeating the point I made on Linux Outlaws last July: this doesn't mean the Free Software community shouldn't have FaiF implementations of all languages. In fact, we absolutely should, because we do want developers who are familiar with those languages to bring their software over to GNU/Linux and other Free Software systems.

    However, this lawsuit proves that choosing some languages for newly written Free Software is dangerous and should be avoided, especially when there are safer choices like C, C++, Python, and Perl0. (See my blog post from last year for more on this subject.)

    Never Let Your Company File for Patents on Your Work

    James Gosling is usually pretty cryptic in his non-technical writing, but if you read carefully, it seems to me that Gosling regrets that Oracle now holds his patents on Java. I know developers get nice bonuses if they let their company apply for patents on their work. I also know there's pressure in most large companies to get more patents. We, as developers, must simply refuse this. We invent this stuff, not the suits and the lawyers who want to exploit our work for larger and larger profits. As a community of developers and computer scientists, we must simply refuse to ever let someone patent our work. In a phrase: just say no.

    Even if you like your company today, you never know who will own those software patents later. I'm sure James Gosling originally never considered the idea that a company as revolting as Oracle would have control of everything he's invented for the last two decades. But they do, and there's nothing Gosling can do about what's done with his work and “inventions”. Learn from this example; don't let your company patent your work. Instead, publish online to establish prior art as quickly as possible.

    Google Is Not Merely a Pure Free Software Distributor

    Google has worked hard to cast themselves as innocent, Free-Software-producing victims. That's good PR because it's true, but it's also not telling the whole truth. Google worked hard to make sure Android was completely Apache-2.0 (or even more permissively) licensed (except for Linux, of course). There was already plenty of Java stuff available under the GPL that Google could have used. Sadly, Google was so allergic to the GPL for Android/Linux that they even avoided LGPL'd components like uClibc and glibc (in favor of their own permissively-licensed C library based on a BSD version).

    Google's reason for permissive-only licensing for “everything but the kernel” was likely a classic “adoption is more important than software freedom” scenario. Google wants Android/Linux in as many phones as possible, and wants to eliminate any “barrier” to such adoption, even if such a “barrier” would defend software freedom.

    This new lawsuit would be much more interesting if Google had chosen GPL and/or LGPL for Android. In fact, if I fantasize about being empowered to design a binding, non-financial settlement to the lawsuit, the first item on my list would be a relicense of all future Android/Linux systems under GPL and/or LGPL. (Basically, Google would license only enough under LGPL to allow proprietary applications, and license all the rest as GPL, thus yielding the same licensing consequences as GNU/Linux and GNOME.) Then, I'd have Oracle explicitly license all its patents under GPL and/or LGPL compatible licenses that would permit Android/Linux to continue unencumbered, but under copyleft. (BTW, Mark Wielaard has a blog post that discusses in more detail the issue of GPL'd/LGPL'd Java implementations and how they relate to this lawsuit.)

    I realize that's never going to happen, but it's an interesting thought experiment. I am of course opposed to software patents, and I certainly oppose companies like Oracle that produce almost all proprietary software. However, I can at least understand the logic of Oracle not wanting its software patents exercised in proprietary software. I think a trade-off, whereby all software patents are licensed freely and royalty-free only for use in copylefted software, is a reasonable compromise. OTOH, knowing Oracle, they could easily have plans to attack copyleft implementations too. Thus, we must assume they won't accept this reasonable compromise of “royalty-free licensing for copyleft only”. That brings me to my next point of concern for FaiF hackers about this lawsuit.

    Never Trust a Mere Patent Promise; Demand Real Patent Licenses

    I wrote after Bilski that patent promises just aren't enough, and this lawsuit is an example of why. I presume that Oracle's lawyers have looked carefully at the various promises and assurances that Sun made about its Java patents and have concluded Oracle has good arguments for why those promises don't apply to Android. I have no idea what those arguments are, but rarely do lawyers file a lawsuit without very good arguments already prepared. I hope Oracle's lawyers' arguments are wrong and they lose. But the fact that Oracle even has a credible argument that Android/Linux doesn't already have a patent license shows again that patent promises are just not enough.

    Miguel de Icaza used this opportunity to point out how the Microsoft C# promises are “better” by comparison, in his opinion. But, Brett Smith at FSF already found huge holes in those Microsoft promises that haven't been fixed. In fact, any company making these promises always tries to hide as much nasty stuff as it can, to convince the users that they are safe from patent aggression when they really aren't. That's why the Free Software community must demand simple, clear, and permanent royalty-free patent licenses for all patents any company might hold. We should accept nothing less. As mentioned above, those licenses could perhaps require that a certain Free Software copyright license, such as GPLv3-or-later, be used for any software that gets the advantage of the license. (i.e., I can certainly understand if companies don't want to accidentally grant such patent licenses to their proprietary software competitors).

    Indeed, it's particularly important that the licenses cover all of a company's patents, including any that might read on future improvements to the software. This lawsuit has clearly shown that even if patent pools exist for some subsets of patents for some subsets of Free Software, patent holders will either use other patents for aggression, or they'll assert patents in the patent pools against Free Software that's not part of the pool. In essence, we must assume that any for-profit company will become a patent troll eventually (they always do), and therefore any cross-licensing pools that don't include every patent possible for any possible Free Software will always be inadequate. So, the answer is simple: trust no software-patent-holding company unless it gives an explicit GPLv3-compatible license for all its patents.

    We Must End Software Patents

    The failure of the Bilski case to end software patents in the USA means much work lies ahead. The End Software Patents Wiki has some good material about this case as well as lots of other information related to software patents. There are now heavily funded for-profit corporate efforts that seek to convince the Free Software community that patent reform is enough. But it's not! For example, if you see presenters at FLOSS conferences claiming to have solutions to patent problems, ask them if their organization opposes all software patents, and ask them if their funders license all their patents freely for GPLv3-or-later software implementations. If you hear the wrong answers, then their motives and mission are suspect.

    Finally, I'd like to note that, in some sense, these patent battles help Free Software, because they may actually teach companies that the expense of holding software patents is not worth the risk of patent lawsuits. It's possible we've reached a moment in history where it'd be better if the Software Patent Cold War became a full Software Patent Nuclear War. Software freedom can survive that “nuclear winter”. I sometimes think that in the Free Software community, we may find ourselves left with just two choices: fifty more years of Patent Cold War (with lots of skirmishes like this one), or ten years of full-on patent war (after which companies would beg Congress to end software patents). Both outcomes are horrible until they're resolved, but the latter would reach resolution quicker. I often wonder which one is better for software freedom in the long term.

    But, no matter what happens next, the necessary position is: all software patents are bad for software freedom. Any entity that supports anything short of full abolition of software patents is working against software freedom.


    0I originally had PHP listed here, but jwildeboer argued that Zend Technologies, Ltd. might be a problem for PHP in the same way Oracle is for Java and Microsoft for C#. It's true that Zend is a software patent holder and was involved in the development of later PHP versions. I don't think the single-company-controlled software patent risks with PHP are akin to those of Java and C#, since Zend Technologies isn't the only entity involved in PHP's development, but certainly the other languages listed are likely preferable to PHP.

    Posted on Monday 16 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-13: GNOME Copyright Assignment Policy

    Vincent Untz announced and blogged today about the GNOME Copyright Assignment Policy and a longer guidelines document about the GNOME policy. I want to thank both Vincent and Michael Meeks for their work with me on this policy.

    As I noted in my blog last week, GUADEC really reminded me how great the GNOME community is. Therefore, it's with great pride that I was able to assist on this important piece of policy for the GNOME community.

    There are a lot of forces in the corporate side of Free Software right now that are aggressively trying to convince copylefted projects to begin assigning copyright of their code (or otherwise agree to CLAs) to corporations without any promises that the code will remain Free Software. We must resist this pressure: copyleft, when used correctly, is the force that keeps equality in the community, as I've written about before.

    I thank the GNOME Board of Directors for entrusting us to write the policy, and am glad they have adopted it.

    Posted on Friday 13 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-10: May They Make Me Superfluous

    The Linux Foundation announced today their own FLOSS license compliance program, which included the launch of a few software tools under a modified BSD license. They have also offered some training courses for those who want to learn how to comply.

    If this Linux Foundation (LF) program is successful, I may get something I've wished for since the first enforcement I ever worked on back in late 1998: I'd like to never do GPL enforcement again. I admit I talk a lot about GPL enforcement. It's indeed been a major center of my work for twelve years, but I can't say I've ever really liked doing it.

    By contrast, I have been hoping for years that someone would eventually come along and “put me out of the enforcement business”. Someday, I dream of opening up the <gpl@busybox.net> folder and having no new violation reports (BTW, those dreams usually become real-life nightmares, as I typically get two new violation reports each week). I also wish for the day that I don't have a backlogged queue of 200 or more GPL violations where neither source nor an offer for source has been provided. I hate that it takes so much time to resolve violations because of the sheer number that exist.

    I got into GPL enforcement so heavily, frankly, because so few others were doing it. To this day, there are basically three groups even bothering to enforce GPL on behalf of the community: Conservancy (with enforcement efforts led by me), FSF (with enforcement efforts led by Brett Smith), and gpl-violations.org (with enforcement efforts led by Harald Welte). Generally, GPL enforcement has been a relatively lonely world for a long time, mainly because it's boring, tedious and patience-trying work that only the most dedicated (masochistic?) want to spend their time doing.

    There are a dozen very important software-freedom-advancing activities that I'd rather spend my time doing. But as long as people don't respect the freedom of software users and ignore the important protections of copyleft, I have to continue doing GPL enforcement. Any effort like LF's is very welcome, provided that it reduces the number of violations.

    Of course, LF (as GPL educators) and Brett, Harald, and I (as GPL enforcers) will share the biggest obstacle: getting communication going with the actual violators. Fact is, people who know the LF exists or have heard of the GPL are likely to already be in compliance. When I find a new violation, it's nearly always someone who doesn't even know what's going on, and often doesn't even realize what their engineering team put into their firmware. If LF can reach these companies before they end up as a violation report emailed to me, I'll be as glad as can be. But it's a tall order.

    I do have a few minor criticisms of LF's program. First, I believe the directory of FLOSS Compliance Officers should be made publicly available. I think FLOSS Compliance Officers at companies should make themselves publicly known in the software freedom community so they can be contacted directly. As LF currently has it set up, you have to make a request of the LF to put you in touch with a company's compliance officer.

    Second, I admit I'd have liked to have been actively engaged in LF's process of forming this program. But, I presume that they wanted as much distance as possible from the world's most prolific GPL enforcer, and I can understand that. (I suppose there's a good cop/bad cop metaphor you could make here, but I don't like to think of myself as the GPL police.) I did offer to help LF on this back in April when they announced it at the Linux Collaboration Summit, but they haven't been in touch. Nevertheless, I'll hopefully meet with LF folks on Thursday at LinuxCon about their program. Also, I was invited a few months ago by Martin Michlmayr to join one subset of the project, the SPDX working group, and I've been giving it time whenever I can.

    But, as I said, those are only minor complaints. The program as a whole looks like it might do some good. I hope companies take advantage of it, and more importantly, I hope LF can reach out to the companies who don't know their name yet but have BusyBox/Linux embedded in their products.

    Please, LF, help free me from the grind of GPL enforcement work. I remain committed to enforcing the GPL until there are no violations left, but if LF can actually bring about an end to GPL violations sooner rather than later, I'll be much obliged. In a year, if I have an empty queue of GPL violations, I'll call LF's program an unmitigated success and gladly move on to other urgent work to advance software freedom.

    Posted on Tuesday 10 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-09: “Have To” Is a Relative Phrase

    I often hear it. I have to use proprietary software, people say. But usually, that's a justification and an excuse. Saying have to implies that they've been compelled by some external force to do it.

    It raises the question: Who's doing the forcing? I don't deny there might be occasions involving a certain amount of force. Imagine you're unemployed, and you've spent months looking for a job. You finally get one, but it generally doesn't have anything to do with software. After working a few weeks, your boss says you have to use a Microsoft Windows computer. Your choices are: use the software or be fired and spend months again looking for a job. In that case, if you told me you have to use proprietary software, I'd easily agree.

    But, imagine people who just have something they want to do, completely unrelated to their job, that is made convenient with proprietary software. In that case, there is no have to. One doesn't have to do a side project. So, it's a choice. The right phrase is wanted to, not have to.

    Saying that you're forced to do something when you really aren't is a failure to take responsibility for your actions. I generally don't think users of proprietary software are primarily to blame for the challenges of software freedom — nearly all the blame lies with those who write, market, and distribute proprietary software. However, I think that software users should be clear about why they are using the software. It's quite rare for someone to be compelled under threat of economic (or other) harm to use proprietary software. Therefore, only rarely is it justifiable to say you have to use proprietary software. In most cases, saying so is just making an excuse.

    As for being forced to develop proprietary software, I think that's rarer still. Back in 1991, when I first read the GNU Manifesto, I was moved by RMS' words about the issue:

    “Won't programmers starve?”

    I could answer that nobody is forced to be a programmer. Most of us cannot manage to get any money for standing on the street and making faces. But we are not, as a result, condemned to spend our lives standing on the street making faces, and starving. We do something else.

    But that is the wrong answer because it accepts the questioner's implicit assumption: that without ownership of software, programmers cannot possibly be paid a cent. Supposedly it is all or nothing.

    Well, even if it is all or nothing, RMS was actually right about this: we can do something else. By the mid 1990s, these words had inspired me to make a lifelong plan to make sure I'd never have to write or support proprietary software again. Despite being trained primarily as a computer scientist, I've spent much time building contingency plans to make sure I wouldn't be left with proprietary software support or development as my only marketable skill.

    During the 1990s, it wasn't clear that software freedom would have any success at all. It was a fringe activity; Cygnus was roughly the only for-profit company able to employ people to write Free Software. As such, I of course started learning the GCC codebase, figuring that I'd maybe someday get a job at Cygnus. I also started training as an American Sign Language translator, so I'd have a fallback career if I didn't get a job at Cygnus. Later, I learned how to play poker really well, figuring that in a worst case, I could end up as a professional poker player permanently.

    As it turned out, I've never had to rely fully on these fallback plans, primarily because I was hired by the FSF in 1999. For the last eleven years, I have been able to ensure that I've never had a job that required me to use, support, or write proprietary software, and I've worked only on activities that directly advanced software freedom. I admit I was often afraid that someday I might be unable to find a job, and I'd have to support, use or write proprietary software again. Yet, despite that fear, since 1997, I've never even been close to that.

    So, honestly, I just don't believe those who say they have to use proprietary software. Almost always, they chose to use it, because it's more convenient than the other things they'd have to do to avoid it. Or, perhaps, they'd rather write or use proprietary software than write or use no software at all, even when avoiding software entirely was a viable option.

    In summary, I want to be clear that I don't judge people who use proprietary software. I realize not everyone wants to live their life as I do — with cascading fallback plans to avoid using, writing or supporting proprietary software. I nevertheless think it's disingenuous to say you have to use, support or develop proprietary software. It's a choice, and every year that goes by, the choice gets easier, so the statement sounds more like an excuse all the time.

    Posted on Monday 09 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-05: GUADEC 2010: Rate Conferences by Inspiration Value

    Conferences are often ephemeral. I've been going to FLOSS conferences since before there were conferences specifically for the topic. In the 1990s, I'd started attending various USENIX conferences. Many of my career successes can be traced back to attending those conferences and meeting key leaders in the FLOSS world. While I know this is true generally, I can't really recall, without reviewing notes from specific conferences, what happened at them, and how specifically it helped me personally or FLOSS in general. I know they're important to me and to software freedom, but it's tough to connect the dots perfectly without looking in detail at what happened when.

    Indeed, for most of us, after decades, conferences start to run together. At GUADEC this year, I had at least two conversations of the nature: What city was that? What conference was that? Wait, what year was that?. And that was just discussions about past GUADECs specifically, let alone other events!

    For my part, after checking my records, I discovered that I hadn't been to a GUADEC since 2003. I've served as FSF's representative on the GNOME Advisory Board straight through from 2001 until today, but nevertheless I hadn't been able to attend GUADECs from 2004-2009. Thus, the 2010 GUADEC was somewhat of a reintroduction for me to the in-person GNOME community.

    With fresh eyes, what I saw had great impact on me. GNOME seems to be a vibrant, healthy community, with many contributors and incredible diversity in both for-profit and volunteer contributions. GNOME's growth and project diversity has greatly exceeded what I would have expected to see between 2004 and 2010.

    It's not often I go to a conference and am jealous that I can't be more engaged as a developer. I readily admit that I haven't coded regularly in more than a decade (and I often long to do it again). But, I usually talk myself out of it when I remember the difficulty of getting involved and of shepherding work upstream. It's a non-trivial job, and some don't even bother. The challenges are usually enough to keep the enticement at bay.

    Yet, I left GUADEC 2010 and couldn't see a downside in getting involved. I found myself on the flight back wishing I could do more, thinking through the projects I saw and wondering how I might be a coder again. There must be some time on the weekends somewhere, I thought, and while I'm not a GUI programmer, there's plenty of system stuff in GNOME like dbus and systemd; surely I can contribute there.

    Fact is, I've got too many other FLOSS-world responsibilities and I must admit I probably won't contribute code, despite wanting to. What's amazing, though, is that everything about GUADEC made me want to get more involved and there appeared no downside in doing so. There's something special about a conference (and a community) that can inspire that feeling in a hardened, decade-long conference attendee. I interact with a lot of FLOSS communities, and GNOME is probably the most welcoming of all.

    The rest of this post is a random bullet list of cool things that happened at GUADEC that I witnessed/heard/thought about:

    • There was a lot of debate and concern about the change in the GNOME 3 release schedule. I was impressed at the community unity on this topic when I heard a developer say in the hall: The change in GNOME 3 schedule is bad for me, but it's clearly the right thing for GNOME, so I support it. That's representative of the “all for one” and selfless attitude you'll find in the GNOME community.
    • Dave Neary presented a very interesting study on GNOME code contributions, which he was convinced to release under CC-By-SA. The study has caused some rancor in the community about who does or does not contribute to GNOME upstream, but generally speaking, I'm glad the data is out there, and I'm glad Dave's released it under a license that allows people to build on the work and reproduce and/or verify the results. (Dave's also assured me he'll release the tools and config files and all other materials under FaiF licenses as well; I'll put a link here when he has one.) Thing is, the most important and wonderful datum from Dave's study is that a plurality of GNOME contribution comes from volunteers: a full 23%! I think every FLOSS project needs a plurality of volunteer contribution to truly be healthy, and it seems GNOME has it.
    • My talk on GPLv3 was reasonably well received, notwithstanding some friendly kibitzing from Michael Meeks. There had been pushback in previous discussions in the GNOME community about GPLv3. It seems now, however, that developers are interested in the license. It's not my goal to force anyone to switch, but I hope that my talk and my participation in this recent LGPLv3 thread on desktop-list might help to encourage a slow-but-sure migration to GPLv3-or-later (for applications) and (GPLv2|LGPLv3-or-later) (for platform libraries) in GNOME. If folks have questions about the idea, I'm always happy to discuss them.
    • I enjoyed rooming with Brad Taylor. We did wonder, though, if the GNOME Travel Committee assigned us rooms by similar first names. (In fact, I was so focused on the fact that we shared the same first name that I previously typed Brad's last name wrong here!) I liked hearing about his TomBoy online project, Snowy. I'm obviously delighted to see adoption of AGPLv3, the license I helped create. I've promised Brad that I'll try to see if I can convince the org-mode community to use Snowy for its online storage as well.
    • Owen Taylor demoed and spoke about GNOME Shell 3.0. I don't use GUIs much myself, but I can see how GUI-loving users will really enjoy this excellent work.
    • I met Lennart Poettering and discussed with him in detail the systemd project. While I can see how this could be construed as a Canonical/Red Hat fight over the future of what's used for system startup, I still was impressed with Lennart's approach technically, and find it much healthier that his community isn't requiring copyright assignment.
    • Emmanuele Bassi's talk on Clutter was inspiring, as he delivered a heartfelt slide indicating that he'd overcome the copyright assignment requirements and that assignment is no longer required by Intel for Clutter upstream contributions. I like to believe that Vincent Untz's, Michael Meeks' and my work on the (yet to be ratified) GNOME Copyright Assignment Policy was a help to Emmanuele's efforts in this regard. However, it sounds to me like the outcome was primarily due to a lot of personal effort on Emmanuele's part internally to get Intel to DTRT. I thank him for this effort and congratulate him on that success.
    • It was great to finally meet Fabian Scherschel in person. He kindly brought me some gifts from Germany and I brought him some gifts from the USA (we prearranged it; I guess that's the “outlaw” version of gifts). Fab also got some good interviews for the Linux Outlaws podcast that he does with Dan Lynch. It seems that podcast has been heavily linked to in the GNOME community, which is really good for Dan and Fab and for GNOME, I think.

    That's about all the random thoughts and observations I have from GUADEC. The conference was excellent, and I think I simply must re-add it to my “must attend each year” list.

    Finally, I want to thank the GNOME Foundation for sponsoring my travel costs. It allowed me to take some vacation time from my day job to attend and participate in GUADEC.

    Posted on Thursday 05 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-08-03: More GPL Enforcement Progress

    LWN is reporting a GPL enforcement story that I learned about last week while at GUADEC (excellent conference, BTW; blog post on that later this week). I wasn't sure if it was really of interest to everyone, but since it's hit the press, I figured I'd write a brief post to mention it.

    As many probably know, I'm president of the Software Freedom Conservancy, which is the non-profit organizational home of the BusyBox project. As part of my role at Conservancy, I help BusyBox in its GPL enforcement efforts. Specifically and currently, the SFLC is representing Conservancy in litigation against a number of defendants who have violated the GPL and were initially unresponsive to Conservancy's attempts to bring them into compliance with the terms of the license.

    A few months ago, one of those defendants, Westinghouse Digital Electronics, LLC, stopped responding to issues regarding the lawsuit. On Conservancy's behalf, SFLC asked the judge to issue a default judgment against them. A “default” means what it looks like: Conservancy asked to “win by default” since Westinghouse stopped showing up. And, last week, Conservancy was granted a default judgment against Westinghouse, which included an injunction to stop their GPL-non-compliant distributions of BusyBox.

    “Injunctive Relief”, as the lawyers call it, is a really important thing for GPL enforcement. Obviously our primary goal is full compliance with the GPL, which means giving the complete and corresponding source code (C&CS, as I tend to abbreviate it) to all those who received binary distributions of the software. Unfortunately, in some cases (for example, when a company simply won't cooperate in the process despite many efforts to convince them to do so), the only option is to stop further distribution of the violating software. As many parts of the GPL itself point out, it's better to not have software distributed at all, if it's only being distributed as (de facto) proprietary software.

    I'm really glad that a judge has agreed that the GPL is important enough a license to warrant an injunction on out-of-compliance distribution. This is a major step forward in GPL enforcement in the USA. (Please note that Harald Welte has had similar successes in Germany in the past, and deserves credit and kudos for getting this done the first time in the world. This success follows in his footsteps.)

    Posted on Tuesday 03 August 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

July

  • 2010-07-15: At Least Motorola Admits It

    I've written before about the software freedom issues inherent with Android/Linux. Summarized shortly: the software freedom community is fortunate that Google released so much code under Free Software licenses, but since most of the code in the system is Apache-2.0 licensed, we're going to see a lot of proprietarized, non-user-upgradable versions. In fact, there's no Android/Linux system that's fully Free Software yet. (That's why Aaron Williamson and I try to keep the Replicant project going. We've focused on the HTC Dream and the NexusOne, since they are the mobile devices closest to working with only Free Software installed, and because they allow the users to put their own firmware on the device.)

    I was therefore intrigued to discover last night (via mtrausch) a February blog post by Lori Fraleigh of Motorola, wherein Fraleigh clarifies Motorola's opposition to software freedom for its Android/Linux users:

    We [Motorola] understand there is a community of developers interested in … Android system development … For these developers, we highly recommend obtaining either a Google ADP1 developer phone or a Nexus One … At this time, Motorola Android-based handsets are intended for use by consumers.

    I appreciate the fact that Fraleigh and Motorola are honest in their disdain for software developers. Unlike Apple — who tries to hide how developer-unfriendly its mobile platform is — Motorola readily admits that they seek to leave developers as helpless as possible, refusing to share the necessary tools that developers need to upgrade devices and to improve themselves, their community, and their software. Companies like Motorola and Apple both seek to squelch the healthy hacker tendency to make technology better for everyone. Now that I've seen Fraleigh's old blog post, I can at least give Motorola credit for full honesty about these motives.

    I do, however, find the implication of Fraleigh's words revolting. People who buy the devices, in Motorola's view, don't deserve the right to improve their technology. By contrast, I believe that software freedom should be universal and that no one need be a “mere consumer” of technology. I believe that every technology user is a potential developer who might have something to contribute but obviously cannot if that user isn't given the tools to do so. Sadly, it seems, Motorola believes the general public has nothing useful to contribute, so the public shouldn't even be given the chance.

    But this attitude is typical of proprietary software companies, so there are actually no revelations on that point. Of more interest is how Motorola was able to do this, given that Android/Linux (at least most of it) is Free Software.

    Motorola's ability to take these actions is a consequence of a few licensing issues. First, most of the Android system is under the Apache-2.0 license (or, in some cases, an even more permissive license). These licenses allow Motorola to make proprietary versions of what Google released and sell them without source code or the ability for users to install modified versions. That license decision is lamentable (but expected, given Google's goals for Android).

    The even more lamentable licensing issue here is regarding Linux's license, the GPLv2. Specifically, Fraleigh's post claims:

    The use of open source software, such as the Linux kernel … in a consumer device does not require the handset running such software to be open for re-flashing. We comply with the licenses, including GPLv2.

    I should note that, other than Fraleigh's assertion quoted above, I have no knowledge one way or another of whether Motorola is compliant with GPLv2 on its Android/Linux phones. I don't own one, have no plans to buy one, and therefore I'm not in receipt of an offer for source regarding the devices. I've also received no reports from anyone regarding possible non-compliance. In fact, I'd love to confirm their compliance: please get in touch if you have a Motorola Android/Linux phone and have attempted to install a newly compiled executable of Linux onto your phone.

    I'm specifically interested in the installation issue because GPLv2 requires that any binary distribution of Linux (such as one on telephone hardware) include both the source code itself and the scripts to control compilation and installation of the executable. So, if Motorola wrote any helper programs or other software that installs Linux onto the phones, then such software, under GPLv2, is a required part of the complete and corresponding source code of Linux and must be distributed to each buyer of a Motorola Android/Linux phone.
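    For those who do such reviews, here is a hypothetical sketch, in Python, of the kind of first mechanical pass one might make over a candidate C&CS tarball for such a device. The file-name patterns are illustrative guesses, not a definitive test, since real releases vary widely:

        import tarfile

        def first_pass_ccs_check(tarball_path):
            """Heuristic first pass over a candidate C&CS tarball.
            GPLv2 section 3 requires "the scripts used to control
            compilation and installation of the executable", so look
            for both build machinery and install/flash helpers."""
            with tarfile.open(tarball_path) as tar:
                names = tar.getnames()
            has_build = any("Makefile" in n or "build" in n for n in names)
            has_install = any("install" in n or "flash" in n for n in names)
            return has_build and has_install

    Of course, passing a check like this proves nothing by itself; actually building the source and installing the resulting executable onto the device is the only real test.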

    If you're surprised by that last paragraph, you're probably not alone. I find that many are confused regarding this GPLv2 nuance. I believe the confusion stems from discussions during the GPLv3 process about this specific requirement. GPLv3 does indeed expand the requirement for the scripts to control compilation and installation of the executable into the concept of Installation Information. Furthermore, GPLv3's Installation Information is much more expansive than merely requiring helper software programs and the like. GPLv3's Installation Information includes any material, such as an authorization key, that is necessary for installation of a modified version onto the device.

    However, merely because GPLv3 expanded the installation-information requirements does not mean GPLv2 lacks such a requirement. In fact, in my reading of GPLv2 in comparison to GPLv3, the only effective difference between the two on this point relates to cryptographic device lock-down. I do admit that under GPLv2, if you give all the required installation scripts, you could still use cryptography to prevent those scripts from functioning without an authorization key. Some vendors do this, and that's precisely why GPLv3 is written the way that it is: we'd observed such lock-down occurring in the field, and identified that behavior as a bug in GPLv2 that is now closed with GPLv3.

    However, because of all that hype about GPLv3's new Installation Information definition, many simply forgot that the GPLv2 isn't silent on the issue. In other words, GPLv3's verbosity on the subject led people to minimize the important existing requirements of GPLv2 regarding installation information.

    As regular readers of this blog know, I've spent much of my time for the last 12 years doing GPL enforcement. Quite often, I must remind violators that GPLv2 does indeed require the scripts to control compilation and installation of the executable, and that candidate source code releases missing the scripts remain in violation of GPLv2. I sincerely hope that Android/Linux redistributors haven't forgotten this.

    I have one final and important point to make regarding Motorola's February statement: I've often mentioned that the mobile industry's opposition to GPLv3 and to user-upgradable devices exists for the industry's own reasons, and has nothing to do with regulators or other outside entities preventing them from releasing such software. In their blog post, Motorola tells us quite clearly that the community of developers interested in … experimenting with Android system development and re-flashing phones … [should obtain] either a Google ADP1 developer phone or a Nexus One, both of which are intended for these purposes. In other words, Motorola tacitly admits that it's completely legal and reasonable for the community to obtain such telephones, and that, in fact, Google sells such devices. Motorola was not required to put lock-down restrictions in place; rather, the company chose to prohibit its users in this way. On this point, Google chose to treat its users with respect, allowing them to install modified versions. Motorola, by contrast, chose to make Android/Linux as close to Apple's iPhone as they could get away with legally.

    So, the next time a mobile company tries to tell you that it just can't abide by GPLv3 because some third party (the FCC is their frequent scapegoat) prohibits it, you should call them on their FUD. Point out that Google sells phones on the open market that provide all Installation Information that GPLv3 might require. (In other words, even if Linux were GPLv3'd, Android/Linux on the Nexus One and HTC Dream would be a GPLv3-compliant distribution.) Meanwhile, at least one such company, Motorola, has admitted its solitary reason for avoiding GPLv3: the company just doesn't believe users deserve the right to install improved versions of their software. At least they admit their contempt for their customers.

    Update (same day): jwildeboer pointed me to a few posts in the custom ROM and jailbreaking communities about their concerns regarding Motorola's new offering, the Droid-X. Some commenters there point out that eventually, most phones get jailbroken or otherwise allow user control. However, the key point of the CrunchGear User Manifesto is a clear and good one: no company or person has the right to tell you that you may not do what you like with your own property. This is a point akin to, and perhaps essential to, software freedom. It doesn't really matter if you can figure out how to hack a device; what's important is that you not give your money to the company that prohibits such hacking. For goodness sake, people, why don't we all use ADP1s and Nexus Ones and be done with this?

    Updated (2010-07-17): It appears that cryptographic lock-down on the Droid-X is confirmed (thanks to rao for the link). I hope everyone will boycott all Motorola devices because of this, especially given that there are Android/Linux devices on the market that aren't locked down in this way.

    BTW, in Motorola's answer to Engadget on this, we see they are again subtly sending FUD that the lock-down is somehow legally required:

    Motorola's primary focus is the security of our end users and protection of their data, while also meeting carrier, partner and legal requirements.
    I agree the carriers and partners probably want such lock-down, but I'd like to see their evidence that there is a legal restriction that requires it. They present none.

    Meanwhile, they also state that such cryptographic lock-down is the only way they know how to secure their devices:

    Checking for a valid software configuration is a common practice within the industry to protect the user against potential malicious software threats.

    Pity that Motorola engineers aren't as clueful as the Google and HTC engineers who designed the ADP1 and Nexus One.

    Posted on Thursday 15 July 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-07-07: Proprietary Software Licensing Produces No New Value In Society

    I sought out the quote below when Chris Dodd paraphrased it on Meet The Press on 25 April 2010. (I've been, BTW, slowly but surely working on this blog post since that date.) Dodd was quoting Frank Rich, who wrote the following, referring to the USA economic system (and its recent collapse):

    As many have said — though not many politicians in either party — something is fundamentally amiss in a financial culture that thrives on “products” that create nothing and produce nothing except new ways to make bigger bets and stack the deck in favor of the house. “At least in an actual casino, the damage is contained to gamblers,” wrote the financial journalist Roger Lowenstein in The Times Magazine last month. This catastrophe cost the economy eight million jobs.

    I was drawn to this quote for a few reasons. First, as a poker player, I've spent some time thinking about how “empty” the gambling industry is. Nothing is produced; no value for humans is created; it's just an exchange of money for things that don't actually exist. I've been considering that issue regularly since around 2001 (when I started playing poker seriously). I ultimately came to a conclusion not too different from Frank Rich's point: since there is a certain “entertainment value”, and since the damage is contained to those who choose to enter the casino, I'm not categorically against poker or gambling in general, nor do I think they are immoral. However, I also don't believe gambling has any particularly important value in society, either. In other words, I don't think people have an inalienable right to gamble, but I also don't think there is any moral reason to prohibit casinos.

    Meanwhile, I've also spent some time applying this idea of creating nothing and producing nothing to the proprietary software industry. Proprietary licenses, in many ways, are actually not all that different from these valueless financial transactions. Initially, there's no problem: someone writes software and is paid for it; that's the way it should be. Creation of new software is an activity that should absolutely be funded: it creates something new and valuable for others. However, proprietary licenses are designed specifically to allow a single act of programming to generate new revenue over and over again. In this aspect, proprietary licensing is akin to selling financial derivatives: the actual valuable transaction is buried well below the non-existent financial construction above it.

    I admit that I'm not a student of economics. In fact, I rarely think of software in terms of economics, because, generally, I don't want economic decisions to drive my morality nor that of our society at large. As such, I don't approach this question with an academic economic slant, but rather, from personal economic experience. Specifically, I learned a simple concept about work when I was young: workers in our society get paid only for the hours that they work. To get paid, you have to do something new. You just can't sit around and have money magically appear in your bank account for hours you didn't work.

    I always approached software with this philosophy. I've often been paid for programming, but I've been paid directly for the hours I spent programming. I never even considered it reasonable to be paid again for programming I did in the past. How is that fair, just, or quite frankly, even necessary? If I get a job building a house, I can't get paid every day someone uses that house. Indeed, even if I built the house, I shouldn't get a royalty paid every time the house is resold to a new owner0. Why should software work any differently? Indeed, there's even an argument that software, since it's so much more trivial to copy than a house, should be available gratis to everyone once it's written the first time.

    I recently heard (for the first time) an old story about a well-known Open Source company (which no longer exists, in case you're wondering). As the company grew larger, the company's owners were annoyed that the company could only bill the clients for the hours they worked. The business was going well, and they even had more work than they could handle because of the unique expertise of their developers. The billable rates covered the cost of the developers' salaries plus a reasonable profit margin. Yet, the company executives wanted more; they wanted to make new money even when everyone was on vacation. In essence, having all the new, well-paid programming work in the world wasn't enough; they wanted the kinds of obscene profits that can only be made from proprietary licensing. Having learned this story, I'm pretty glad the company ceased to exist before they could implement their make money while everyone's on the beach plan. Indeed, the first order of business in implementing the company's new plan was, not surprisingly, developing some new from-scratch code not covered by GPL that could be proprietarized. I'm glad they never had time to execute on that plan.

    I'll just never be fully comfortable with the idea that workers should keep getting paid for work they already did. Work is only valuable if it produces something new that didn't exist in the world before the work started, or solves a problem that had yet to be solved. Proprietary licensing and financial bets on market derivatives have something troubling in common: they can make a profit for someone without requiring that someone to do any new work. Any time a business moves away from actually producing something new of value for a real human being, I'll always question whether the business remains legitimate.

    I've thus far ignored one key point in the quote that began this post: “At least in an actual casino, the damage is contained to gamblers”. Thus, for this “valueless work” idea to apply to proprietary licensing, I had to consider (a) whether or not the problem is sufficiently contained, and (b) whether or not software is, like gambling, merely an entertainment activity.

    I've pointed out that I'm not opposed to the gambling industry, because the entertainment value exists and the damage is contained to people who want that particular entertainment. To avoid the stigma associated with gambling, I can also make a less politically charged example such as the local Chuck E. Cheese, a place I quite enjoyed as a child. One's parent or guardian goes to Chuck E. Cheese to pay for a child's entertainment, and there is some value in that. If you had an issue with Chuck E. Cheese's operation, it'd be easy to just ignore it and not take your children there, finding some other entertainment. So, the question is, does proprietary software work the same way, and is it therefore not too damaging?

    I think the excuse doesn't apply to proprietary software for two reasons. First, the damage is not sufficiently contained, particularly for widely used software. It is, for example, roughly impossible to get a job that doesn't require the employee to use some proprietary software. Imagine if we lived in a society where you weren't allowed to work for a living if you didn't agree to play Blackjack with a certain part of your weekly salary. Of course, this situation is not fully analogous, but the fundamental principle applies: software is ubiquitous enough in industrialized society that it's roughly impossible to avoid encountering it in daily life. Therefore, the proprietary software situation is not adequately contained, and is difficult for individuals to avoid.

    Second, software is not merely a diversion. Our society has changed enough that people cannot work effectively in the society without at least sometimes using software. Therefore, the “entertainment” part of the containment theory does not properly apply1, either. If citizens are de-facto required to use something to live productively, it must have different rules and control structures around it than wholly optional diversions.

    Thus, this line of reasoning gives me yet another reason to oppose proprietary software: proprietary licensing is simply a valueless transaction. It creates a burden on society and gives no benefit, other than a financial one to those granted the monopoly over that particular software program. Unfortunately, there nevertheless remain many who want that level of control, because one fact cannot be denied: the profits are larger.

    For example, Mårten Mickos recently argued in favor of these sorts of large profits. He claims that to benefit massively from Open Source (i.e., to get really rich), business models like “Open Core” are necessary. Mårten's argument, and indeed most pro-Open-Core arguments, rely on the following fundamental assumption: for FLOSS to be legitimate, it must allow for the same level of profits as proprietary software. This assumption, in my view, is faulty. It's always true that you can make bigger profits by ignoring morality. Factories can easily make more money by completely ignoring environmental issues; strip mining is always very profitable, after all. However, as a society, we've decided that the environment is worth protecting, so we have rules that do limit profit maximization because a more important goal is served.

    Software freedom is another principle of this type. While you can make a profit with community-respecting FLOSS business models (such as service, support and freely licensed custom modifications on contract), it's admittedly a smaller profit than can be made with Open Core and proprietary licensing. But that greater profit potential doesn't legitimize such business models, just as it doesn't legitimize strip mining or gambling on financial derivatives.

    Update: Based on some feedback that I got, I felt it was important to make clear that I don't believe this argument alone can create a unified theory that shows why software freedom should be an inalienable right for all software users. The lack of value that proprietary licensing brings to society is just one more factor to consider in a more complete discussion about software freedom.

    Update: Glynn Moody wrote a blog post that quoted from this post extensively and made some interesting comments on it. There's some interesting discussion in the blog comments there on his site; perhaps because so many people hate that I only do blog comments on identi.ca (which I do, BTW, because it's the only online forum where I'm assured that I'll actually read and respond).


    0I realize that some argue that you can buy a house, then rent it to others, and evict them if they fail to pay. Some might argue further that owners of software should get this same rental power. The key difference, though, is that the house owner can't really make full use of the house when it's being rented. The owner's right to rent it to others, therefore, is centered around the idea that the owner loses some of their personal ability to use the house while the renters are present. This loss of use never happens with software.

    1You might be wondering, Ok, so if it's pure entertainment software, is it acceptable for it to be proprietary? I have often said: if all published and deployed software in the world were guaranteed Free Software except for video games, I wouldn't work on the cause of software freedom anymore. Ultimately, I am not particularly concerned about the control structures in our culture that exist for pure entertainment. I suppose there's some line to be drawn between art/culture and pure entertainment/diversion, but considerations on differentiating control structures on that issue are beyond the scope of this blog post.

    Posted on Wednesday 07 July 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

June

  • 2010-06-30: Post-Bilski Steps for Anti-Software-Patent Advocates

    Lots of people are opining about the USA Supreme Court's ruling in the Bilski case. Yesterday, I participated in an oggcast with the folks at SFLC. In that oggcast, Dan Ravicher explained most of the legal details of Bilski; I could never cover them as well as he did, and I wouldn't even try.

    Anyway, as a non-lawyer, I'm pretty much concerned only with the forward-looking policy questions. However, to briefly look back at how our community responded to this Bilski situation over the last 18 months: it seems similar to what happened while the Eldred case was working its way to the Supreme Court. In the months preceding both Eldred and Bilski, there seemed to be a mass hypnosis that the Supreme Court would actually change copyright law (Eldred) or patent law (Bilski) to make it better for the freedom of computer users.

    In both cases, that didn't happen. There was admittedly less of that giddy optimism before Bilski than there was before Eldred, but the ultimate outcome for computer users is roughly the same in both cases: as with Eldred, we're left with the same policy situation we had before Bilski ever started making its way through the various courts. As near as I can tell from what I've learned, the entire “Bilski thing” appears to be a no-op. In short, as before, the Patent Office sometimes can and will deny applications that it determines are only abstract ideas, and the Supreme Court has now confirmed that the Patent Office can reject such an application if the Patent Office knows an abstract idea when it sees it. Nothing has changed regarding most patents that are granted every day, including those that read on software. Those of us that oppose software patents continue to believe that software algorithms are indeed merely abstract ideas and pure mathematics and shouldn't be patentable subject matter. The governmental powers still seem to disagree with us, or, at least, just won't comment on that question.

    Looking forward, my largest concern, from a policy perspective, is that the “patent reform” crowd, who claim to be the allies of the anti-software-patent folks, will use this decision to declare that the system works. Bilski's patent was ultimately denied, but on grounds that leave us no closer to abolishing software patents. Patent reformists will say: Well, invalid patents get denied, leaving space for the valid ones. Those valid ones, they will say, do and should include lots of patents that read on software. But only the really good ideas should be patented, they will insist.

    We must not yield to the patent reformists, particularly at a time like this. (BTW, be sure to read RMS' classic and still relevant essay, Patent Reform Is Not Enough, if you haven't already.)

    Since Bilski has given us no new tools for abolishing software patents, we must redouble efforts with tools we already have to mitigate the threat patents pose to software freedom. Here are a few suggestions, all of which I think are implementable by the average developer, that will keep up the fight against software patents, or at least mitigate their impact:

    • License your software using the AGPLv3, GPLv3, LGPLv3, or Apache-2.0. Among the copyleft licenses, AGPLv3 and GPLv3 offer the best patent protections; LGPLv3 offers the best among the weak copyleft licenses; Apache License 2.0 offers the best patent protections among the permissive licenses. These are the licenses we should gravitate toward, particularly since multiple companies with software patents are regularly attacking Free Software. At least when such companies contribute code to projects under these licenses, we know those particular codebases will be safe from that particular company's patents. (A sample license notice appears just after this list.)
    • Demand real patent licenses from companies, not mere promises. Patent promises are not enough0. The Free Software community deserves to know it has real patent licenses from companies that hold patents. At the very least, we should demand unilateral patent licenses for all their patents perpetually for all possible copylefted code (i.e., companies should grant, ahead of time, the exact same license that the community would get if the company had contributed to a yet-to-exist GPLv3'd codebase)1. Note further that some companies that claim to be part of the FLOSS community haven't even given the (inadequate-but-better-than-nothing) patent promises. For example, BlackDuck holds a patent related to FLOSS, but despite saying it would consider at least a patent promise, has failed to make even that minimal effort.
    • Support organizations/efforts that work to oppose and end software patents. In particular, be sure that the efforts you support are not merely “patent reform” efforts hidden behind anti-software patent rhetoric. Here are a few initiatives that I've recently seen doing work regarding complete abolition of software patents. I suggest you support them (with your time or dollars):
    • Write your legislators. This never hurts. In the USA, it's unlikely we can convince Congress to change patent law, because there are just too many lobbying dollars from those big patent-holding companies (e.g., the same ones that wrote those nasty amicus briefs in Bilski). But, writing your Senators and Congresspeople once a year to remind them of your opposition to patents that read on software simply can't hurt, and may theoretically help a tiny bit. Now would be a good time to do it, since you can mention how the Bilski decision convinced you there's a need for legislative abolition of software patents. Meanwhile, remember, it's even better if you show up at political debates during election season and ask these candidates to oppose software patents!
    • Explain to your colleagues why software patents should be abolished, particularly if you work in computing. Software patent abolition is actually a broad spectrum issue across the computing industry. Only big and powerful companies benefit from software patents. The little guy — even the little guy proprietary developer — is hurt by software patents. Even if you can't convince your colleagues who write proprietary software that they should switch to writing Free Software, you can instead convince them that software patents are bad for them personally and for their chances to succeed in software. Share the film, Patent Absurdity, with them and then discuss the issue with them after they've viewed it. Blog, tweet, dent, and the like about the issue regularly.
    • (added 2010-07-01 on tmarble's suggestion) Avoid products from pro-software-patent companies. This is tough to do, and it's why I didn't call for an all-out boycott. Most companies that make computers are pro-software-patent, so it's actually tough to buy a computer (or even components for one) without buying from a pro-software-patent company. However, avoiding the companies who are most egregious in their patent aggression is easy: starting with avoiding Apple products is a good first step (there are plenty of other reasons to avoid Apple anyway). Microsoft would be next on the list, since they specifically use software patents to attack FLOSS projects. Those are likely the big two to avoid, but always remember that all large companies with proprietary software products actively enforce patents, even if they don't file lawsuits. In other words, go with the little guy if you can; it's more likely to be a patent-free zone.
    • If you have a good idea, publish it and make sure the great idea is well described in code comments and documentation, and that everything is well archived by date. I put this one last on my list, because it's more of a help for the software patent reformists than it is for the software patent abolitionists. Nevertheless, sometimes, patents will get in the way of Free Software, and it will be good if there is strong prior art showing that the idea was already thought of, implemented, and put out into the world before the patent was filed. But, fact is, the “valid” software patents with no prior art are a bigger threat to software freedom. The stronger the patent, the worse the threat, because it's more likely to be innovative, new technology that we want to implement in Free Software.
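
    As promised in the first suggestion above, here is what explicitly applying one of those licenses looks like in practice: the FSF's standard license notice, here for AGPLv3, at the top of a source file. The file, project, and author names are placeholders:

        # server.py: part of a hypothetical AGPLv3'd project.
        #
        # Copyright (C) 2010  Jane Hacker <jane@example.org>
        #
        # This program is free software: you can redistribute it and/or modify
        # it under the terms of the GNU Affero General Public License as
        # published by the Free Software Foundation, either version 3 of the
        # License, or (at your option) any later version.
        #
        # This program is distributed in the hope that it will be useful,
        # but WITHOUT ANY WARRANTY; without even the implied warranty of
        # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
        # GNU Affero General Public License for more details.
        #
        # You should have received a copy of the GNU Affero General Public
        # License along with this program.  If not, see
        # <http://www.gnu.org/licenses/>.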

    I sat and thought of what else I could add to this list that individuals can do to help abolish software patents. I was sad that these were the only six things that I could collect, but that's all the more reason to do these six things in earnest. The battle for software freedom for all users is not one we'll win in our lifetimes. It's possible that abolition of software patents will take a generation as well. Those of us that seek this outcome must be prepared for patience and lifelong, diligent work so that the right outcome happens, eventually.


    0 Update: I was asked for a longer write up on software patent licenses as compared to mere “promises”. Unfortunately, I don't have one, so the best I was able to offer was the interview I did on Linux Outlaws, Episode 102, about Microsoft's patent promise. I've also added a TODO to write something up more completely on this particular issue.

    1 I am not leaving my permissively-license-preferring friends out of this issue without careful consideration. Specifically, I just don't think it's practical or even fair to ask companies to license their patents for all permissively-licensed code, since that would be the same as licensing to everyone, including their proprietary software competitors. An ahead-of-time perpetual license to practice the teachings of all the company's patents under AGPLv3 basically makes sure that code that's eternally Free Software will also eternally be patent-licensed from that company, even if the company never contributes to the AGPLv3'd codebase. Anyone trying to make proprietary code that infringed the patent wouldn't have benefit of the license; only Free Software users, distributors and modifiers would have the benefit. If a company supports copyleft generally, then there is no legitimate reason for the company to refuse such a broad license for copyleft distributions and deployments.

    Posted on Wednesday 30 June 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-06-23: New Ground on Terminology Debate?

    (These days,) I generally try to avoid the well-known terminology debates in our community. But, if you hang around this FLOSS world of ours long enough, you just can't avoid occasionally getting into them. I found myself in one this afternoon that spanned three identi.ca threads. I had some new thoughts that I've shared today (and even previously) on my identi.ca microblog. I thought it might be useful to write them up in one place rather than scattered across a series of microblog statements.

    I gained my first new insight into the terminology issues when I had dinner with Larry Wall in early 2001 after my Master's thesis defense. It was the first time I talked with him about these issues of terminology, and he said that it sounded like a good place to apply what he called the “golden rule of network protocols”: Always be conservative in what you emit and liberal in what you accept. I've recently noted again that's a good rule to follow regarding terminology.

    More recently, I've realized that the FLOSS community suffers here, likely due to our high concentration of software developers and engineers. Precision in communication is a necessary component of the lives of developers, engineers, computer scientists, or anyone in a highly technical field. In our originating fields, lack of precise and well-understood terminology can cause bridges to collapse or the wrong software to get installed and crash mission-critical systems. Calling x by the name y sometimes causes mass confusion and failure. Indeed, earlier this week, I watched a PBS special, The Pluto Files, where Neil deGrasse Tyson discussed the intense debate about the planetary status of Pluto. I was actually somewhat relieved that a subtle point regarding categorical naming is just as contentious in another area outside my chosen field. Watching the “what constitutes a planet” debate showed me that FLOSS hackers are no different than most other scientists in this regard. We all take quite a bit of pride in our careful (sometimes pedantic) care in terminology and word choice; I know I do, anyway.

    However, on the advocacy side of software freedom (the part that isn't technical), our biggest confusion sometimes stems from an assumption that other people's word choice is as necessarily as precise as ours. Consider the phrase “open source”, for example. When I say “open source”, I am referring quite exactly to a business-focused, apolitical and (frankly) amoral0 interest in, adoption of, and contribution to FLOSS. Those who coined the term “open source” were right about at least one thing: it's a term that fits well with for-profit interests who might otherwise see software freedom as too political.

    However, many non-business users and developers that I talk to quite clearly express that they are into this stuff precisely because there are principles behind it: namely, that FLOSS seeks to make a better world by giving important rights to users and programmers. Often, they are using the phrase “open source” as they express this. I of course take the opportunity to say: it's because those principles are so important that I talk about software freedom. Yet, it's clear they already meant software freedom as a concept, and just had some sloppy word choice.

    Fact is, most of us are just plain sloppy with language. Precision isn't everyone's forte, and as a software freedom advocate (not a language usage advocate), I see my job as making sure people have the concepts right even if they use words that don't make much sense. There are times when the word choices really do confuse the concepts, and there are other times when they don't. Sometimes, it's tough to identify which of the two is occurring. I try to figure it out in each given situation, and if I'm in doubt, I just simplify to the golden rule of network protocols.

    Furthermore, I try to have faith in our community's intelligence. Regardless of how people get drawn into FLOSS, be it from the moral software freedom arguments or the technical-advantage-only open source ones, I don't think people stop listening immediately upon their arrival in our community. I know this even from my own adoption of software freedom: I came for the Free as in Price, but I stayed for the Free as in Freedom. It's only because I couldn't afford a SCO Unix license in 1992 that I installed GNU/Linux. But, I learned within just a year why software freedom was what mattered most.

    Surely, others have a similar introduction to the community: either drawn in by zero-cost availability or the technical benefits first, but still very interested to learn about software freedom. My goal is to reach those who have arrived in the community. I therefore try to speak almost constantly about software freedom, why it's a moral issue, and why I work every day to help either reduce the amount of proprietary software, or increase the amount of Free Software in the world. My hope is that newer community members will hear my arguments, see my actions, and be convinced that a moral and ethical commitment to software freedom is the long-lasting principle worth adopting. In essence, I seek to lead by example as much as possible.

    Old arguments are a bit too comfortable. We already know how to have them on autopilot. I admit myself that I enjoy having an old argument with a new person: my extensive practice often yields an oratorical advantage. But, that crude drive is too much about winning the argument and not enough about delivering the message of software freedom. Occasionally, a terminology discussion is part of delivering that message, but my terminology-debate toolbox has “use with care” written on it.


    0 Note that here, too, I took extreme care with my word choice. I mean specifically amorality — merely an absence of any moral code in particular. I do not, by any stretch, mean immoral.

    Posted on Wednesday 23 June 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-06-11: Where Are The Bytes?

    A few years ago, I was considering starting a Free Software project. I never did start that one, but I learned something valuable in the process. When I thought about starting this project, I did what I usually do: ask someone who knows more about the topic than I do. So I phoned my friend Loïc Dachary, who has started many Free Software projects, and asked him for advice.

    Before I could even describe the idea, Loïc said: you don't have a URL? I was taken aback; I said: but I haven't started yet. He said: of course you have, you're talking to me about it, so you've started already. The most important thing you can tell me, he said, is Where are the bytes?

    Loïc explained further: Most projects don't succeed. The hardest part about a software freedom project is carrying it far enough so it can survive even if its founders quit. Therefore, under Loïc's theory, the most important task at the project's start is to generate those bytes, in hopes those bytes find their way to a group of developers who will help keep the project alive.

    But, what does he mean by “bytes”? He means, quite simply, that you have to core dump your thinking, your code, your plans, your ideas, just about everything on a public URL that everyone can take a look at. Push bytes. Push them out every time you generate a few. It's the only chance your software freedom project has.
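
    For the hackers reading this, here is a deliberately simplistic sketch of the habit I mean, in Python, assuming git and a publicly readable remote already named origin; the commit message is whatever you have, not something polished:

        #!/usr/bin/env python3
        # "Push bytes": commit whatever exists, however rough, and publish it
        # immediately to the public URL. Assumes a git repository whose origin
        # remote points at a publicly readable host.
        import subprocess
        import sys

        def push_bytes(message):
            subprocess.run(["git", "add", "--all"], check=True)
            # Half-finished code, design notes, TODO lists: commit them all.
            subprocess.run(["git", "commit", "-m", message], check=True)
            # Publication, not polish, keeps a young project alive.
            subprocess.run(["git", "push", "origin", "master"], check=True)

        if __name__ == "__main__":
            push_bytes(" ".join(sys.argv[1:]) or "WIP: pushing bytes")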

    The first goal of a software freedom project is to gain developers. No project can have long-term success without a diverse developer base. The problem is, the initial development work and project planning too often ends up trapped in the head of a few developers. It's human nature: How can I spend my time telling everyone about what I'm doing? If I do that, when will I actually do anything? Successful software freedom project leaders resist this human urge and do the seemingly counterintuitive thing: they dump their bytes on the public, even if it slows them down a bit.

    This process is even more essential in the network age. If someone wants to find a program that does a job, the first tool is a search engine: to find out if someone else has done it yet. Your project's future depends completely on every such search helping developers find your bytes.

    In early 2001, I asked Larry Wall, of all the projects he'd worked on, which was the hardest. His answer was quick: when I was developing the first version of perl5, Larry said, I felt like I had to code completely alone and just make it work by myself. Of course, Larry's a very talented guy who can make that happen: generate something by himself that everyone wanted to use. While I haven't asked him what he'd do in today's world if he was charged with a similar task, I can guess — especially given how public the Perl6 process has been — that he'd instead use the new network tools, such as DVCS, to push his bytes early and often and seek to get more developers involved early.0

    Admittedly, most developers' first urge is to hide everything. We'll release it when it's ready, is often heard, or — even worse — Our core team works so well together; it'll just slow us down to make things public now. Truth is, this is a dangerous mixture of fear and narcissism — the very same drives that lead proprietary software developers to keep things proprietary.

    Software freedom developers have the opportunity to actually get past the simple reality of software development: all code sucks, and usually isn't complete. Yet, it's still essential that the community see what's going on at every step, from the empty codebase and beyond. When a project is seen as active, that draws in developers and gives the project hope of success.

    When I was in college, one of the teams in a software engineering class crashed and burned; their project failed hopelessly. This happened despite one of the team members spending about half the semester up long nights, coding by himself, ignoring the other team members. In their final evaluation, the professor pointed out: Being a software developer isn't like being a fighter pilot. The student, missing the point, quipped: Yeah, I know, at least a fighter pilot has a wingman. Truth is, one person, or two people, or even a small team, aren't going to make a software freedom project succeed. It's only going to succeed when a large community bolsters it and prevents any single point of failure.

    Nevertheless, most software freedom projects are going to fail. But, there is no shame in pushing out a bunch of bytes, encouraging people to take a look, and giving up later if it just doesn't make it. All of science works this way, and there's no reason computer science should be any different. Keeping your project private assures its failure; the only benefit is that you can hide that you even tried. As my graduate advisor told me when I was worried my thesis wasn't a success: a negative result can be just as compelling as a positive one. What's important is to make sure all results are published and available for public scrutiny.


    When I started discussing this idea a few weeks ago, some argued that early GNU programs — the founding software of our community — were developed in private initially. This much is true, but just because GNU developers once operated that way doesn't mean it was the right way. We have the tools now to easily do development in public, so we should. In my view, today, it's not really in the spirit of software freedom until the project's design discussions, plans, and prototypes are all developed in public. Code (regardless of its license) merely dumped over the wall at intervals deserves to be forked by a community committed to public development.


    Update (2010-06-12): I completely forgot to mention The Risks of Distributed Version Control by Ben Collins-Sussman, which is five years old now but still useful. Ben is making a similar point to mine, and pointing out how some uses of DVCS can cause the effects that I'm encouraging developers to avoid. I think DVCS is like any tool: it can be used wrongly. The usage Ben warns about should be avoided, and DVCS, when used correctly, assists in the public software development process.


    0Note that pushing code out to the public in the mid-1990s was substantially more arduous (from a technological perspective) than it is today. Those of you who don't remember shar archives may not realize that. :)

    Posted on Friday 11 June 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

May

  • 2010-05-08: Beware of Proprietary Drift

    The Free Software Foundation (FSF) announced yesterday a campaign to collect a clear list of OpenOffice.Org extensions that are FaiF, to convince the OO.o Community Council to list only FaiF extensions, and to find those extensions that are proprietary software, so that OO.o extension developers can focus their efforts on writing replacements under a software-freedom-respecting license.

    I use OpenOffice.Org (OO.o) myself only when someone else sends me a document in that format; I'm a LaTeX, DocBook, Markdown, or HTML user for documents I originate. Nevertheless, I'm obviously a rare sort of software user, and I understand that OO.o is a program many people use. Plus, a program like OO.o is extremely large, with a diverse user base, so extension-style improvement, from a technological perspective, makes sense to meet all the users' requirements.

    Unfortunately, the social impact of a program designed this way poses a danger to software freedom. It sometimes causes a chain of events that I call “proprietary drift” — a social phenomenon that leads otherwise FaiF codebases to slowly become, in their default use, mostly proprietary packages, at least with regard to the features users find most important and necessary.

    Copyleft itself was originally designed to address this problem: to make sure that improved versions of packages were available with as much software freedom as the original. Copyleft isn't a perfect solution to reach this goal, and furthermore many essential software freedom codebases are under weak copyleft and/or permissive licenses. Such is the case with OO.o, and the proprietary drift of the codebase is thus of great concern here.

    For those of us that have the goal of building a world where software freedom is given for all published and deployed software, this problem of proprietary drift is a terrible threat. In many ways, it's even a worse threat than the marketing and production of fully proprietary software. This may seem a bit counter-intuitive on its surface; logic would seem to dictate that some software freedom is better than none, and therefore an OO.o user with a few proprietary extensions installed is better off than a Microsoft Word user. And, in fact, none of that is false.

    However, the situation introduces a complexity. In short, it can inspire a “good enough” reaction among users. Particularly for users who have generally used only proprietary software, the experience of using a package that mostly respects software freedom can be incredibly liberating. When 98% of your software is FaiF-licensed, you sometimes don't notice the 2% that isn't. Over time, the 2% goes up to 3%, then 4%. This proprietary drift will often lead back to a system not that much different from (for example) Apple's operating system, which has a permissively-licensed software freedom core, but most of the system is very much proprietary. In other words, in the long term, proprietary drift leads to mostly proprietary systems.

    Sometimes, I and other software freedom advocates are criticized for giving such a hard time to those who are seemingly closest to our positions. Often, this is because the threat of proprietary drift is so great. Concern about proprietary drift is, at least in large part, the inspiration for positions opposing UbuntuOne, for the Linux Libre project, and for this new initiative to catalog the FaiF OO.o extensions and rewrite the proprietary ones. We all agree that purely proprietary software programs like those from Apple, Microsoft, and Oracle are the greatest threat to software freedom in the short term. But, in the long term, proprietary drift has the potential to creep up on users who prefer software freedom. You may never see it coming if you aren't constantly vigilant.

    [There's a derivative version of this article available in Arabic. I can't personally attest to the accuracy of the translation, as I can't read Arabic, but osamak, the translator, is a good guy.]


    Disclaimer: While I am a member of FSF's Board of Directors, and I believe the positions stated above are consistent with FSF's positions, the opinions are not necessarily those of the FSF even though I refer to various FSF-sponsored initiatives. Furthermore, this remains my personal blog and the opinions certainly do not express those of my employer nor those of any other organization or project for which I volunteer.

    Posted on Saturday 08 May 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

April

  • 2010-04-21: Launchpad Single Sign On Released

    I wrote 15 months ago thanking Canonical for their release of Launchpad. However, in the interim, a part of the necessary codebase was made proprietary, namely the authentication system used in the canonical instance of Launchpad hosted by Canonical. (Yes, I still insist on using canonical in the canonical way despite the company name making it confusing. :). I added this fact to my list of reasons for abandoning Ubuntu and other Canonical products.

    Fortunately, I've now removed this reason from the list of reasons I switched back to Debian from Ubuntu, since Jono Bacon announced the release of this code today. According to Jono, this release means that Launchpad and its dependencies are again fully Free Software. This is a step forward. And, I did promise many people at Canonical that I'd make a point of thanking them for doing Free Software releases when they do them, since I do make a point of calling them out about negative things they do.

    Like any mixed proprietary/Free Software company, Canonical has tons more to release. I remain most concerned about UbuntuOne's server-side code, but I very much hope this release today marks a bounce-back for Canonical to its roots in the 100% Free Software world.

    Posted on Wednesday 21 April 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-04-07: Proprietary Licenses Are Even Worse Than They Look

    There are lots of evil things that proprietary software companies might do. Companies put their own profit above the rights and freedoms of their users, and to that end, much can be done that subjugates users. Even as someone who avoids proprietary software, I still read many proprietary license agreements (mainly to see how bad they are). I've certainly become numb to the constant barrage of horrible restrictions they place on users. But, sometimes, proprietary licenses go so far that I'm taken aback by their gratuitous cruelty.

    Apple's licenses are probably the easiest example of proprietary licensing terms that are well beyond reasonableness. Of course, Apple's licenses do the usual things like forbidding users from copying, modifying, sharing, and reverse engineering the software. But even worse, Apple also forbids users from running Apple software on any hardware that is not produced by Apple.

    The decoupling of one's hardware vendor from one's software vendor was a great innovation brought about by the PC revolution, in which, ironically, Apple played a role. Computing history has shown us that when your software vendor also controls your hardware, you can easily be “locked in” in ways that make mundane proprietary software licenses seem almost nonthreatening.

    Film image from Tron of the Master Control Program (MCP)

    Indeed, Apple has such a good hype machine that they even have convinced some users this restrictive policy makes computing better. In this worldview, the paternalistic vendor will use its proprietary controls over as many pieces of the technology as possible to keep the infantile users from doing something that's “just bad for them”. The tyrannical MCP of Tron comes quickly to my mind.

    I'm amazed that so many otherwise Free Software supporters are quite happy using OS X and buying Apple products, given these kinds of utterly unacceptable policies. The scariest part, though, is that this practice isn't confined to Apple. I've been recently reminded that other companies, such as IBM, do exactly the same thing. As a Free Software advocate, I'm critical of any company that uses their control of a proprietary software license to demand that users run that software only on the original company's hardware as well. The production and distribution of mundane proprietary software is bad enough. It's unfortunate that companies like Apple and IBM are going the extra mile to treat users even worse.

    Posted on Wednesday 07 April 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

March

  • 2010-03-26: LibrePlanet 2010 Completes Its Orbit

    Seven and a half years ago, I got this idea: the membership of the Free Software Foundation should have a chance to get together every year and learn about what the FSF has been doing for the last year. I was so nervous at the first one, on Saturday 15 March 2003, that I even wore a suit, which I rarely do.

    The basic idea was simple: the FSF Board of Directors came into town anyway each March for the annual board meeting. Why not give a chance for FSF associate members to meet the leadership and staff of FSF and ask hard questions to their hearts' content? I'm all about transparency, as you know. :)

    Since leaving the position of Executive Director a few months before the 2005 meeting, I've attended every annual meeting, just as an ordinary Associate Member and FSF volunteer. It's always enjoyable to attend a conference that you used to help organize, now organized by someone else; it's like having someone keep a machine running and up to date just for you, after years of doing sysadmin work for other people. It's been wonderful to watch the FSF AM meeting grow into a full-fledged conference for discussion and collaboration between folks from all over the Free Software world. “One room, one track, one day” has become “five rooms, three tracks, and three days” with the proverbial complaint throughout: But, why do I have to miss this great session so that I can go to some other great session!?!

    Some highlights for me this year were:

    • I saw John Gilmore win a well-deserved FSF Award for the Advancement of Free Software.
    • I got to spend time with the intrepid gnash developer Rob Savoye again, whom I'd known of for years (his legend precedes him) but rarely had a chance to see in person until lately.
    • I met so many young people excited about software freedom. I can only imagine being only 19 or 20 years old and having the opportunity to meet other Free Software developers in person. At that age, I considered myself lucky to simply have Usenet access so that I could follow and participate in online discussions about Free Software (good ol' gnu.misc.discuss ;). I am so glad that young folks, some from as far away as Brazil, had the opportunity to visit and speak about their work.
    • On the informal Friday sessions, I was a bit amazed that I pulled off a marathon six-hour session of mostly well-received talks/discussions (for which I readily admit I had not prepped well). The first three hours were about the challenges of software freedom on mobile devices, and the second three were about the nitty-gritty details of the hardest and most technical GPL enforcement task: the C&CS check. People seemed to actually enjoy watching me break half my Fedora chroots trying to build some source code for a plasma television. Someone even told me later: it was more fun because we got to see you make all the mistakes.
    • Finally (and I realize I've probably buried the lede here, but I've kept the list chronological, since I wrote most of it before I found out this last thing), after the FSF Board meeting, which followed LibrePlanet, I was informed by a phone call from my good friend Henry Poole that I'd been elected to FSF's Board of Directors, which has now been announced by FSF on Peter Brown's blog. I've often told the story that when I first learned about the FSF as a young programmer and sysadmin, I thought that someday, maybe I could be good enough to get a job as a sysadmin for the FSF. I did indeed volunteer as a sysadmin for the FSF starting around 1996, but I truly felt I'd exceeded any possible dream when I was later named FSF's Executive Director, and was able to serve in that post for so many years. Now, being part of the Board of Directors is an even greater opportunity for involvement in the organization that I've loved and respected for so long.

    FSF is an organization based around a very simple, principled idea: that users and programmers alike deserve inalienable rights to copy, share, modify, and redistribute all the software that they use. This issue isn't merely about making better software (although Free Software developers usually do, anyway); it's about a principle of morality: everyone using computers should be treated well and be given the maximal opportunity to treat their neighbors well, too. Helping turn this simple idea into reality is the center of all the work I've done for the last 12 years of my life, and I expect it will be the focus of my (hopefully many) remaining years. I am thankful that the Voting Members of FSF have given me this additional opportunity to help our shared cause. I plan to work hard in this and all the other responsibilities that I already have to our Free Software community. Like everyone on FSF's Board of Directors, I serve in that role completely as a volunteer, so in some ways I feel this is just a natural extension of the volunteer work I've continued to do for the FSF regularly since I left its employment in 2005.

    Finally, I was glad to meet (or meet again) so many FSF supporters at LibrePlanet, and I deeply hope that I can serve our shared goal well in this additional role.

    Posted on Friday 26 March 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-03-15: Is Your Support of Copyleft Logically Consistent?

    Most of you are aware from one of my previous posts that It's a Wonderful Life! is my favorite film. Recently, I encountered something in the software freedom community that reminded me of yet another quote from the film:

    Picture of George Bailey whispering to Clarence at the bar

    GEORGE:
    Look, uh … I think maybe you better not mention getting your wings around here.
    CLARENCE:
    Why? Don't they believe in angels?
    GEORGE:
    I… yeah, they believe in them…
    CLARENCE:
    Ohhh … Why should they be surprised when they see one?

    Obviously, I don't believe in angels myself. But, Clarence's (admittedly naïve) logic is actually impeccable: Either you believe in angels or you don't. If you believe in angels, then you shouldn't be surprised to (at least occasionally) see one.

    This film quote came to my mind in reference to a concept in GPL enforcement. Many people give lip service to the idea that the GPL, and copyleft generally, is a unique force that democratizes software and ensures that FLOSS cannot be exploited by proprietary software interests. Many of these same people, though, oppose GPL enforcement when companies exploit GPL'd code, withhold the source code, and take away users' rights to modify and share that software.

    I've admitted that copyleft is merely a strategy to achieve maximal software freedom. There are other strategies too, such as the Apache community process. The Apache Software Foundation releases software under a permissive non-copyleft license, but then negotiates with companies to convince them to contribute to the code base publicly. For some projects, that strategy has worked well, and I respect it greatly.

    Some (although not all) people in non-copyleft FLOSS communities (like the Apache community) are against GPL enforcement. I disagree with them, but their position is logically consistent. Such folks don't agree with us (copyleft-supporting folks) that a license should be used as a mechanism to guarantee that all published and deployed improved versions of the software are released in software freedom. It's not that those other folks don't prefer FLOSS; they simply prefer non-legally-binding social pressure to encourage software sharing rather than a strategy with legal backup. I prefer a strategy with legal strength, but I still respect non-copyleft folks who don't support that. They take a logically consistent and reasonable approach.

    However, it's ultimately hypocritical to claim support for a copyleft structure but oppose GPL enforcement. If you believe the license should have a legal requirement that ensures software is always distributed in software freedom, then why would you be surprised — or, even worse, angry — that a copyright holder would seek to uphold users' rights when that license is violated?

    There is great value in having multiple simultaneous strategies ongoing to achieve important goals. Universal software freedom is my most important goal, and I expect to spend nearly all of my life focused on achieving it for all published and deployed software in the world. However, I don't expect nor even want everyone else to single-mindedly support my exact same strategies in all cases. The diversity of the software freedom community makes it more likely that we'll succeed if we avoid a single point of failure on any particular plan, and I support that diversity.

    However, I also think it's reasonable to expect logically consistent positions. A copyleft license is effectively indistinguishable from the Apache license if copyleft is never enforced when violations occur. Condemning community-oriented0 GPL enforcement (that seeks primarily to get the code released) while also claiming to support the idea of copyleft is a logically inconsistent and self-contradictory position. It's unfortunate that so many people hold this contradictory position.


    0There are certain types of GPL enforcement that are not consistent with the goal of universal software freedom. For example, some so-called “Open Core” companies are well known for releasing their (solely) copyrighted code under GPL, and then using GPL enforcement as a mechanism to pressure users to take a proprietary license. GPL enforcement is only acceptable in my view if its primary goal is to have all code released under GPL. Such enforcement must never compromise on one point: that compliance with the GPL is a non-negotiable term of settling the enforcement action. If the enforcer is willing to sell out the rights that users have to source code, then even I would condemn, as I have previously, such GPL enforcement as bad for the software freedom community. For this reason, in all GPL enforcement that I engage in, I make it a term of my participation that compliance with the terms of the GPL for the code in question be a non-negotiable requirement.

    Posted on Monday 15 March 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-03-05: Ok, Be Afraid if Someone's Got a Voltmeter Hooked to Your CPU

    Boy, do I hate it when a FLOSS project is given a hard time unfairly. I was this morning greeted with news from many places that OpenSSL, one of the most common FLOSS software libraries used for cryptography, was somehow severely vulnerable.

    I had a hunch what was going on. I quickly downloaded a copy of the academic paper that was cited as the sole source for the story and read it. As I feared, OpenSSL was getting some bad press unfairly. One must really read this academic computer science article in the context in which it was written; most of those commenting on this paper probably did not.

    First of all, I don't claim to be an expert on cryptography, and I think my knowledge level limits my opining on this subject to a little blog post like this and nothing more. Between college and graduate school, I worked as a system administrator focusing on network security. While a computer science graduate student, I did take two cryptography courses, two theory of computation courses, and one class on complexity theory0. So, when compared to the general population I probably am an expert, but compared to people who actually work in cryptography regularly, I'm clearly a novice. However, I suspect many who have hitherto opined about this academic article, declaring a severe vulnerability, have even less knowledge than I do on the subject.

    This article, of course, wasn't written for novices like me, and certainly not for the general public nor the technology press. It was written by and for professional researchers who spend much time each week reading dozens of these academic papers, a task I haven't done since graduate school. Indeed, the paper is written in a style I know well; my “welcome to CS graduate school” seminar in 1997 covered the format well.

    The first thing you have to note about such papers is that informed readers generally ignore the parts that a newbie is most likely to focus on: the Abstract, Introduction and Conclusion sections. These sections are promotional materials; they are equivalent to a sales brochure selling you on how important and groundbreaking the research is. Some research is groundbreaking, of course, but most is an incremental step forward toward understanding some theoretical concept, or some report about an isolated but interesting experimental finding.

    Unfortunately, these promotional parts of the paper are the sections that focus on the negative implications for OpenSSL. In the rest of the paper, OpenSSL is merely the software component of the experiment equipment. They likely could have used GNU TLS or any other implementation of RSA taken from a book on cryptography1. But this fact is not even the primary reason that this article isn't really that big of a deal for daily use of cryptography.

    The experiment described in the paper is very difficult to reproduce. You have to cause very subtle faults in computation at specific times. As I understand it, they had to assemble a specialized hardware copy of a SPARC-based GNU/Linux environment to accomplish the experiment.

    Next, the data generated during the run of the software on the specially-constructed faulty hardware must be collected and operated upon by a parallel processing computing environment over the course of many hours. If it turns out all the needed data was gathered, the output of this whole process is the private RSA key.

    The details of the fault generation process deserve special mention. Very specific faults have to occur, and they can't occur such that any other parts of the computation (such as, say, the normal running of the operating system) are interrupted or corrupted. This is somewhat straightforward to get done in a lab environment, but accomplishing it in a production situation would be impractical and improbable. It would also usually require physical access to the hardware holding the private key. Such physical access would, of course, probably give you the private key anyway by simply copying it off the hard drive or out of RAM!

    This is interesting research, and it does suggest some changes that might be useful. For example, if it doesn't slow a system down too much, the integrity of RSA signatures should be verified, on a closely controlled proxy unit with a separate CPU, before they are sent out to a wider audience. But even that would be a process only for the most paranoid. If faults are occurring on production hardware often enough to generate the bad computations this cracking process relies on, likely something else will go wrong with the hardware too, and it will be declared generally unusable for production before an interloper could gather enough data to crack the key. Thus, another useful change to make based on this finding is to disable and discard RSA keys that were in use on production hardware that went faulty.
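
    To make that “verify before release” suggestion concrete, here is a minimal sketch of the countermeasure in Python, using the pyca/cryptography library. The function name and message are mine, purely for illustration; this shows the general idea, not code from the paper:

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        def sign_with_self_check(private_key, message):
            # Sign, then re-verify with our own public key before releasing the
            # signature; a computation fault during signing shows up here as a
            # verification failure instead of leaking key material.
            signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())
            try:
                private_key.public_key().verify(
                    signature, message, padding.PKCS1v15(), hashes.SHA256())
            except InvalidSignature:
                # Never emit a corrupted signature.
                raise RuntimeError("signature failed self-check; withholding it")
            return signature

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        sig = sign_with_self_check(key, b"an example message")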

    Finally, I think this article does completely convince me that I would never want to run any RSA computations on a system where the CPU was emulated. Causing faults in an emulated CPU would only require changes to the emulation software, and could be done with careful precision to detect when an RSA-related computation was happening, and only give the faulty result on those occasions. I've never heard of anyone running production cryptography on an emulated CPU, since it would be too slow, and virtualization technologies like Xen, KVM, and QEMU all pass-through CPU instructions directly to hardware (for speed reasons) when the virtualized guest matches the hardware architecture of the host.

    The point, however, is that proper description of the dangers of a “security vulnerability” requires more than a single bit field. Some security vulnerabilities are much worse than others. This one is substantially closer to the “oh, that's cute” end of the spectrum, not the “ZOMG, everyone's going to experience identity theft tomorrow” side.


    0Many casual users don't realize that cryptography — the stuff that secures your networked data from unwanted viewers — isn't about math problems that are unsolvable. In fact, it's often based on math problems that are trivially solvable, but take a very long time to solve. This is why algorithmic complexity questions are central to the question of cryptographic security.
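
    As a back-of-the-envelope illustration of “solvable, but slow” (the attacker's guess rate below is a made-up, generous number, chosen only for the example):

        # Brute-forcing a 128-bit key is a trivial algorithm; it just takes too long.
        possible_keys = 2 ** 128
        guesses_per_second = 10 ** 12          # a generous, hypothetical attacker
        seconds_per_year = 60 * 60 * 24 * 365

        years = possible_keys / guesses_per_second / seconds_per_year
        print(f"about {years:.1e} years to try every key")  # on the order of 10**19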

    1 I'm oversimplifying a bit here. A key factor in the paper appears to be the linear time algorithm used to compute cryptographic digital signatures, and the fact that the signatures aren't verified for integrity before being deployed. I suspect, though, that just about any RSA system is going to do this. (Although I do usually test the integrity of my GnuPG signatures before sending them out, I do this as a user by hand).

    Posted on Friday 05 March 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-03-04: Musings on Software Freedom for Mobile Devices

    I started using GNU/Linux and Free Software in 1992. In those days, while everything I needed for a working computer was generally available in software freedom, there were many components and applications that simply did not exist. For highly technical users who did not need many peripherals, the Free Software community had reached a state of complete software freedom. Yet, in 1992, everyone agreed there was still much work to be done. Even today, we still strive for a desktop and server operating system, with all relevant applications, that grants complete software freedom.

    Looked at broadly, mobile telephone systems are not all that different from 1992-era GNU/Linux systems. The basics are currently available as Free, Libre, and Open Source Software (FLOSS). If you need only the bare minimum of functionality, you can, by picking the right phone hardware, run an almost completely FLOSS operating system and application set. Yet, we have so far to go. This post discusses the current penetration of FLOSS in mobile devices and offers a path forward for free software advocates.

    A Brief History

    The mobile telephone market has never functioned like the traditional computer market. Historically, the mobile user made arrangements with some network carrier through a long-term contract. That carrier “gave” the user a phone or discounted it as a loss-leader. Under that system, few people took their phone hardware choice all that seriously. Perhaps users paid a bit more for a slightly better phone, but they nearly always picked among the limited choices provided by the given carrier.

    Meanwhile, Research in Motion was the first to provide corporate-slave-oriented email-enabled devices. Indeed, with the very recent focus on consumer-oriented devices like the iPhone, most users forget that Apple is far from the preferred fruit for the smart phone user. Today, most people using a “smart phone” are using one given to them by their employer to chain them to their office email 24/7.

    Apple, excellent at manipulating users into paying more for a product merely because it is shiny, also convinced everyone that a phone should now be paid for separately, and that contracts should run even longer. The “race to mediocrity” of the phone market has ended. Phones need real features to stand out. Phones, in fact, aren't phones anymore. They are small mobile computers that can also make phone calls.

    If these small computers had been introduced in 1992, I suppose I'd be left writing the Mobile GNU Manifesto, calling for developers to start from scratch writing operating systems for these new computers, so that all users could have software freedom. Fortunately, we have instead been given a head start. Unlike in 1992, not every company in the market today is completely against releasing Free Software. Specifically, two companies have seen some value in releasing (some parts of) phone operating systems as Free Software: Nokia and Google. However, the two companies have done this for radically different reasons.

    The Current State of Mobile Software Freedom

    For its part, Nokia likely benefited greatly from the traditional carrier system. Most of their phones were provided relatively cheaply with contracts. Their interest in software freedom was limited and perhaps even non-existent. Nokia sold new hardware every time a phone contract was renewed, and the carrier paid the difference between the loss-leader price and Nokia's wholesale cost. The software on the devices was simple and mostly internally developed. What incentive did Nokia have to release software in software freedom? (Nokia realized too late this was the wrong position, but more on that later.)

    In parallel, Nokia had chased another market that I've never fully understood: the tablet PC. Not big enough to be a real computer, but too large to be a phone, these devices have been an idea looking for a user base. Regardless of my personal views on these systems, though, GNU/Linux remains the ideal system for these devices, and Nokia saw that. Nokia built the Debian-ish Maemo system as a tablet system, with no phone. However, I can count on one hand all the people I've met who bothered with these devices; I just don't think a phone-less small computer is going to ever become the rage, even if Apple dumps billions into marketing the iPad. (Anyone remember the Newton?)

    I cannot explain, nor do I even understand, why Nokia took so long to use Maemo as a platform for a tablet-like telephone. But, a few months ago, they finally released one. This N900 is among only a few available phones that make any strides toward a fully free software phone platform. Yet, the list of proprietary components required for operation remains quite long. The common joke is that you can't even charge the battery on your N900 without proprietary software.

    While there are surely people inside Nokia who want more software freedom on their devices, Nokia is fundamentally a hardware company experimenting with software freedom in hopes that it will bolster hardware sales. Convincing Nokia to shorten that proprietary list will prove difficult, and the community-based effort to replace that long list with FLOSS (called Mer) faces many challenges. (These challenges will likely increase with the recent Maemo merger with Moblin to form MeeGo.)

    Fortunately, hardware companies are not the only entity interested in phone operating systems. Google, ever-focused on routing human eyes to its controlled advertising, realizes that even more eyes will be on mobile computing platforms in the future. With this goal in mind, Google released the Android/Linux system, now available on a variety of phones in varying degrees of software freedom.

    Google's motives are completely different than Nokia's. Technically, Google has no hardware to sell. They do have a set of proprietary applications that yield the “Google online experience” to deliver Google's advertising. From Google's point of view, an easy-to-adopt, licensing-unencumbered platform will broaden their advertising market.

    Thus, Android/Linux is a nearly fully non-copylefted phone operating system platform, where Linux is the only GPL-licensed component essential to Android's operation. Ideally, Google wants to see Android adopted broadly in both Free Software and mixed Free/proprietary deployments. Google's goals do not match those of the software freedom community, so in some cases a given Android/Linux device will give the user more software freedom than the N900, but in many cases it will give much less.

    The HTC Dream is the only Android/Linux device I know of for which the necessary proprietary components have been carefully examined. Obviously, the “Google experience” applications are proprietary. There are also about 20 hardware interface libraries that do not have source code available in a public repository. However, when lined up against the N900 with Maemo, Android on the HTC Dream can be used as an operational mobile telephone and 3G Internet device using only four proprietary components: a proprietary GSM firmware, proprietary Wifi firmware, and two audio interface libraries. Further proprietary components are needed if you want a working accelerometer, camera, or video codecs, as their hardware interface libraries are all proprietary.

    Based on this analysis, it appears that the HTC Dream currently gives the most software freedom among Android/Linux deployments. It is unlikely that Google wants anything besides their applications to be proprietary. While Google has been unresponsive when asked why these hardware interface libraries are proprietary, it is likely that HTC, the hardware maker with whom Google contracted, insisted that these components remain proprietary, and perhaps fear of patent suits like the one filed this week is to blame here. Meanwhile, while no detailed analysis of the Nexus One is yet available, it's likely similar to the HTC Dream.

    Other Android/Linux devices are now available, such as those from Motorola and Samsung. There appears to have been no detailed analysis done yet on the relative proprietary/freeness ratio of these Android deployments. One can surmise that since these devices are from traditionally proprietary hardware makers, it is unlikely that these platforms are freer than those available from Google, whose maximal interest in a freely available operating system is clear and in contrast to the traditional desires of hardware makers.

    Whether the software is from a hardware-maker desperately trying a new hardware sales strategy, or an advertising salesman who wants some influence over an operating system choice to improve ad delivery, the software freedom community cannot assume that the stewards of these codebases have the interests of the user community at heart. Indeed, the interests of these disparate groups will only occasionally be aligned. Community-oriented forks, as have begun in the Maemo community with Mer, must begin in the Android/Linux space too. We are slowly trying with the Replicant project, which my colleague Aaron Williamson and I founded.

    A healthy community-oriented phone operating system project will ultimately be an essential component to software freedom on these devices. For example, consider the fate of the Mer project now that Nokia has announced the merger of Maemo with Moblin. Mer does seek to cherry-pick from various small device systems, but its focus was to create a freer Maemo that worked on more devices. Mer now must choose between following Maemo into the merger with Moblin or becoming a true fork. Ideally, the right outcome for software freedom is a community-led effort, but there may not be enough community interest, time, and commitment to shepherd a fork while Intel and Nokia push forward on a corporate-controlled codebase. Further, Moblin will likely push the MeeGo project toward more of a tablet-PC operating system than a smart phone one.

    A community-oriented Android/Linux fork has more hope. Google has little to lose by encouraging and even assisting with such forks; such effort would actually be wholly consistent with Google's goals for wider adoption of platforms that allow deployment of Google's proprietary applications. I expect that operating system software-freedom-motivated efforts will be met with more support from Google than from Nokia and/or Intel.

    However, any operating system, even a mobile device one, needs many applications to be useful. Google experience applications for Android/Linux are merely the beginning of the plethora of proprietary applications that will ultimately be available for MeeGo and Android/Linux platforms. For FLOSS developers who don't have a talent for low-level device libraries and operating system software, these applications represent a straightforward contribution towards mobile software freedom. (Obviously, though, if one does have talent for low-level programming, replacing the proprietary .so's on Android/Linux would be the optimal contribution.)

    Indeed, on this point, we can take a page from Free Software history. From the early 1990s onward, fully free GNU/Linux systems succeeded as viable desktop and server systems because disparate groups of developers focused simultaneously on both operating systems and application software. We need that simultaneous diversity of improvement to actually compete with the fully proprietary alternatives, and to ensure that the “mostly FLOSS” systems of today are not the “barely FLOSS” systems of tomorrow.

    Careful readers have likely noticed that I have ignored Nokia's other release, the Symbian codebase. Every time I write or speak about the issues of software freedom in mobile devices, I'm chastised for leaving it out of the story. My answer is always simple: when a FLOSS version of Symbian can be compiled from source code, using a FLOSS compiler or SDK, and that binary can be installed onto an actual working mobile phone device, then (and only then) will I believe that the Symbian source release has value beyond historical interest. We have to get honest as a community about the future of Symbian: it's a ten-year-old proprietary codebase designed for devices of that era that doesn't bootstrap with any compilers our community uses regularly. Unless there's a radical change to these facts, the code belongs in a museum, not running on a phone.

    Also, lest my own community of hard-core FLOSS advocates flame me, I must also mention the Neo FreeRunner device and the OpenMoko project. This was a noble experiment: a freely specified hardware platform running 100% FLOSS. I used an OpenMoko FreeRunner myself, hoping that it would be the mobile phone our community could rally around. I do think the device and its (various) software stack(s) have a future as an experimental, hobbyist device. But, just as GNU/Linux needed to focus on x86 hardware to succeed, so must software freedom efforts in mobile systems focus on mass-market, widely used, and widely available hardware.

    Jailbreaking and the Self-Installed System

    When some of us at my day-job office decided to move as close to a software freedom phone platform as we could, we picked Android/Linux and the HTC Dream. However, we carefully considered the idea of permission to run one's own software on the device. In the desktop and server system market, this is not a concern, but on mobile systems, it is a central question.

    The holdover of those carrier-controlled agreements for phone acquisition is the demand that devices be locked down. Devices are locked down first to a single carrier's network, so that devices cannot (legally) be resold as phones ready for any network. Second, carriers believe that they must fear the FCC if device operating systems can be reinstalled.

    On the first point, Google is our best ally. The HTC Dream developer models, called the Android Dev Phone 1 (aka ADP1), while somewhat more expensive than T-Mobile branded G1s, permit the user to install any operating system on the phone, and the purchase agreement extracts no promises from the purchaser regarding what software runs on the device. Google has no interest in locking you to a single carrier, but only to a single Google experience application vendor. Offering users “carrier freedom of choice” while tying them tighter to Google applications is probably a central part of Google's marketing plans.

    The second point — fear of an FCC crack down when mobile users have software freedom — is beyond the scope of this article. However, what Atheros has done with their Wifi devices shows that software freedom and FCC compliance can co-exist. Furthermore, the central piece of FCC's concern — the GSM chipset and firmware — runs on a separate processor in modern mobile devices. This is a software freedom battle for another day, but it shows that the FCC can be pacified in the meantime by keeping the GSM device a black box to the Free Software running on the primary processor of the device.

    Conclusion

    Seeking software freedom on mobile devices will remain a complicated endeavor for some time. Our community should utilize the FLOSS releases from companies, but should not forget that, until viable community forks exist, software freedom on these devices exists at the whim of these companies. A traditional “get some volunteers together and write some code” approach can achieve great advancement toward community-oriented FLOSS systems on mobile devices. Developers could initially focus on applications for the existing “mostly FLOSS” platforms of MeeGo and Android/Linux. The more challenging and urgent work is to replace lower-level proprietary components on these systems with FLOSS alternatives, but that work admittedly needs special programming skills that aren't easy to find.

    (This blog post first appeared as an article in the March 2010 issue of the Canadian online journal, The Open Source Business Resource.)

    Posted on Thursday 04 March 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-03-03: Thoughts on Jeremy's Sun/Oracle Analysis

    Leslie Hawthorn referred me to an excellent article by Jeremy Allison about Sun merging with Oracle. It was a particularly interesting read for me since, while I knew that Jeremy worked for Sun early in his career, I didn't realize that he started in engineering tech support.

    The most amusing part to me is that it's quite possible Jeremy was on the UK tech support hotline during the same time frame when I was calling USA Sun tech support while working for Westinghouse. I probably would have had a different view of proprietary software if Jeremy had answered the USA phone calls. One of the major life experiences that led me down the path of hard-core software freedom beliefs was my many calls to Sun tech support, who would usually tell me they just weren't going to fix the bugs I was reporting because Westinghouse just wasn't “big enough” (ironically, it was one of the largest employers in Maryland in the 1980s and early 1990s) to demand that Sun fix such bugs (notwithstanding our monthly Sun maintenance fees).

    But, more fascinating still is Jeremy's analysis of why Sun failed as a FLOSS company. Specifically, Jeremy points out that Sun's insistence on corporate control over all software technologies it released, particularly its demand for the exclusive right to proprietarize non-Sun contributions, was a primary reason that Sun just never succeeded as a FLOSS company.

    Meanwhile, I'm less optimistic than Jeremy on the future of Oracle. I have paid attention to Oracle's contributions to btrfs in light of recent events. Amusingly, btrfs exists in no small part because ZFS was never licensed correctly and never turned into a truly community-oriented project. While the two projects don't have identical goals, they are similar enough that it seems unlikely btrfs would exist if Sun had endeavored to become a real FLOSS contributor and shepherd ZFS into Linux upstream using normal Linux community processes. It's thus strange to think that Oracle controls ZFS, even while it continues to contribute to btrfs in a normal, upstream way (i.e., collaborating under the terms of GPLv2 with community developers and employees of other companies such as Red Hat, HP, Intel, Novell, and Fujitsu).

    I have mostly considered Oracle's contributions to btrfs (and to Xen, to which they contribute in much the same way) as a complete fluke. Oracle is third only to Apple and Microsoft in its predatory, proprietary software marketing practices and mistreatment of users. Other than these notable exceptions, Oracle's attitude generally matches Sun's long-ago roots (and Apple's current attitude) in this regard: non-copyleft FLOSS without giving contributions back is the best “Open Source” plan.

    Software corporations usually oscillate between treating users and developers well and treating them poorly. Larger companies are often completely self-contradictory on this issue across multiple divisions. Microsoft and Apple are actually unique in their consistency of anti-software-freedom attitudes; I've typically assessed Oracle as roughly equivalent to the two of them0. I don't really see Oracle's predatory proprietary licensing models changing, and I expect them to try to manipulate FLOSS to bolster their proprietary licensing. Oracle was never an operating system company before the Sun acquisition, and therefore contributing to operating system components like btrfs and Xen were historically a non-issue. My pessimistic view is that Oracle's FLOSS involvement won't go beyond what currently exists (and I even find myself worrying if others can pick up the slack on btrfs if (when?) Oracle starts marketing a proprietarized ZFS-based solution instead). In short, I expect Oracle's primary business will still be anti-FLOSS. Nevertheless, I'll try to quickly acknowledge it if it turns out I'm wrong.


    0 Contrary to the popular reception at the time, I was actually quite depressed both when, in 1999, Oracle first announced that they'd have a certified version of Oracle's database available for Red Hat Linux, and when, in 2002, Oracle announced its so-called “Unbreakable” Linux. These moves were not toward more software freedom, but rather leveraged the availability of a software freedom operating system, GNU/Linux, to sell proprietary licenses for Oracle databases. Neither event should have been heralded as anything but negative for software freedom.

    Posted on Wednesday 03 March 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

February

  • 2010-02-22: SCALE 8x Highlights

    I just returned today (unfortunately on an overnight flight, which always causes me to mostly lose the next day to sleep problems) from SCALE 8x. I spoke about GPL enforcement efforts, and also was glad to spend all day Saturday and Sunday at the event.

    These are my highlights of SCALE 8x:

    • Karsten Wade's keynote was particularly good. It's true that some of his talk was the typical messaging we hear from Corporate Open Source PR people (who are usually called “Community Managers”, although Karsten calls himself a “Senior Community Gardener” instead). Nevertheless, I was persuaded that Karsten does seek to educate Red Hat internally to have the right attitude about FLOSS contribution. In particular, he opened with an illuminating literary analogy (from Chris Grams) about Tom Sawyer manipulating his acquaintances into paying him to do his work. I hadn't seen Chris' article when it was published back in September, and found this (“new to me”) analogy quite compelling. This is precisely the kind of activity that I see happening with problematic copyright assignments. I think the Tom Sawyer analogy fits that situation aptly, because a contributor first does some work without compensation (the original patch), and then is manipulated even further into giving up something of value (signing away copyrights for nothing in return) for the mere honor of being able to do someone else's work. It was no surprise that after Karsten's keynote, jokes abounded in the SCALE 8x hallways all weekend that we should nickname Canonical's new COO, Matt Asay, the “Tom Sawyer of Open Source”. I am sure Red Hat will be happy that their keynote inspired some anti-Canonical jokes.
    • Another Red Hat employee (who is also my good friend and former cow-orker), Richard Fontana, also gave an excellent talk that many missed, as it was scheduled in the very final session slot. Fontana put forward more details about his theory of the “Lex Mercatoria” of FLOSS and how it works in resolving licensing conflicts and incompatibility inside the community. He contrasted it specifically against the kinds of disputes that happen in normal GPL violations, which are primarily perpetrated by those outside the FLOSS world. I agreed with Fontana's conclusions, but his argument seemed to assume that these in-community licensing issues were destabilizing. I asked him about this, pointing out that the community is really good at solving these issues before they destabilize anything. Fontana agreed that they do get easily resolved, and revised his point to say that the main problem is that distribution projects (like Debian and Fedora) hold the majority of responsibility for resolving these issues, and that upstreams need to take more responsibility on this. (BTW, Karsten was also in the audience for Fontana's talk, and has written a more detailed blog post about it.) Fontana noted to me after his talk that he thought I wasn't paying attention, as I was using my Android phone a lot during the talk. I was actually dent'ing various points from his talk. I realized when Fontana expressed this concern that perhaps we as speakers have to change our views about what it means when people seem focused on computing devices during a talk. (I probably would have thought the same as Fontana in the situation.) The online conversation during a talk is a useful part of the interaction. Stormy Peters even once suggested before a talk at Linux World that we should have a way to put dents up on the screen as people comment during a talk. I may actually try to find a way to do this next time I give a talk; a rough sketch of what I have in mind appears after this list.
    • I also saw Brian Aker's presentation about Drizzle, which is a fork of the MySQL codebase that he began inside Sun and now maintains further (having left Sun before the Oracle merger completed). I was impressed to see how much Drizzle has grown in just a few years, and how big its user base is. (Being a database developer, Brian thinks user numbers in the tens of thousands are just a start, but there are many FLOSS projects that would be elated even to max out at tens of thousands of users. While I admire his goals of larger user bases, I think they've already accomplished a lot.) I talked with Brian for an hour after his talk all about the GPL and the danger of single-copyright-held business models. He's avoided this for Drizzle, and it sounds like none of the consulting companies sprouting up around the user community has too much power over the project. (Brian also blogged a summary of some of the points in the discussion we had.)
    • Because it directly conflicted with Brian's talk, I missed my friend and colleague Karen Sandler's talk about trademarks, but I hear it went well. Karen told me not to attend anyway, since she said I already knew everything it contained, and that she would have gone to Brian's talk too if my talk had been scheduled against it. She did, however, make a brief appearance at my talk, so I feel bad that my post-talk chat with Brian made it impossible for me to do the same for her talk.
    • I spoke extensively with Matt Kraai in the Debian booth. It was great to meet Matt for the first time, as he had previously volunteered on the Free Software Directory project when I was at FSF, and he's also contributed a lot of development effort to BusyBox. It's always strange but great to finally meet someone in person you've occasionally been in touch with for nearly a decade online.
    • Don Armstrong was also in the Debian booth. I got to know Don when we served on one of the GPLv3 discussion committees together, and I hadn't been in touch with him regularly since the GPLv3 process ended. He's continuing to do massive amounts of volunteer work for Debian, including being in charge of the bug tracking system! I asked him for some ideas in how to help Debian more, and he immediately mentioned the Debian/GNOME Bug Weekend coming up this weekend. I'm planning to get involved this weekend, and I hope others will too.
    • Finally, I had a number of important meetings with lots of people in the FLOSS world, such as Tarus Balog, Michael Dexter, Bob Gobeille, Deb Nicholson, Rob Savoye and Randal Schwartz. Ok, enough name-dropping. (BTW, Tarus has written about his trip as well, and mentioned our ongoing copyright assignment debate. Tarus argues that he can do non-promise copyright assignment in OpenNMS and still avoid the normal Open Core shareware-like outcomes, which he dubs “fauxpen source” for “fake open source”. Time will tell.)
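
    As for putting dents up on the screen during a talk, here's a rough sketch of the sort of thing I have in mind, assuming identi.ca's Twitter-compatible search API (the endpoint, hashtag, and response fields below are my assumptions, so treat this as a starting point rather than working tooling):

        # Poll a StatusNet-style search API for a talk hashtag and print new
        # dents as they arrive. Endpoint and response format are assumptions.
        import json
        import time
        import urllib.request

        SEARCH_URL = "https://identi.ca/api/search.json?q=%23scale8x"  # assumed
        seen = set()

        while True:
            with urllib.request.urlopen(SEARCH_URL) as resp:
                results = json.load(resp).get("results", [])
            for dent in results:
                if dent["id"] not in seen:
                    seen.add(dent["id"])
                    print(f"{dent['from_user']}: {dent['text']}")
            time.sleep(30)  # poll gently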

    SCALE is really the gold standard of community-run, local FLOSS conferences. It is the inspiration for many of the other regional events such as OLF, SELF, and the like. A major benefit of these regional events is that while they draw speakers from all over the country, the average attendee is a local who usually cannot travel to the better-known events like OSCON.

    Posted on Monday 22 February 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-02-17: Computer Science Education Benefits from FLOSS

    I read with interest today when Linux Weekly News linked to Greg DeKoenigsberg's response to Mark Guzdial's ACM Blog post, The Impact of Open Source on Computing Education (which is mostly a summary of his primary argument on his personal blog). I must sadly admit that I was not terribly surprised to read such a post from an ACM-affiliated academic that speaks so negatively of FLOSS's contribution to Computer Science education.

    I mostly agree with (and won't repeat) DeKoenigsberg's arguments, but I do have some additional points and anecdotal examples that may add usefully to the debate. I have been both a student (high school, graduate and undergraduate) and teacher (high school and TA) of Computer Science. In both cases, software freedom was fundamental and frankly downright essential to my education and to that of my students.

    Before I outline my copious disagreements, though, I want to make abundantly clear that I agree with one of Guzdial's primary three points: there is too much unfriendly and outright sexist (although Guzdial does not use that word directly) behavior in the FLOSS community. This should not be ignored, and needs active attention. Guzdial, however, is clearly underinformed about the extensive work that many of us are doing to raise awareness and address that issue. In software development terms: it's a known bug, it's been triaged, and development on a fix is in progress. And, in true FLOSS fashion, patches are welcome, too (i.e., get involved in a FLOSS community and help address the problem).

    However, the place where my disagreement with Guzdial begins is his implication that this sexism problem is unique to FLOSS. As an undergraduate Computer Science major, it was quite clear to me that a sexist culture was prevalent in my Computer Science department and in CS in general. This had nothing to do with FLOSS culture, since there was no FLOSS in my undergraduate department until I installed a few GNU/Linux machines. (See below for details.)

    Computer Science as a whole unfortunately remains heavily male-dominated with problematic sexist overtones. It was common when I was an undergraduate (in the early 1990s) that some of my fellow male students would display pornography on the workstation screens without a care about who felt unwelcome because of it. Many women complained that they didn't feel comfortable in the computer lab, and the issue became a complicated and ongoing debate in our department. (We all frankly could have used remedial sensitivity training!) In graduate school, a CS professor said to me (completely straight-faced) that women didn't major in Computer Science because most women's long term goals are to have babies and keep house. Thus, I simply reject the notion that this sexism and lack of acceptance of diversity is a problem unique to FLOSS culture: it's a CS-wide problem, AFAICT. Indeed, the CRA's Taulbee Survey shows (see PDF page 10) that only 22% of the tenure track CS faculty in the USA and Canada are women, and only 12% of the full professors are. In short, Guzdial's corner of the computing world shares this problem with mine.

    Guzdial's second point is the most offensive to the FLOSS community. He argues that volunteerism in FLOSS sends a message that no good jobs are available in computing. I admit that I have only anecdotal evidence to go on (of course, Guzdial quotes no statistical data, either), but in my experience, I know that I and many others in FLOSS have been successfully and gainfully employed precisely because of past volunteer work we've done. Ted Ts'o is fond of saying: Thanks to Linux, my hobby became my job and my job became my hobby. My experience, while neither as profound nor as important as Ted's, is somewhat similar.

    I downloaded a copy of GNU/Linux for the first time in 1992. I showed it to my undergraduate faculty, and they were impressed that I had a Unix-like system running on PC hardware, and they encouraged me to build a computer lab with old PCs. I spent the next three and a half years as the department's volunteer0 sysadmin and occasional developer, gaining essential skills that later led me to a lucrative career as a professional sysadmin and software developer. If the lure of software freedom advocacy's relative poverty hadn't sidetracked me, I'd surely still be on that same career path.

    But that wasn't even the first time I developed software and got computers working as a volunteer. Indeed, every computer geek I know was compelled to write code and do interesting things with computers from the earliest of ages. We didn't enter Computer Science because we wanted to make money from it; we make a living in computing because we love it and are driven to do it, regardless of how much we get paid for it. I've observed that dedicated, smart people who are really serious about something end up making a full-time living at that something, one way or the other.

    Frankly, there's an undertone in Guzdial's comments on this point that I find disturbing. The idea of luring people to Computer Science through job availability is insidious. I was an undergraduate student right before the upward curve in CS majors, and a graduate student during the plateau (See PDF page 4 of the Taulbee Survey for graphs). As an undergraduate, I saw the very beginnings of people majoring in Computer Science “for the money”, and as a graduate student, I was surrounded by these sorts of undergraduates. Ultimately, I don't think our field is better off for having such people in it. Software is best when it's designed and written by people who live to make it better — people who really hate to go to bed with a bug still open. I must constantly resist the urge to fix any given broken piece of software in front of me lest I lose focus on my primary task of the moment. Every good developer I've met has the same urge. In my experience, when you see software developed by someone who doesn't have this drive, you see clearly that it's (at best) substandard, and (usually) pure junk. That's what we're headed for if we encourage students to major in Computer Science “for the money”. If students' passion is making money for its own sake, we should encourage them to be investment bankers, not software developers, sysadmins, and Computer Scientists.

    Guzdial's final point is that our community is telling newcomers that programming is all that matters. The only evidence Guzdial gives for this assertion is a pithy quote from Linus Torvalds. If Guzdial actually listened to interviews that Torvalds has given, Guzdial would hear that Torvalds cares about a lot more than just code, and spends most of his time in natural language discussions with developers. The Linux community doesn't just require code; it requires code plus a well-argued position of why the code is right for the users.

    Guzdial's primary point here, though, is that FLOSS ignores usability. Using Torvalds and the Linux community as the example here makes little sense, since “usability” of a kernel is about APIs for fellow programmers. Linus' kernel is the pinnacle of usability measured against the userbase who interacts with it directly. If a kernel is something non-technical users are aware of “using”, then it's probably not a very usable kernel.

    But Guzdial's comment isn't really about the kernel; instead, he subtly insults the GNOME community (and other GUI-oriented FLOSS projects). Usability work is quite expensive, but nevertheless the GNOME community (and others) desperately want it done and try constantly to fund it. In fact, very recently, there has been great worry in the GNOME community that Oracle's purchase of Sun means that various usability-related projects are losing funding. I encourage Guzdial to get in touch with projects like the GNOME accessibility and usability projects before he assumes that one offhand quote from Linus defines the entire FLOSS community's position on end-user usability.

    As a final anecdote, I will briefly tell the story of my year teaching high school. I was actively recruited (again, yet another job I got because of my involvement in FLOSS!) to teach a high school AP Computer Science class while I was still in graduate school in Cincinnati. The students built the computer lab themselves from scratch, which one student still claims is one of his proudest accomplishments. I had planned to teach only ‘A’ topics, but the students were so excited to learn that we ended up doing the whole ‘AB’ course. All but two of the approximately twenty students took the AP exam. All who took it at least passed, while most excelled. Many of them now have fruitful careers in computing and other sciences.

    I realize this is one class of students in one high school. But that's somewhat the point here. The excitement and the “do it yourself” inspiration of the FLOSS world pushed a random group of high school students into action to build their own lab and get the administration to recruit a teacher for them. I got the job as their teacher precisely because of my involvement in FLOSS. There is no reason to believe this success story of FLOSS in education is an aberration. More likely, Guzdial is making oversimplifications about something he hasn't bothered to examine fully.

    Finally, I should note that Guzdial used Michael Terry's work as a jumping off point for his comments. I've met, seen talks by, and exchanged email with Terry and his graduate students. I admit that I haven't read Terry's most recent papers, but I have read some of the older ones and am familiar generally with his work. I was thus not surprised to find that Terry clarified that his position differs from Guzdial's, in particular noting that we found that open source developers most certainly do care about the usability of their software, but that those developers make an error by focusing too much on a small subset of their userbase (i.e., the loudest). I can certainly verify that fact from the anecdotal side. Generally speaking, I know that Terry is very concerned about FLOSS usability, and I think that our community should work with him to see what we can learn from his research. I have never known Terry to be dismissive of the incredible value of FLOSS and its potential for improvement, particularly in the area of usability. Terry's goal, it seems to me, is to convince and assist FLOSS developers to improve the usability of our software, and that's certainly a constructive goal I do support.

    (BTW, I mostly used last names throughout this post because Mark, Michael, and Greg are relatively common names and I can think of a dozen FLOSS celebrities who have one of those first names. :)


    0Technically, I was “paid” in that I was given my own office in the department because I was willing to do the sysadmin duties. It was nice to be the only undergraduate on campus (outside of student government) with my own office.

    Posted on Wednesday 17 February 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-02-08: The New Era of Big Company Forks

    I was intrigued to read Greg Kroah-Hartman's analysis of what's gone wrong with the Android fork of Linux, and the discussion that followed on lwn.net. Like Greg, I am hopeful that the Android platform has a future that will work closely with upstream developers. I also have my own agenda: I believe Android/Linux is the closest thing we have to a viable fully FaiF phone operating system platform to take on the proprietary alternatives like the BlackBerry and the iPhone.

    I believe Greg's comments hint at a “new era” problem that the FLOSS community hasn't yet learned to solve. In the “old days”, we had only big proprietary companies like Apple and Microsoft that had little interest in ever touching copylefted software. They didn't want to make improvements and share them. Back then (and today too) they prefer to consume all the permissively licensed Free Software they can, and release/maintain proprietary forks for years.

    I'm often critical of Google, but I must admit Google is (at least sometimes) not afraid of dumping code on a regular basis to the public, at least when it behooves them to do it0. A source-available Android/Linux helps Google, because Google executives know the profit can be found in pushing proprietary user-space Android application programs that link to Google's advertising. They don't want to fight with Apple or Research in Motion to get their ads onto those platforms; they'll instead use Free Software to shift the underlying platform.

    So, in this case, the interests of software freedom align a bit with Google's for-profit motive. We want a fully FaiF phone operating system, that also has a vibrant group of Free Software applications for that operating system. While Google doesn't care a bit about Free Software applications on the phone, they need a readily available phone operating system so that many hardware phone manufacturers will adopt it. The FLOSS community and Google thus can work together here, in much the same way various companies have always helped improve GNU/Linux on the desktop because they thought it would foil their competitors (i.e., Microsoft and Apple).

    Yet, the problematic spot for FLOSS developers is that Google doesn't actually need our development help. Sure, Google needs the FLOSS licenses we developed, and they need access to the upstream. But they have that by default; all that knowledge and code is public. Meanwhile, they can easily afford to have their engineers maintain Android's Linux fork indefinitely, and can more or less ignore Greg's suggestions for shepherding the code upstream. A small company with limited resources would have to listen to Greg, lest the endeavor run out of steam. But Google has plenty of steam.

    We're thus left appealing to Google's sense of decency, goodwill, collaboration, and other software freedom principles that don't necessarily make an impact on their business. This can be a losing battle when communicating with a for-profit company (particularly a publicly traded one). They have no self-interested, for-profit reason to work with upstream; they can hire as many good Linux hackers as they need to keep their fork going.

    This new era problem is actually harder than the old problem. In other words, I can't simply write an anti-Google blog post here like I'd write an anti-Apple one. Google is releasing their changes, making them available. They even have a public git repository for (at least) the HTC Dream platform. True, I can and do criticize both Google and HTC for making some hardware interface libraries1 proprietary, but that makes them akin to NVidia, not Microsoft and Apple.

    I don't have an answer for this problem; I suggest only that our community get serious about volunteer development and improvement of Android/Linux. When Free Software started, we needed people to spend their nights and weekends writing Free Software because there weren't any companies and for-profit business models to pay them yet. The community even donated to Free Software charitable non-profits to sponsor development that served the public. The need for that hasn't diminished; it's actually increased. Now, there is more code than ever available under FaiF licenses, but the not-for-profit community resources to shepherd that code in a community-oriented direction are more limited than ever. For-profit employers are beginning to control the destiny of more community developers, and this will lead to more scenarios like the one Greg describes. We need people to step forward and say: I want to do what's right with this code for this particular userbase, not what's right for one company. I hope someone will see the value in this community-directed type of development and fund it, but for the meantime, it has my nights and weekends. Just about every famous FLOSS hacker today started with that attitude. We need a bit more of that to go around.

    (I don't think I can end a blog post on this topic without giving a little bit of kudos to a company with which I rarely agree: Novell. As near as I can tell, despite the many negative things Novell does, they have created a position for Greg that allows him to do what's right for Linux with what appears to be minimal interference. They deserve credit for this, and I think more companies that benefit from FLOSS should create more positions like this. Or, even better, create such positions through non-profit intermediaries, as the companies that fund the Linux Foundation do for Linus Torvalds.)


    0Compare this to Apple, which is so allergic to copyleft licenses that they will do bizarre things that are clearly against their own interest and more or less a waste of time merely to avoid GPL'd codebases.

    1Updated: I originally wrote drivers here, but Greg pointed out that there aren't actually Linux drivers that are proprietary. I am not sure what to call these various .so files which are clearly designed to interface with the HTC hardware in some way, so I just called them hardware interface libraries.

    Posted on Monday 08 February 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-02-02: I Think I Just Got Patented.

    I could not think of anything but the South Park quote, They took our jobs! when I read today Black Duck's announcement of their patent, Resolving License Dependencies For Aggregations of Legally-Protectable Content.

    I've read through the patent, from the point of view of someone skilled in this particular art. In fact, I'm specifically skilled in two distinct arts related to this patent: computer programming and Free Software license compatibility analysis. It's from that perspective that I took a look at this patent.

    (BTW, the thing to always remember about reading patents is that the really significant part isn't the abstract, which often contains pie-in-the-sky prose about what the patent covers. The claims are the real details of the so-called “invention”.)

    So, when I look closely at these claims, I am appalled to discover this patent claims, as a novel invention, things that I've done regularly, with a mix of my brain and a computer, since at least 1999. I quickly came to the conclusion that this is yet another stupid patent granted by the USPTO that it would be better to just ignore.

    Indeed, ever since Amazon's one-click patent, I've hated the inundation of “look what stupid patent was granted today” slashdot items. I think it's a waste of time, generally speaking, since the USPTO is granting many stupid software patents every single day. If we spend our time gawking and saying how stupid they are, we don't get any real work done.

    But, the (likely obvious) reason this caught my attention is that the patent covers activities I've done regularly for so long. It gives me this sick feeling in my stomach to read someone else claiming as an invention something I've done and considered quite obvious for more than a decade.

    I'm not a patent agent (nor do I want to be — spending a week of my life studying for a silly exam to get some credential hasn't been attractive to me since I got my Master's degree), but honestly, I can't see how this patented process isn't obvious to everyone skilled in the arts of FLOSS license evaluation and computer programming. Indeed, the process described is so simple-minded that it's a waste of effort, in my view, to write a whole software system to do it. With a few one-off 10-line Perl programs and a few greps, I've had a computer assist me with processes like this one many times since the late 1990s.
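
    For flavor, here's the sort of throwaway helper I mean, translated into Python; the notice patterns and the single compatibility check below are simplified toy examples, not a real license analysis:

        # Toy license-notice scanner: a one-off helper, not a product.
        import os
        import re

        # Simplified patterns; a real pass reads the full notices, not just strings.
        PATTERNS = {
            "GPLv2-only": re.compile(r"version 2 of the License(?!, or)"),
            "GPLv2-or-later": re.compile(r"version 2 of the License, or"),
            "Apache-2.0": re.compile(r"Apache License,? Version 2\.0"),
        }

        found = set()
        for root, _, files in os.walk("."):
            for name in files:
                if not name.endswith((".c", ".h", ".py", ".js")):
                    continue
                with open(os.path.join(root, name), errors="ignore") as f:
                    head = f.read(4096)  # license notices live at the top
                for label, pattern in PATTERNS.items():
                    if pattern.search(head):
                        found.add(label)

        print("licenses seen:", ", ".join(sorted(found)) or "none")
        # One well-known check: Apache-2.0 code can't go into a GPLv2-only work.
        if {"GPLv2-only", "Apache-2.0"} <= found:
            print("warning: GPLv2-only plus Apache-2.0 needs a closer look")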

    I do feel some shame that I've now contributed to the “hey, everyone, let's gawk at this silly pointless surely-invalid patent” rant. I guess that I have new sympathy for website designers who were so personally offended by the Amazon one-click patent. I can now confirm first-hand: it does really feel different when the patent claims seem close to an activity you've engaged in yourself for many years prior to the patent application. That's when the horribleness of the software patent system really hits home.

    The saddest part, though, is that Black Duck again shows itself as a company whose primary goal is to prey on people's fear of software freedom. They make proprietary software and acquire software patents with the primary goal of scaring people into buying stuff they probably don't need. I've spent a lot more time working regularly on FLOSS license compliance than anyone who has ever worked at Black Duck. Simply put, coming into (and staying in) compliance is a much simpler process than they say, and can be done easily without the use of overpriced proprietary analysis of codebases.

    Posted on Tuesday 02 February 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-02-01: Not All Copyright Assignment is Created Equal

    In an interview with IT Wire, Mark Shuttleworth argues that all copyright assignment systems are equal, saying further that what Intel, Canonical and other for-profit companies ask for in the process are the same things asked for by Free Software non-profit organizations like the Free Software Foundation.

    I've written about this before, and recently quit using Ubuntu in part because of Canonical's assignment policies (which are, as Mark correctly points out, not that different from other for-profit companies' assignment forms).

    However, it's quite disingenuous for companies to point to the long-standing tradition of copyright assignment to the FSF as a justification for their own practices. There are two key differences that people like Shuttleworth constantly gloss over or outright ignore:

    • FSF promises to never make their software proprietary. Shuttleworth claims that All copyright assignment agreements empower dual licensing, and relicensing, but that is simply a false statement if you include FSF in the “All”. FSF promises to never proprietarize its versions of the software assigned to it and always release its versions of the software under Free Software licenses.
    • Non-profits have a different duty to the public. For-profit companies have one duty: to make money for their owners and/or shareholders. Non-profit organizations, by contrast, are chartered to carry out the public good. Therefore, they cannot liberally ignore what's in the public good just because it makes some money. An organization like FSF, which has a public charter that explicitly says that it seeks to advance software freedom, would fail to carry out its public mission if it engaged in proprietary relicensing.

    It seems that Mark Shuttleworth wants to confuse us about copyright assignment so we just start signing away our software. In essence, companies try to bank on the goodwill created by the FSF copyright assignment process over the years to convince developers to give up their rights under GPL and hand over their hard work for virtually nothing in return. We shouldn't give in.

    I am not opposed to copyright assignment in the least; in fact, I support it in many cases. However, without assurances that otherwise-copylefted software won't be relicensed as proprietary software, developers should treat a copyright assignment process with maximum skepticism. Furthermore, we should simply not tolerate attempts by for-profit companies to confuse the developer community by comparing as equals copyright assignment systems that are radically different in their intent, execution, and consequences.

    (Some useful additional reading: my “Open Core” Is the New Shareware, Michael Meeks' Thoughts on Copyright Assignment, Dave Neary's Copyright assignment and other barriers to entry, and this LWN article.)

    Posted on Monday 01 February 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

January

  • 2010-01-26: Proud to Be a Member of GNOME Foundation

    I suppose that I should have applied years ago to be a member of the GNOME Foundation. I have served since 2001 as the Free Software Foundation's representative on the GNOME Advisory Board, and have worked hard the last nine years to maintain a good relationship between the FSF and the GNOME Foundation. Indeed, I was very glad and willing when FSF asked me to continue to serve in this role as a volunteer after I left employment of the FSF in 2005.

    Regarding actual GNOME Foundation membership, though, I suppose that I previously felt under-qualified to apply since (a) my personal avoidance of all things GUI is widely known, and (b) obviously I haven't contributed any code or even documentation to GNOME. The most I've done on the development side is the occasional bug report over the years. Yet, ever since I was finally able to switch the non-technical users in my life over to GNU/Linux, I've been very grateful for and supportive of GNOME and its mission to create a Free Software desktop that everyone — not just computer geeks — can use effectively.

    Meanwhile, Leslie Hawthorn reminded me recently to stop perpetuating the false belief that the only useful FLOSS contributions are code and documentation. I think it was her point that encouraged me to apply for GNOME Foundation membership. I was excited to receive my acceptance this morning.

    Many people in the GNOME community already know that I'm a good contact person if you have any issues that relate to the relationship between GNOME and GNU or between FSF and GNOME Foundation (these are, BTW, two clear and distinct sets of relationships). I'll take this opportunity to remind everyone that if you ever have a concern related to these relationships, I am always glad to assist in my diplomatic role between the two organizations (and projects).

    And, of course, as I have for years, I remain available to the GNOME community for the occasional licensing policy questions and/or GPL enforcement assistance.

    I very much hope to go to GUADEC this year, as I have not been in six years! However, I'm a bit worried about the tight scheduling between it and OSCON (which would mean at least two and a half weeks away in a row!), but I'll strive to be there.

    Posted on Tuesday 26 January 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2010-01-14: Back Home, with Debian!

    By the end of 2004, I'd been running Debian ‘testing’ on my laptop since around early 2003. For almost two years, I'd lived with periodic instability — including a week in the spring of 2003 when I couldn't even get X11 started — for the sake of using a distribution that maximally respected software freedom.

    I'd had no trouble with ‘potato’ for its two-year lifespan, but after 6-8 months of woody, I was backporting far too much and couldn't spare the time for upkeep. Running ‘testing’ was the next best option, as I could pin myself for 3-6 months at a time on a particularly stable day and have a de-facto “release”. But slowly I became unable to spare the time for even that work, and I was ready to throw up my hands in surrender.

    At just about that time, a thing called ‘warty’ was released. I'd already heard about this company, Canonical, as they'd tried earlier that year to buy a domain name I technically own (canonical.org), but had long since given over to a group of old friends. (They of course had no interest in selling such a “hot property”.) This new distribution, Ubuntu, was Debian-based, and when installed, it “felt” like Debian. Canonical was committed to a six-month release schedule, so I said to myself: well, if I have to ‘go corporate’ again, I might as well go to something that works like the distribution I prefer. And so, my five-year stint as an Ubuntu user began.

    Of course, I hadn't always been a Debian user. I started in 1992 with SLS and quickly moved to Slackware. When the pain of that got too great, I went “corporate” for a while back then, too. I used Red Hat Linux from early 1996 until 1998. I ultimately gave up Red Hat because the distribution eventually became focused around the advancement of the company. They were happy to include lots of proprietary software — indeed, in the later 1990s, Red Hat CDs typically came with as many as two extra CDs filled with proprietary software. Red Hat (the company) had earlier made some efforts to appease us harder-core software-freedom folks. But, by the late 1990s, their briefly-lived RMS (aka Red Hat Means Source) distribution had withered completely. By then, I truly regretted my 1996 decision to go corporate, and quickly fell in love with Debian and its community-led, software-freedom-driven development. I remained a Debian user from 1998 until 2004.

    But, by the end of 2004, the pain of waiting for ‘sarge’ was great. So, for technical reasons only, “going corporate” again seemed like a reasonable trade-off. Ubuntu initially looked basically like Debian: ‘main’ and ‘universe’ were FaiF, ‘restricted’ was like ‘non-free’.

    Sadly, though, a for-profit, corporate-controlled distribution can never remain community-oriented. A for-profit company is eventually always going to put the acquisition of wealth above any community principle. So it has become with Ubuntu, in my view. The time has come (for me, at least) to go back to a truly community-oriented, software-freedom-respecting distribution. (Hopefully, I'll also never be tempted to leave again.)

    I didn't take this decision lightly, and didn't take it for only one reason. I've gone back to Debian for three (now seven) specific reasons:

    (Updated on 2010-02-17: As can be seen above, my mere list of three reasons posted just one month ago has now more than doubled! It's as if Canonical made a 2010 plan to “do less software freedom”, and is executing it with amazing corporate efficiency. As Queen Gertrude says in Hamlet, One woe doth tread upon another's heel, so fast they follow.)

    When I consider all this and take a step back to look at the status of major distributions, my honest assessment is this: among the two primary corporate-controlled-but-dabbling-in-community-orientation distributions (aka Fedora and Ubuntu), Fedora is clearly much more software-freedom-friendly. Nevertheless, since I've twice gone corporate and ultimately regretted it, I decided it was time to go back home — back to Debian.

    So, during the last week of 2009, I took nearly two full days off to reinstall and configure my laptop from scratch with lenny. I've thus been back on Debian since 2010-01-01. Twelve days in, I am very impressed. Really, all the things I liked about Ubuntu are now available upstream as well. This isn't the distribution I left in 2004; it's much better, all while being truly community-oriented and software-freedom-respecting. It's good to be home. Thank you, Debian developers.


    0 For more information on the danger that proprietary network services pose to software freedom, please see the Franklin Street Statement.

    Posted on Thursday 14 January 2010 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

2009

December

  • 2009-12-14: Litigation filed against Various GPL Violators

    I probably won't comment too much on the specifics at this point, but I wanted to make sure everyone saw that SFLC filed a lawsuit against fourteen GPL violators today on behalf of the Software Freedom Conservancy and Erik Andersen. A PDF copy of the complaint is available.

    Posted on Monday 14 December 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-12-10: Thanks to Rafael Rivera, an Excellent GPL Compliance Engineer

    I'd like to congratulate Rafael Rivera on his successful GPL compliance work regarding the Microsoft WUDT software, which is apparently used to make ISOs from stuff you downloaded from Microsoft.

    I'm of course against the idea of using Microsoft Windows, and why you'd ever want to make an ISO out of some Microsoft Windows stuff is beyond my comprehension. However, Rafael identified that the WUDT was based on some GPL'd software, and as such he was quite correct in demanding that Microsoft comply with the terms of the GPL (as it has done before, for example, with its Windows Services for Unix). Rafael was the first to discover and point out this violation. More importantly, he also did what we in the GPL enforcement world call the “compliance engineering work”, which includes confirming by technical measures that the violation exists, and checking that the complete and corresponding source code actually builds and installs the binary as expected.

    The importance of that latter part of the work is unfortunately not often identified. The GPL is designed to hook up the legal requirements of a copyright license with certain technical requirements needed to allow downstream users to modify and improve the software. This is the true innovation of the GPL: to make copyright law into a tool that gives users the actual means to improve and redistribute modified versions of software.

    When we check to see if someone is in compliance, it's not merely about seeing if they dumped a big pile of source onto the world. We also have to check carefully that the source builds and that the process produces a working binary that can be installed by the user. That's why GPLv2 requires the scripts used to control compilation and installation of the executable, and why GPLv3 clarifies that requirement even further into the formally defined Installation Information.
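
    As a toy illustration of the mechanical half of that check, a first pass often amounts to something like the sketch below. The tarball layout and the ./configure-and-make convention are assumptions for illustration only; a real C&CS verification must follow whatever build instructions the distributor actually ships:

        #!/usr/bin/perl
        # Sketch: unpack a source release, build it, and confirm a binary
        # actually gets produced.  The "./configure && make" convention is
        # assumed here for illustration; real checks follow the distributor's
        # own build instructions (the GPLv2 "scripts", or the GPLv3
        # Installation Information).
        use strict;
        use warnings;

        my ($tarball, $binary) = @ARGV;
        die "usage: $0 SOURCE-TARBALL BINARY-NAME\n"
            unless defined $tarball and defined $binary;

        sub run {
            my ($cmd) = @_;
            print "+ $cmd\n";
            system($cmd) == 0 or die "FAILED: $cmd\n";
        }

        run("tar xf $tarball");
        (my $dir = $tarball) =~ s/\.tar\.(?:gz|bz2|xz)$//;
        chdir $dir or die "cannot chdir to $dir: $!\n";

        run("./configure") if -x "./configure";
        run("make");

        # A produced binary is necessary but not sufficient: the harder,
        # manual step is verifying that it corresponds to what shipped.
        die "build completed but no '$binary' was produced\n" unless -e $binary;
        print "build OK: '$binary' produced; now compare it to the shipped binary\n";

    When even this trivial pass fails (the source doesn't build, or builds something other than what shipped), the “big pile of source” clearly isn't complete and corresponding.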

    Thanks again to Rafael for doing this work. While everyone knows how often I fault Microsoft, I have to say they did a timely job in this particular case. A little under a month is actually about the best one can hope for from the initial report of a problem to a violator to having in our hands complete and corresponding source code (or “C&CS”, as we GPL enforcement geeks call it). Microsoft should have known better than to screw this up after years of working with the GPL, but everyone makes mistakes, and the real measure of a company is how quickly they redress a mistake.

    Now if we could just get Microsoft to stop the more harmful mistake of attacking FLOSS with patents, but that's a tougher problem to solve…

    Posted on Thursday 10 December 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-12-06: The Anatomy of a Modern GPL Violation

    I've been thinking the last few weeks about the evolution of the GPL violation. After ten years of being involved with GPL enforcement, it seems like a good time to think about how things have changed.

    Roughly, the typical GPL violation tracks almost directly the adoption and spread of Free Software. When I started finding GPL violations, it was in a day when Big Iron Unix was still king (although it was only a few years away from collapse), and the GNU tools were just becoming state of the art. Indeed, as a sysadmin, I typically took a proprietary Unix system, and built a /usr/local/ filled with the GNU tools, because I hated POSIX tools that didn't have all the GNU extensions.

    At the time, many vendors were discovering the same frustrations I was as a sysadmin. Thus, the typical violation in those days was a third-party vendor incorporating some GNU tools into their products, for use on some Big Iron Unix. This was the age of the violating backup product; we frequently saw backup products that violated the GPL on GNU tar in those days.

    As times changed, and computers got truly smaller, the embedded Unix-like system was born. GNU/Linux and (more commonly) BusyBox/Linux were the perfect solutions for this space. What was once a joke on comp.os.linux.advocacy in the 1990s began to turn into a reality: it was actually nearly possible for Linux to run on your toaster.

    The first class of embedded devices that were BusyBox/Linux-based were the wireless routers. Throughout the 2000s, the typical violation was always some wireless router. I still occasionally see those types of products violating the GPL, but I think the near-constant enforcement done by Erik Andersen, FSF, and Harald Welte throughout the 2000s has led the wireless router violation to become the exception rather than the rule. That enforcement also led to the birth of community-focused development of OpenWRT and DD-WRT, which all started from the first enforcement action that we (Erik, Harald, and FSF (where I was at the time)) did together in 2002 to ensure the WRT54G source release.

    In 2009, there's a general purpose computer in almost every electronics product. Putting a computer with 8MB RAM and a reasonable processor in a device is now a common default. Well, BusyBox/Linux was always the perfect operating system for that type of computer! So, when you walk through the aisles of the big electronics vendors today, it's pretty likely that many of the devices you see are BusyBox/Linux ones.

    Some people think that a company can just get away with ignoring the GPL and the requirements of copyleft. Perhaps if a company has five customers total, and none of them ask for source, its violation may never be discovered. But, if you produce a mass market product based on BusyBox/Linux, some smart software developer is going to eventually buy one. They are going to get curious, and when they poke, they'll see what you put in there. And, that developer's next email is going to be to me to tell me all about that device. In my ten years of enforcement experience, I find that a company's odds of “getting away” with a GPL violation are incredibly low. The user community eventually notices and either publicly shames the company (not my preferred enforcement method), or they contact someone like me to pursue enforcement privately and encourage the company in a friendly way to join the FLOSS community rather than work against it.
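
    For the curious, that first “poke” is often nothing more sophisticated than searching the firmware image for telltale strings. Here's a hypothetical sketch (the exact patterns are assumptions; a serious investigation goes much further, extracting filesystems and comparing binaries):

        #!/usr/bin/perl
        # Sketch: a first-pass "poke" at a firmware image, scanning the raw
        # bytes for strings that commonly betray GPL'd components.
        use strict;
        use warnings;

        my $image = shift @ARGV or die "usage: $0 FIRMWARE-IMAGE\n";
        open my $fh, '<:raw', $image or die "cannot open $image: $!\n";
        my $bytes = do { local $/; <$fh> };   # slurp; firmware images are small
        close $fh;

        my @telltales = (
            qr/BusyBox v[\d.]+/,
            qr/Linux version [\d.]+/,
            qr/GNU General Public License/,
            qr{/lib/modules/},
        );

        foreach my $re (@telltales) {
            while ($bytes =~ /($re[\x20-\x7e]{0,40})/g) {
                print "$1\n";    # the match, plus some printable context
            }
        }

    Nothing about this requires special tools, which is exactly why mass-market violations don't stay hidden for long.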

    I absolutely love that so many companies have adopted BusyBox/Linux as their default platform for many new products. Since circa 1994, when I first saw the “can my toaster run Linux?” joke, I've dreamed of a time when it would be impossible to buy a mass-market electronics product without finding FLOSS inside. I'm delighted we've nearly reached that era during my lifetime.

    However, such innovation is made possible by the commons created by the GPL. I have dedicated a large portion of my adult life to GPL enforcement precisely because I believe deeply in the value of that commons. As I find violator after violator, I look forward to welcoming them to our community in a friendly way, and ask them to respect the commons that gave them so much, and give their code back to the community that got them started.

    Posted on Sunday 06 December 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

November

  • 2009-11-08: GPL Enforcement: Don't Jump to Conclusions, But Do Report Violations

    In one of my favorite movies, Office Space, Tom Smykowski (one of the fired employees) has a magic-eight-ball-style novelty product idea: a “Jump to Conclusions” mat. Sometimes, I watch discussions in the software freedom community and think that, as a community, we're all jumping around on one of these mats.

    I find that people are most likely to do this when something seems novel and exciting. I don't really blame anyone for doing it; I do it myself when I have discovered an exciting thing that's new to me, even if it's well known by others. But, often, this new thing is actually rather mundane, and it's better to check in with the existing knowledge about the idea before “jumping” to any conclusions. In other words, the best square on the mat for us to land on is the one that reads: Think again!

    Meanwhile, as some who follow my microblog know, I've been on a mission in recent months to establish just how common and mundane GPL violations are. Since 21 August 2009, I've been finding one new GPL-violating company per day (on average) and I am still on target to find one per day for 365 days straight. When I tell this to people who are new to GPL enforcement, they are surprised and impressed. However, when I tell people who have done GPL enforcement themselves, they usually say some version of: Am I supposed to be impressed by that? Couldn't a monkey do that? Fact is, the latter are a little bit right: there are so many GPL violations that I might easily be able to go on finding one per day for two years straight.

    In short, GPL violations are common and everyday occurrences. I believe firmly they should be addressed, and I continue to dedicate much of my life to resolve them. However, finding yet another GPL violation isn't a huge and earth-shaking discovery. Indeed, it's what I was doing today to kill time while drinking my Sunday morning coffee.

    I don't mean to imply that I don't appreciate greatly when folks find new GPL violations. I think finding and reporting GPL violations is a very valuable service, and I wouldn't spend so much time finding them myself if I didn't value the work highly. But, the work is more akin to closing annoying bugs than it is to launching a paradigm-shifting FLOSS project. Closing bugs is an essential part of FLOSS development, but no one blogs about every single bug they close (although maybe we do microblog them ;).

    Having this weekend witnessed another community tempest about a potential GPL violation, I decided to share a few guidelines that I encourage everyone to follow when finding a GPL violation. (In other words, what follows are some basic guidelines for reporting violations; other such guides are also available at the FSF's site and the gpl-violations.org site.)

    • Assume the violation is an oversight or an accident by the violator until you have clear evidence that tells you differently. I'd say that 98% of the violations I've ever worked on since 1998 have been unintentional and due primarily to negligence, not malice.

    • Don't go public first. Back around late 1999, when I found my first GPL violation from scratch, I wanted to post it to every mailing list I could find and shame that company that failed to respect and cooperate with the software freedom community. I'm glad that I didn't do that, because I've since seen similar actions destroy the lines of communication with violators, and make resolution tougher. Indeed, I believe that if the Cisco/Linksys violations had not been a center of public ridicule in 2003 when I (then at the FSF) was in the midst of negotiating with them for compliance, we would not have ended up with such a long saga to resolution.

    • Do contact the copyright holders, or their designated enforcement agents. Since the GPL is a copyright license, if the violator fails to comply on their own, only the copyright holder (typically) has the power to enforce the license0. Here's a list of contact addresses that I know for reporting various violations (if you know more such addresses, please let me know and I'll add them here):

      If the GPL'd project you've found a violation on isn't on the list above, just find email addresses of people with commit access to the repository for the project or with email addresses in the MAINTAINERS or CONTRIBUTORS files. It's better not to post the violation to a public discussion list for the project, as that's just “going public”.

    • Never treat a “community violator” the same way as a for-profit violator. I believe there is a fundamental difference between someone who makes a profit during the act of infringement and someone who merely seeks to contribute as a volunteer and screws something up. There isn't a perfect line between the two — it's a spectrum. However, those who don't make any money from their infringement are probably just confused community members who misunderstood the GPL and deserve pure education and non-aggressive enforcement. Those who make money from the infringement deserve some friendly education too, of course, but ultimately they are making a profit by ignoring the rights of their users. I think these situations are fundamentally different, and deserve different tactics.

    • Once you've reported a violation, please be patient with those of us doing enforcement. There are always hundreds of GPL violations that need action, and there are very few of us engaged in regular and active enforcement. Also, most of us try to get compliance not just on the copyrights we represent, but all GPL'd software. (This behooves both the software freedom community and the violator, as the former wants to see broad compliance, and the latter doesn't want to deal with each copyright holder individually). Thus, it takes much time and effort to do each enforcement action. So, when you report a new violation, it might take some time for the situation to resolve.

    • Do try your best to request source from the violator on your own. While making the violation public doesn't help, inquiring privately does often help. If you have received distribution of a binary that you think is GPL'd or LGPL'd (or used a network service that you think is AGPL'd), do write to the violator (typically best to use the technical support channels) and ask for the complete and corresponding source code. Be as polite and friendly as possible, and always assume it is their intention to comply until you have specific evidence that they don't intend to do so.

    • Share as much good information with the violator as you can to encourage their compliance. My colleagues and I wrote A Practical Guide to GPL Compliance for just this purpose.

    We need a careful balance regarding GPL enforcement. Remember that the primary goal of the GPL is to encourage more software freedom in the world. For many violators, their first experience with FLOSS is an enforcement action. We therefore must ensure that enforcement action is reasonable and friendly. I view every GPL violator as a potential FLOSS contributor, and try my best to open every enforcement action with that attitude. I am human and thus sometimes become more frustrated with uncooperative violators than I should. However, striving for kindness with violators only helps the software freedom community's image.


    0In some situations, a few possibilities exist for users if the copyright holder is unable or unwilling to enforce the GPL. We've actually recently seen an interesting successful enforcement by a user. I plan to blog in detail about this soon.

    Posted on Sunday 08 November 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-11-04: Android/Linux's Future and Advancement of Mobile Software Freedom

    Harald Welte knows more about development of embedded systems than I ever will. So, I generally defer completely to his views about software freedom development for embedded systems. However, as you can tell by that opening, I am setting myself up to disagree a little bit with him just this once on the topic. :)

    But first, let me point out where we agree: I think his recent blog post about what Android/Linux is not should be read by everyone interested in software freedom for mobile devices. (Harald's post also refers to a presentation by Matt Porter. I agree with Harald that the talk is worth looking at closely.) The primary point Matt and Harald both make is one that Stallman has actually made for years: Linux is an operating system kernel, not a whole system for a user. That's why I started saying Android/Linux to refer to this new phone platform. It's just the kernel, Linux, with a bunch of Java stuff on top. As Matt points out, it doesn't even use a common Linux-oriented C library, such as uClibc or the GNU C Library; it uses a BSD-derived libc called Bionic.

    Indeed, my colleague Aaron Williamson discovered this fact quickly five months ago when he started trying to make a fully FaiF Android/Linux platform on the HTC Dream. I was amazed and aghast when he told me about adb and how there is no real shell on the device by default. It's not a GNU/Linux system, and that becomes quickly and painfully obvious to anyone who looks at developing for the platform. On this much, I agree with Harald entirely: this is a foreign system that will be very strange to most GNU/Linux hackers.

    Once I learned this fact, I immediately pondered: Why did Google build Android in this way? Why not make it GNU/Linux like the OpenMoko? I concluded that there are probably a few reasons:

    • First, while Linux is easy to cram into a small space, particularly with BusyBox and uClibc, if you want something that is both really small and has a nice GUI API, it's a bit tougher to get right. There is a reason the OpenMoko software stack was tough to get right and still has issues. Maemo, too, has had great struggles in its history that may not be fully overcome.
    • Second, Google probably badly wanted Java as the native application language, due to its ubiquity. I dislike Java more than most, but there's no denying that nearly all undergraduate Computer Science students of the last ten years did most of their work in Java. Java is more foreign to most GNU/Linux developers than Python, Perl, Ruby and the like, but to the average programmer in the world, Java is the lingua franca.
    • Third, and probably most troubling, Google wanted to have as little GPL'd and LGPL'd stuff in the stack as possible. Their goal isn't software freedom; it is to convince phone carriers and manufacturers to make Google's proprietary applications the default mobile application set. The operating system is pure commodity to sell the proprietary applications. So, from Google's perspective, the more permissively licensed stuff in the Android/Linux base system, the better.

    Once you ponder all this, the obvious next question is: Should we bother with this platform, or focus on GNU/Linux instead? In fact, this very question comes up almost weekly over on the Replicant project's IRC channel (#replicant on freenode). Harald's arguments for GNU/Linux are good ones, and as I tell my fellow Replicant hackers, I don't begrudge anyone who wants to focus on that line of development. However, I think this is the place where I disagree with Harald: I think the freed Android code does have an important future in the advancement of software freedom.

    We have to consider carefully here, as Android/Linux puts us in a place software freedom developers have never been before. Namely, we have an operating system whose primary deployments are proprietary, but whose code is mostly available to us as Free Software, too. Furthermore, this operating system runs on platforms for which we don't yet have a fully working port of GNU/Linux. I think these factors make the decision to port GNU/Linux or fork the mostly-FaiF release into nearly a coin-flip decision.

    However, when deciding where to focus development effort, I think the slight edge goes to Android/Linux. It's not a huge favorite — maybe 54% (i.e., for my fellow poker players, all-in preflop in HE, Android would be the pair, not the unsuited overcards :). Android/Linux deserves the edge primarily because Google and their redistributors (carriers and phone makers) will put a lot of marketing and work into gaining public acceptance of “Android” as an iPhone replacement. We can take advantage of this, and say: What we have is Android too, but you can modify and improve it and run more applications not available in the Android Market! Oh, and if you really really do want that proprietary application from the Market, those will run on our system, too (but we urge you not to use proprietary software). It's simply going to be easier to get people to jailbreak their phones and install a FaiF firmware if it looks almost identical to the one they have, but with a few more features they don't have already.

    So, if porting GNU/Linux and/or BusyBox/Linux to strange new worlds is your hobby, then by all means make it run on the HTC Dream too. In fact, as a pure user I'll probably prefer it once it's ready for prime time. However, I think the strategic move to get more software freedom in the world is to invest development effort into a completely freedom-respecting fork of Android/Linux. (And, yet another shameless plug: we need driver hacker help on Replicant! :)

    Posted on Wednesday 04 November 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

October

  • 2009-10-26: Software Freedom on Mobile Devices

    I agree pretty completely with Harald Welte's comments regarding Symbian. I encourage everyone to take a look at his comments.

    We are in a very precarious time with regard to the freedom of mobile devices. We currently have no truly Free Software operating system that does the job, and there are multiple companies trying to get our attention with code releases that have some Free Software in them. None of these companies have pro-software-freedom motives (obviously, they are for-profit companies, focused solely on their own profits). So, we have to carefully analyze what these proprietary software companies are up to, why they are releasing some code, and determine whether we'll be successful in forking these platforms to build a phone platform that fully respects software freedom.

    We thus must take care not to burn our developer time on likely hopeless codebases. I think Harald's analysis convinces me that Symbian is such a hopeless codebase. They haven't released software we can build for any known phone for sale, and we don't have a compiler that can build the stuff. It's also under a license that isn't a bad one by any means, but it is not a widely used license for operating system software. Symbian's release, thus, is purely of academic interest to historians who might want to study what phone software looked like at the turn of the millennium, before the advent of Linux-based phones.

    Currently, given the demise of mass-market OpenMoko production, our best hope, in my opinion, is the HTC Dream running a modified version of Android/Linux. We don't have 100% Free Software even for that yet, but we are actively working on it, and the list of necessary-to-work proprietary components is down to two libraries. Plus, the Maemo software (and the new device it runs on, not even released yet) is the only other option, and it has quite an extensive list of proprietary components. As far as we can tell currently, the device may even be unusable without a large amount of proprietary software.

    Even so, Android/Linux isn't a Dream (notwithstanding the name of the most widely used hardware platform). It's developed generally by a closed community, who throw software over the wall when they see fit, and we'll have to maintain forks to really make a fully Free Software version. But this is probably going to be true of any Free Software phone platform that a company releases anyway.

    I'll keep watching and expect my assessment will change if facts change. However, unless I see that giant laundry list of proprietary components in Maemo decreasing quickly, I think I'll stick with the least of all these evils, Android/Linux on the HTC Dream. It's by far the closest to having a fully free software platform. Since the only way to get us to freedom is to replace proprietary components one-by-one, picking the closest is just the best path to freedom. At the very least, we should eliminate platforms for which the code can't even be compiled!

    [ PC was kind enough to make a Belorussian translation of this blog post. I can't speak to its accuracy, of course, since I don't know the language. :) ]

    Posted on Monday 26 October 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-10-16: “Open Core” Is the New Shareware

    [ I originally wrote the essay below centered around the term “Open Core”. Although I even say below that the term is somewhat meaningless, I later realized the term was so problematic that it should be abandoned entirely in favor of the clearer term “proprietary relicensing”. However, since this blog post was widely linked to, I've nevertheless left the text as it originally was in October 2009. ]

    There has been some debate recently about so-called “Open Core” business models. Throughout the history of Free Software, companies have loved to come up with “innovative” proprietary-like ways to use the FLOSS licensing structures. Proprietary relicensing, a practice that I believe has proved itself to have serious drawbacks, was probably the first of these, and now Open Core is the next step in this direction. I believe the users embracing these codebases may be ignoring a past they're condemned to repeat.

    Like most buzzwords, Open Core has no real agreed-upon meaning. I'm using it to describe a business model whereby some middleware-ish system is released by a single for-profit corporate copyright holder, who requires that changes be copyright-assigned back to the company, and that company sells proprietary add-ons and applications that use the framework. Often, the model further uses the GPL to forbid anyone but the copyright-holding company from making such proprietary add-on applications (i.e., everyone else would have to GPL their applications). In the current debate, some have proposed that a permissive license structure be used for the core instead.

    Ultimately, “Open Core” is a glorified shareware situation. As a user, you get some subset of functionality, and may even get the four freedoms with regard to that subset. But, when you want the “good stuff”, you've got to take a proprietary license. And, this is true whether the Core is GPL'd or permissively licensed. In both cases, the final story is the same: take a proprietary license or be stuck with cripple-ware.

    This fact remains true whether the Open Core is under a copyleft license or a permissive one. However, I must admit that a permissive license is more intellectually honest to the users. When users encounter a permissive license, they know what they are in for: they may indeed encounter proprietary add-ons and improvements, either from the original distributor or a third party. For example, Apple users sadly know this all too well; Apple loves to build on a permissively licensed core and proprietarize away. Yet, everyone knows what they're getting when they buy Apple's locked down, unmodifiable, and programmer-unfriendly products.

    Meanwhile, in more typical “Open Core” scenarios, the use of the GPL is actually somewhat insidious. I've written before about how the copyleft is a tool, not an end in itself. Like any tool, it can be misused or abused. I think using the GPL as a tool for corporate control over users, while legally permissible, is ignoring the spirit of the license. It creates two classes of users: those precious few that can proprietarize and subjugate others, and those that can't.1

    This (ab)use of GPL has led folks like Matt Aslett to suggest that a permissive licensing solution would serve this model better. While I've admitted such a change would bring some increased intellectual honesty, I don't think it's the solution we should strive for to solve the problem. I think Aslett's completely right when he argues that GPL'd “Open Core” became popular because it's Venture Capitalists' way of making peace with freely licensed copyrights. However, heading to an Apple-like permissive-only structure only serves to make more Apple-like companies, and that's surely not good for software freedom either. In fact, the problem is mostly orthogonal to licensing. It's a community building problem.

    The first move we have to make is simply to give up the idea that the best technology companies are created by VC money. This may be true if your goal is to create proprietary companies, but the best Free Software companies are the small ones, 5-10 employees, that do consulting work and license all their improvements back to a shared codebase. Projects from low-level technology like Linux and GCC to higher-level technology like Joomla all show that this structure yields popular and vibrant codebases. The GPL was created to inspire business and community models like these examples. The VC-controlled proprietary relicensing and “Open Core” models are manipulations of the licensing system. (For more on this part of my argument, I suggest my discussion on Episode 0x14 of the Software Freedom Law Show.)

    I realize that it's challenging for a community to create these sorts of codebases. The best way to start, if you're a small business, is to find a codebase that gets you 40% or so toward your goal and start contributing to the code with your own copyrights, licensed under GPL. Having something that gets you somewhere will make it easier to start your business on a consulting basis without VC money, and allow you to be part of one of these communities instead of trying to create an “Open Core” community you can exploit with proprietary licensing. Furthermore, the fact that you hold copyright alongside others will give you a voice that must be heard in decision-making processes.

    Finally, if you find an otherwise useful single-corporate-copyright-controlled GPL'd codebase from one of these “Open Core” companies, there is something simple you can do:

    Fork! In essence, don't give in to pressure by these companies to assign copyright to them. Get a group of community developers together and maintain a fork of the codebase. Don't be mean about it, and use git or another DVCS to keep tracking branches of the company's releases. If enough key users do this and refuse to assign copyright, the good version will eventually become the community one rather than the company-controlled one.

    My colleague Carlo Piana points out a flaw in this plan, saying the ant cannot drive the elephant. While I agree with Carlo generally, I also think that software freedom has historically been a little bit about ants driving elephants. These semi-proprietary business models are thriving on the fundamental principle of a proprietary model: keep users from cooperating to improve the code on which they all depend. It's a prisoner's dilemma that makes each customer afraid to cooperate with the other for fear that the other will yield to pressure not to cooperate. As the fictional computer Joshua points out, this is a strange game. The only winning move is not to play.

    The software freedom world is more complex than it once was. Ten years ago, we advocates could tell people to look for the GPL label and know that the software would automatically be part of a freedom-friendly, software-sharing community. Not all GPL'd software is created equal anymore, and while the right to fork remains firmly intact, whether such forks will survive, and whether the entity controlling the canonical version can be trusted, is another question entirely. The new advice is: judge the freedom of your codebase not only on its license, but also on the diversity of the community that contributes to it.


    1I must put a fine point here that the only way companies can manipulate the GPL in this example is by demanding full copyright assignment back to the corporate entity. The GPL itself protects each individual contributor from such treatment by other contributors, but when there is only one contributor, those protections evaporate. I must further note that for-profit corporate assignment differs greatly from assignment to a non-profit, as non-profit copyright assignment paperwork typically includes broad legal assurances that the software will never be proprietarized, and furthermore, the non-profit's very corporate existence hinges on engaging only in activity that promotes the public good.

    Posted on Friday 16 October 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-10-11: Denouncing vs. Advocating: In Defense of the Occasional Denouncement

    For the last decade, I've regularly seen complaints when we harder-core software freedom advocates spend some time criticizing proprietary software in addition to our normal work preserving, protecting and promoting software freedom. While I think entire campaigns focused on criticism are warranted in only extreme cases, I do believe that denouncement of certain threatening proprietary technologies is a necessary part of the software freedom movement, when done sparingly.

    Denouncements are, of course, negative, and in general, negative tactics are never as valuable as positive ones. Negative campaigns alienate some people, and it's always better to talk about the advantages of software freedom than focus on the negative of proprietary software.

    The place where negative campaigns that denounce are simply necessary, in my view, is when the practice either (a) will somehow completely impede the creation of FLOSS or (b) has become, or is becoming, widespread among people who are otherwise supportive of software freedom.

    I can think quickly of two historical examples of the first type: UCITA and DRM. UCITA was a State/Commonwealth-level law in the USA that was proposed to make local laws more consistent regarding software distribution. Because the implications were so bad for software freedom (details of which are beyond scope of this post but can be learned at the link), and because it was so unlikely that we could get the UCITA drafts changed, it was necessary to publicly denounce the law and hope that it didn't pass. (Fortunately, it only ever passed in my home state of Maryland and in Virginia. I am still, probably pointlessly, careful never to distribute software when I visit my hometown. :)

    DRM, for its part, posed an even greater threat to software freedom, because its widespread adoption would require proprietarization of all software that touched any television, movie, music, or book media. There was also a concerted, widespread pro-DRM campaign from USA corporations. Therefore, grassroots campaigns denouncing DRM are extremely necessary, even though they are primarily negative in operation.

    The second common need for denouncement arises when use of a proprietary software package has become acceptable in the software freedom community. The most common examples are usually specific proprietary software programs that have become (or seem about to become) an “all but standard” part of the toolset for Free Software developers and advocates.

    Historically, this category included Java, and that's why there were anti-Java campaigns in the Free Software community that ran concurrently with Free Software Java development efforts. The need for the former is now gone, of course, because the latter efforts were so successful and we have a fully FaiF Java system. Similarly, denouncement of Bitkeeper was historically necessary, but is also now moot because of the advent and widespread popularity of Mercurial, Git, and Bazaar.

    Today, there are still a few proprietary programs that quickly rose to the ranks of “must install on my GNU/Linux system” for all but the hardest-core Free Software advocates. The key examples are Adobe Flash and Skype. Indeed, much to my chagrin, nearly all of my co-workers at SFLC insist on using Adobe Flash, and nearly every Free Software developer I meet at conferences uses it too. And, despite excellent VoIP technology available as Free Software, Skype has sadly become widely used in our community as well.

    When a proprietary system becomes as pervasive in our community as these have (or looks like it might), it's absolutely time for denouncement. It's often very easy to forget that we're relying more and more heavily on proprietary software. When a proprietary system effectively becomes the “default” for use on software freedom systems, it means fewer people will be inspired to write a replacement. (BTW, contribute to Gnash!) It means that Free Software advocates will, in direct contradiction of their primary mission, start to advocate that users install that proprietary software, because it seems to make the FaiF platform “more useful”.

    Hopefully, by now, most of us in the software freedom community agree that proprietary software is a long term trap that we want to avoid. However, in the short term, there is always some new shiny thing. Something that appeals to our prurient desire for software that “does something cool”. Something that just seems so convenient that we convince ourselves we cannot live without it, so we install it. Over time, short term becomes the long term, and suddenly we have gaping holes in the Free Software infrastructure that only the very few notice because the rest just install the proprietary thing. For example, how many of us bother to install Linux Libre, even long enough to at least know which of our hardware components needs proprietary software? Even I have to admit I don't do this, and probably should.

    An old adage of software development is that software is always better if the developers of it actually have to use the thing from day to day. If we agree that our goal is ultimately convincing everyone to run only Free Software (and for that Free Software to fit their needs), then we have to trailblaze by avoiding running proprietary software ourselves. If you do run proprietary software, I hope you won't celebrate the fact or encourage others to do so. Skype is particularly insidious here, because it's a community application. Encouraging people to call you on Skype is the same as emailing someone a Microsoft Word document: it's encouraging someone to install a proprietary application just to work with you.

    Finally, I think the only answer to the FLOSS community celebrating the arrival of some new proprietary program for GNU/Linux is to denounce it, as a counterbalance to the fervor that such an announcement causes. My podcast co-host Karen often calls me the canary in the software coalmine because I am usually the first to notice something that is bad for the advancement of software freedom before anyone else does. In playing this role, I often end up denouncing a few things here and there, although I can still count on my two hands the times I've done so. I agree that advocacy should be the norm, but the occasional denouncement is also a necessary part of the picture.

    (Note: this blog post is part of an ongoing public discussion of a software program that is not too popular yet, but was heralded widely as a win for Free Software in the USA. I didn't mention it by name mainly because I don't want to give it more press than it's already gotten, as it is one of those programs that is becoming a standard GNU/Linux user application (at least in the USA), but hasn't yet risen to the level of ubiquity of the other examples I give above. Here's to hoping that it doesn't.)

    Posted on Sunday 11 October 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

July

  • 2009-07-29: Microsoft Releases GPL'd Software (Again): Does This Change Anything?

    Microsoft has received much undeserved press about their recent release of Linux drivers for their virtualization technology under GPLv2. I say “undeserved” because I don't particularly see why Microsoft should be lauded merely for doing something that is in their own interest that they've done before.

    Most people have forgotten that Microsoft once had a GPL-based product available for Windows NT. It was called Windows Services for UNIX, and, AFAICT, it remains available today (although perhaps they've transitioned in recent years to no longer include GPL'd software).

    This product was acquired by Microsoft when they purchased Softway Systems. The product was based on GCC, and included a variety of GNU system utilities ported to Windows. Microsoft was a compliant distributor of this software for years, right during the time when they were calling the GPL an unAmerican cancerous virus that eats up software like PacMan. The GPL is not a new license to Microsoft; they only pretend that it is to give bad press to the GPL or to give good press to themselves.

    Another thing that's not new to Microsoft is that they have no interest in contributing to Free Software unless it makes their proprietary software more desirable. In my old example above, they hoped to entice developers who preferred a Unix development environment to switch to Windows NT. In the recent Linux driver release, they seek to convince developers to switch from Xen and KVM to their proprietary virtualization technology.

    In fact, the only difference in this particular release is that, unlike in the case of Softway's software, Microsoft was apparently (according to Steve Hemminger) out of compliance briefly. According to Steve, Microsoft distributed binaries linked to various GPL parts.

    Meanwhile, Sam Ramji claimed that Microsoft was already planning to release the software before Hemminger and Greg K-H contacted them. I do believe Sam when he says that talk about releasing the source was already underway inside Microsoft before the Linux developers began their enforcement effort. However, that internal Microsoft talk doesn't mean that there wasn't a problem. As soon as one distributes the binaries of a GPL'd work, one must provide the source (or an offer therefor) alongside those binaries. Thus, if Microsoft released binaries and delayed in releasing source, there was a GPL violation.

    Like all GPL violations (and potential GPL violations), it's left to the copyright holders of the software to engage in enforcement. I think it's great that, according to Steve and related press coverage, the Linux developers used the most common enforcement strategy in the GPL community — quietly contact the company, inform them of their obligations, and help them in a friendly way into compliance. That process almost always works, and the fact that Microsoft came into compliance shows the value of our community's standard enforcement practice.

    Still, there is a more important item of note from a perspective of software freedom. This Linux driver — whether it is released properly under the GPL or kept proprietary in violation of the GPL — is designed to convince users to give up Free virtualization platforms like Xen and KVM and use Microsoft's virtualization technology instead. From that perspective, it matters little that it was released as Free Software: people should avoid the software and use platforms for virtualization that respect their freedom.

    Someday, perhaps, Microsoft will take a proper place among other large companies that actually contribute code that improves the general infrastructure of Free Software. Many companies give generally useful improvements back to Linux, GCC, and various other parts of the GNU/Linux system. Microsoft has never done this: they only contribute code when it improves Free Software interoperability with their proprietary technology. The day that Microsoft actually changes its attitude toward Free Software did not occur last week. Microsoft's old strategy stays the same: try to kill Free Software with patents, and in the meantime, convince as many Free Software users as possible to begin relying on Microsoft proprietary technology.

    Posted on Wednesday 29 July 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-07-17: Microsoft Patent Aggression Continues Against Free Software

    I think this news item from yesterday mostly speaks for itself, but I could not let the incident go by without blogging briefly about it.

    There has been so much talk in the last two weeks that Microsoft has changed with regard to its patent policy toward Free Software. We fool ourselves if we trust any of the window-dressing that Microsoft has put forward to convince us that we can trust them in this regard. Indeed, I spoke extensively about this in my interview on the Linux Outlaws show this week.

    What we see in this agreement between the Melco Group and Microsoft is another little above-water piece of the same patent-aggression iceberg that Microsoft has placed in our community's way. They continue to shake down companies that distribute GNU/Linux systems for patent royalties. As I've written about before, it's difficult to judge whether these deals are GPLv2-compliant, but they are almost certainly not GPLv3-compliant. If there were ever a moment for the community to scramble to GPLv3, this would be it, if for no other reason than to defend ourselves against the looming aggression.

    In the meantime, we'd be foolish to trust any sort of promises Microsoft has to make about their patents. Would they really make a reliable promise that would prevent their ongoing campaign of patent aggression against Free Software?

    Update: In related news, I was also glad to read FSF's new statement on the issue, which includes some of the same comments I made on Linux Outlaws Episode 102.

    Posted on Friday 17 July 2009 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

June

  • 2009-06-29: Considerations on Patents that Read on Language Infrastructure

    In an essay last Friday entitled Why free software shouldn't depend on Mono or C#, RMS argued a key point that I agree with: the software freedom community should minimize its use of programming language infrastructure that comes primarily from anti-software-freedom companies, notwithstanding FaiF (Free as in Freedom) implementations. I've been thinking about an extension of that argument: that language infrastructure created in a community process is likely more resilient against attacks from proprietary software companies.

    Specifically, I am considering the risk that a patent attack will occur against the language or its canonical implementation. We know that the USPTO appears to have no bounds in constantly granting so-called “software patents”, most of which are invalid even within the USPTO's own system; the rest may be like the RSA patent, forcing our community to invent around them or (as we had to do with RSA) “wait them out”. I'd like to consider how these known facts apply to the implementation of language infrastructure in the Free Software world.

    Programming languages and their associated standard libraries and implementations evolve in three basic ways:

    • A Free Software community designs and implements the language in a grassroots fashion. Perl, PHP, and Python are a few examples.
    • A single corporate entity controls the language and its canonical implementation. They perhaps also convince some standards body to adopt it, but usually retain complete control. C# and Java are a few examples.
    • A single corporate entity controlled the language initially, but more than 20 years have passed and the language now has many proprietary and Free Software implementations. C and C++ are a few examples.

    The patent issues in each of these situations deserve different consideration, primarily related to the dispersion of patents that likely read on the given language implementation. We have to assume that the USPTO has granted many patents that read on any software a person can conceivably write. The question is always: of all the things you can write, which has the most risk of patent attack from the patent holders in question?

    In the case of the community-designed and Free-Software-implemented languages, the patent risk is likely spread across many companies, and mitigated by the fact that few have probably filed patent applications designed specifically to read on the language and its implementation. Since various individuals and companies contributed to the development and design, and because it was a process run by the community, it's unlikely there was a master plan by one entity to apply specifically for patents on the language. So, while there are likely many patents that read on the implementation, a single holder is unlikely to hold all the patents, and those patents were probably not crafted for the specific language. Only some of these many patent-holding entities will have a desire to attack Free Software. It is therefore less likely that a user of the language will be sued; a patent troll would have to do some work to acquire the relevant patent. If that unlikely event nonetheless occurs, the fact that the patent was not specifically designed to read on the language implementation may indeed help, either by easing the process of “inventing around” or by making it more difficult for the patent troll to show the patent reads on the language implementation. Finally, if the implementation is under a license like the GPL or the Apache License (or any license with a patent grant), those companies that did contribute to the language implementation may have granted a patent license already.

    Of course, these are all relative arguments against the alternative: a language designed by a single company. If a single corporate entity designed and implemented the language more recently than 20 years ago, that company likely filed many yet-unexpired patents throughout the process of designing and implementing the language and its infrastructure. When the Free Software community implements fresh versions of the language from scratch, it's very likely that it will generate software that reads on those patents. Thus, the community must live in constant and direct fear of that company. We must assume the patents exist, and we know who holds them, and we know they filed them with this very language in mind. It may be tough to invent around them and still keep the Free Software implementation compatible. This is why I and other Free Software advocates have insisted for years that all companies who claim to support software freedom should grant GPL-compatible patent licenses for all their patents. (I still await Sam Ramji's response on my call for Microsoft to do so.)

    Without that explicit patent license, we certainly should prefer the community-driven and Free-Software-developed languages over those developed by companies (like Microsoft) that have a history of anti-Free Software practices. Regarding companies with a more ambiguous history toward Free Software, some might argue that patents consolidated in a “friendly” company are the safest of all alternatives. They might argue that with all those patents consolidated, patent trolls will have a tough time acquiring patents and attacking FaiF implementations. However, while this can sometimes be temporarily true, one cannot rely on this safety. Java, for example, is in a precarious situation now. Oracle is not a friend to Free Software, and soon will hold all of Sun's Java patents — a looming threat to FaiF Java implementations. While I think it's more likely that Microsoft will attack FaiF C# implementations with its patents eventually, an Oracle attack on FaiF Java is a possibility. (We should also not forget that Sun in the late 1990s was very opposed to Free Software implementations of Java; the corporate winds always change and we should not throw ourselves to them.)

    The last case in my list deserves at least a brief mention. Languages like C (which was a purely AT&T endeavor initially) have reached the age at which the early patents have expired, and such languages have slowly moved into community and standards-driven control. Thus, over long periods of time, history shows us that companies do loosen their iron grip of proprietary control of language implementations. However, during that first 20-year period, we should treat such languages with great trepidation and stick with languages developed by the Free Software community itself.

    Finally, I close with important advice: don't be paralyzed with fear over software patents. There are likely some USA patents that read on any software you write. Make good choices (like avoiding C#, as RMS suggests, and favoring languages like Perl, Python, PHP and C), and get on with your work. If, as a non-profit Free Software developer, someone writes you a threatening letter about patents or sues you for patent infringement, of course seek help from an attorney.

    Update: While my analysis was focused on the patent issues around languages, I couldn't resist this orthogonal topic posted by David Siegel with some very helpful suggestions to developers who wish to limit the use of C#. FLOSS is about using good software development to help solve legal, social and technological impediments to freedom. David is right on course with his suggestions.

    Posted on Monday 29 June 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-06-01: Response to NTEN's Holly Ross' Anti-Software-Freedom Remarks

    [ This post was not actually placed here until 2011-11-16, but I've put it in proper sequence with when the bulk of it was written. (Some of you may find it new in your RSS feeds as of 2011-11-16, however.) I originally posted it as a comment on an NTEN Blog post and my comment appeared on NTEN's website for some time, but NTEN deleted it from their site eventually. On 2011-11-16, I wanted to make reference to my original comment in this identica thread, and at that time discovered that my response had been deleted from NTEN's website and was no longer archived anywhere online, so I put it here. ]

    In May 2009, Holly Ross, NTEN's Executive Director, attacked software freedom, arguing that:

    Open Source is Dead. … The code was free, but we paid tens of thousands of dollars to get our implementation up and running. … I try to use solutions that reflect our values as an organization, but at the end of the day, I just need it to work. Community support can be great, but you're no less beholden to the whims of the community for support and updates than you are to any paid vendor.…

    open source code isn't necessarily any better than proprietary code. The costs, in time and money, are just placed elsewhere. It's a difference in how we budget for software more than anything else. So, the old arguments for open source software adoption are dead to me.…

    [Open Source and Free Software] is great to have as options. I just don't accept the argument that we have to support them simply because the code is available to everybody.

    — Holly Ross, 2009-05-28

    First of all, Holly completely confuses free as in freedom and free as in price even while she's attempting to indicate she understands that there are “values” involved. But more to the point, she shuns software freedom as a social justice cause. This led me to write the following response at the time, which NTEN ultimately deleted from their website:

    The software freedom movement started primarily as an effort for social justice for programmers and users. The goal is to avoid the helplessness and lock-in that proprietary software demands, and to treat users and developers equally in freedom.

    Perhaps there was a time (hopefully now long ago) when non-profits that focused on non-environmental issues would say things like "there's a place for non-recycled paper; it looks nicer and is cheaper". I doubt any non-profit would say that now to their colleagues in the environmental movement. Yet, it's common for non-profit leaders outside of the FLOSS world to say that the issue of software freedom is not relevant and that they need not consider the ethical and moral implications of software choices in the way that they do with their choices about what paper to buy.

    I'm curious, Holly, if you had said “recycled paper isn't necessarily better than virgin tree paper”, what reaction would you expect from the environmental non-profits? Indeed, would you think it's appropriate for a non-profit to refuse to recycle because their geographical area charges more for it? I guess you wouldn't think that's appropriate, and I am left wondering why you feel that your colleagues in the software freedom movement simply don't deserve the same respect as those in the environmental movement.

    I have hoped for a long time that this attitude would change, and I will continue to hope. I am sad to see that it hasn't changed yet, at least at NTEN.

    — Bradley M. Kuhn, 2009-06-01

    Note that Holly didn't even bother to respond to me. I am again left wondering: if someone from a respected environmental movement organization had pointed out one of her blog posts was anti-recycling, would she have bothered to respond?

    Posted on Monday 01 June 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

May

  • 2009-05-12: Support Your Friendly Neighborhood FLOSS Charities

    I don't think we talk enough in the FLOSS community about the importance of individual support of FLOSS-related charitable organizations. On a recent podcast episode, Karen and I discuss with Stormy Peters how important it is that geeks — who may well give lots of code to many FLOSS projects — also consider giving a little financial support to FLOSS organizations.

    Of course, it's essential that people give their time to the charities and the causes that they care about. In the FLOSS world, we typically do that by giving code or documentation to our favorite FLOSS project. I think that's led us all into the classic “I gave at the office” feeling. Indeed, I know that I too have fallen into this rut at times.

    I suppose I could easily claim that, more than most people, I've given enough at the office. Working at various non-profit organizations since the 1990s, I've always made substantially less in salary than I would in the for-profit industry for similar work. I also have always volunteered my time in addition to my weekly work schedule. For example, I currently get paid for my 40 hour/week job at the SFLC, but I also donate about 20 hours of work for the Software Freedom Conservancy each week.

    Still, I don't believe that this is enough. There are many, many FLOSS non-profits that deserve support — more than I have time to give. Meanwhile, very small amounts of money, aggregated over many people giving, make a world of difference in a number of ways to these organizations.

    Non-profits that are funded by a broad base of supporters are much more stable and have greater longevity than other types of non-profits that are funded primarily by corporate donations. This is because the disappearance of one donor, or even a few, is not a disaster. Also, through these donations, organizations build a constituency of supporters that truly represents the people the non-profit seeks to serve.

    Traditionally (with a few notable exceptions), non-profits in the FLOSS world have relied primarily on corporate donations. I generally think this is not ideal for a community that wishes to be fully represented by the non-profits that embody the projects we care about. We want these projects to represent the interest of developers and users, not necessarily the for-profit corporate interests. Plus, we want the organizations to survive even when companies stop supporting FLOSS or just simply go out of business.

    If we all contribute, it doesn't take that much for each individual to be a part of making a real difference. I believe that if each person who has benefited seriously from FLOSS gave $200/year, we'd make a substantial change and a wonderful positive impact on the non-profit organizations that shepherd and keep these FLOSS projects alive. I'm not suggesting giving to any specific organization: just take $200/year and divide it in the way you think best across 2-4 different FLOSS non-profits that sponsor projects you personally care about or benefit from.

    Think about it: $200/year breaks down to roughly $17/month. For me (and likely for most people in a major city), $17/month means one fewer dinner at a restaurant each month. Can't we all eat at home one more time per month, and share that savings to help FLOSS non-profits?

    If you are looking for a list of non-profits that could use your support, the FLOSS Foundations Directory is a good place to start. FWIW, in addition to my volunteer work with Conservancy, here's the list of non-profits that I'm supporting with a total of $200 this year (in alphabetical order): The Free Software Foundation, GNOME Foundation, The Parrot Foundation, and The Twisted Project. Which ones will you give to this year?

    Posted on Tuesday 12 May 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

April

  • 2009-04-24: Fork Well: It Could Be The Last, Best Hope for Community

    I have faced with much trepidation the news of Oracle's looming purchase of Sun. Oracle has never shown any interest in community development, particularly in the database area. They are the largest proprietary database vendor on the planet, and they probably have very simple plans for MySQL: kill it.

    That's why I read with relief this post by Monty (co-founder of the MySQL project) this week, wherein Monty plans to put his full force (and encourages others to put theirs) behind a MySQL “fork” that will be centered outside of Oracle.

    Monty is undoubtedly correct when he says “I don't think that anyone can own an open source project; the projects are defined by the de-facto project leaders and the developers that are working on the project”, and that “[w]ith Oracle now owning MySQL, I think that the need for an independent true Open Source entity for MySQL is even bigger than ever before.”

    I don't find the root of this problem in the fact that one company has sold itself to another, pursuant to the greater glory of the Ferengi Rules of Acquisition. Instead, I think the error is that projects inside Sun did not have a non-profit entity to shepherd them. When a single for-profit company controls a project's copyrights and its trademarks, and employs nearly all its core developers, there is a gross imbalance. The community around the project isn't healthy, and can easily be disrupted by the winds of corporate change, which blow in service of the only goal of for-profit existence: higher profits.

    I encourage Monty, as well as core developers of VirtualBox, OpenOffice, OpenSolaris, Sun's Java, and any other project that is currently under the full control of Sun (or indeed any other for-profit corporation) to think about this idea. Non-profits, particularly 501(c)(3)'s, are fundamentally different from for-profits. They exist to serve a community or a constituency and the public good, never profit. Therefore, the health of the codebase, the diversity of the developer and user community, and the advancement of software freedom can be the clear mission of a non-profit that houses a FLOSS project. A non-profit ensures that while corporate funding comes and goes, the mission of the project and its institutional embodiment stay stable. For example, just like shareholders have a duty to fire a CEO when he fails to make enough profit (i.e., the for-profit company is not reaching its maximal goal), boards of directors and/or memberships of non-profits must fire the President and/or Executive Director when they fail to serve the community well. Instead of the “profit motive”, 501(c)(3)'s have the “community motive”.

    Yet, the challenge of focusing on such goals remains difficult for projects that did not grow from a community in the first place. GNU and Linux were both started by individual developers who built strong communities before there was any for-profit corporate interest in the software. When a project starts inside a company with profit in mind, shoehorning community principles into it rarely succeeds. I believe that a community must usually evolve from the ashes of some incident that wakes everyone up to realize that the project will come to harm due to strict adherence to the profit motive.

    I should probably remind everyone that I'm not opposed to capitalism per se. Indeed, I've often fought on the other side of this equation when licenses (such as MySQL's own very early pre-GPL license) permit noncommercial use but prohibit commercial use. I believe that commercial and non-commercial activity with the code should be equally permitted in a non-discriminatory way. However, the center of gravity for developers, where the copyrights and trademarks live, and how core work on the codebase is funded are all orthogonal questions to the question of the software's license.

    My experience has anecdotally taught me that FLOSS communities function best when the following two things are true: (a) the codebase is held neutrally, either in the hands of the individual developers who wrote the code, or in a 501(c)(3) non-profit, and (b) not too many core developers share the same employer. I believe that reaching that state should be Job One of any for-profit seeking to build a FLOSS community. Sadly, this type of community health is often at direct odds with the traditional capitalist thinking of for-profit shareholders. I'm thus not surprised when FLOSS community managers in for-profit companies can only do so much. The rest is really up to the community of developers to fork and demand that a non-profit or other neutral and diverse developer-controlled management team exist. Attempts at this, sadly, fail much more often than they succeed.

    Monty's post likely had more hope in it than this one. Monty didn't jump to my conclusion that Oracle will kill MySQL; Monty considers it also possible that Oracle might sell MySQL or (and here's the possibility I really doubt) that Oracle will change into a community-driven FLOSS company. I love Monty's optimism in even considering this possible. I honestly hope my pragmatism about this is shown to be sheer pessimism. In the meantime, focusing on the MySQL forks and pressuring Oracle to engage the FLOSS community in a genuine way is the best strategy no matter what outcome you think is most likely.

    Update (on 17 May 2009): Monty announced an industry consortium that will seek to be a neutral space for MySQL development. I tend to prefer charitable non-profits to trade associations, but better the latter than hoping for Oracle to do the right thing.

    Posted on Friday 24 April 2009 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2009-04-16: TomTom/Microsoft: A Wake-Up Call for GPLv3 Migration

    There has been a lot of press coverage about the Microsoft/TomTom settlement. Unfortunately, so far, I have seen no one speak directly about the dangers that this deal could pose to software freedom, and what our community should consider in its wake. Karen and I discussed some of these details on our podcast, but I thought it would be useful to have a blog post about this issue as well.

    Most settlement agreements are sealed. This means that we won't ever actually know what TomTom agreed to and whether or not it violates GPLv2. The violation, if one exists, would likely be of GPLv2's § 7. The problem has always been that it's difficult to actually witness a v2§7 violation occurring (due in large part to less than perfect wording of that section). To find a violation of v2§7, you have to discover that there were conditions imposed on [TomTom] ... that contradict the conditions of [GPLv2]. So, we won't actually know if this agreement violates GPLv2 unless we read the agreement itself, or unless we observe some behavior by Microsoft or TomTom that shows the agreement must be in violation.

    To clarify the last statement, consider the hypothetical options. For TomTom to have agreed to something GPLv2-compliant with Microsoft, the agreement would have needed to either (a) not grant a patent license at all (perhaps, for example, Microsoft conceded in the sealed agreement that the patents aren't actually enforceable on the GPLv2'd components), or (b) give a patent license that was royalty-free and permitted all GPLv2-protected activities by all recipients of patent-practicing GPLv2'd code from TomTom, or downstream from TomTom.

    It's certainly possible Microsoft either capitulated regarding the unenforceability (or irrelevancy) of its patents on the GPLv2'd software in question, or granted some sort of license. We won't know directly without seeing the agreement, or by observing a later action by Microsoft. If, for example, Microsoft is later observed enforcing the FAT patent against a Linux distributor, one might successfully argue that the user must have the right to practice those Microsoft patents in the GPLv2 code, because otherwise, how was TomTom able to distribute under GPLv2? (Note, BTW, that any redistributor of Linux could make themselves downstream from TomTom, since TomTom distributes source on their website.) If no such permission existed, TomTom would then be caught in a violation — at least in my (perhaps minority) reading of GPLv2. [0]

    Many have argued that GPLv2 § 7 isn't worded well enough to verify this line of thinking. I and a few other key GPL thinkers disagree, mainly because this reading is clearly the intent of GPLv2 when you read the Preamble. But there are multiple interpretations of GPLv2's wording on this issue, and the wording was written before the drafters really knew exactly how patents would be used to hurt Free Software. We'll thus probably never really have complete certainty that such patent deals violate GPLv2.

    This TomTom/Microsoft deal (and indeed, probably dozens of others like it whose existence is not public, because lawsuits aren't involved) almost surely plays into this interpretation ambiguity. Microsoft likely convinced TomTom that the deal is GPLv2-compliant, and that's why there are so many statements in the press opining about its likely GPLv2 compliance. I, Jeremy Allison, and others might be in the minority in our belief of the strength of GPLv2 § 7, but no one can disagree with the intent of the section, as stated in the Preamble. Microsoft is manipulating the interpretation disagreements to convince smaller companies like Novell, TomTom, and probably others that these complicated patent licensing deals and/or covenants are GPLv2-compliant. Since most of them are about the kernel named Linux, and the Linux copyright holders are the only ones with power to enforce, Microsoft is winning on this front.

    Fortunately, the GPLv3 clarifies this issue, and improves the situation. Therefore, this is a great moment in our community to reflect on the importance of GPLv3 migration. The drafters of GPLv3, responding to the Microsoft/Novell deal, considered carefully how to address these sorts of agreements. Specifically, we have these two paragraphs in GPLv3:

    If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

    A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

    Were Linux under GPLv3 (but not GPLv2), these terms, particularly those in the second paragraph, would clearly and unequivocally prohibit TomTom from entering into any arrangement with Microsoft that doesn't grant a license to any Microsoft patent that reads on Linux. Indeed, even what has been publicly said about this agreement seems to indicate strongly that this deal would violate GPLv3. While the Novell/Microsoft deal was grandfathered in (via the date above), this new agreement is not. Yet, the most frustrating aspect of the press coverage of this deal is that few have taken the opportunity to advocate for GPLv3 adoption by more projects. I hope now that we're a few weeks out from the coverage, project leaders will begin again to consider adding this additional patent protection for their users and redistributors.

    Toward the goal of convincing GPLv2 users to switch to GPLv3, I should explain a bit why special patent licensing deals like this are bad for software freedom; it's not completely obvious. To do so, we can look specifically at what TomTom and Microsoft said in the press coverage of their deal: “The agreement protects TomTom's customers under the patents …”, the companies said (Microsoft, TomTom Settle Patent Dispute, Ina Fried).

    Thus, according to Microsoft and TomTom, the agreement gives some sort of “patent protection” to TomTom customers, and presumably no one else. This means that if someone buys a GNU/Linux-based TomTom product, they have greater protection from Microsoft's patents than if they don't. It creates two unequal classes of users: those who pay TomTom and those who don't. The ones who don't pay TomTom will have to worry if they will be the next ones sued or attacked in some other way by Microsoft over patent infringement.

    Creating haves and have-nots in the software licensing space is precisely what all versions of the GPL seek to prevent. This is why the Preamble of GPLv2 said: “any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary.”

    Further to this point, in the Rationale Document for the Third Discussion Draft of GPLv3, a similar argument is given in more detail:

    The basic harm that such an agreement can do is to make the free software subject to it effectively proprietary. This result occurs to the extent that users feel compelled, by the threat of the patent, to get their copies in this way. So far, the Microsoft/Novell deal does not seem to have had this result, or at least not very much: users do not seem to be choosing Novell for this reason. But we cannot take for granted that such threats will always fail to harm the community. We take the threat seriously, and we have decided to act to block such threats, and to reduce their potential to do harm. Such deals also offer patent holders a crack through which to split the community. Offering commercial users the chance to buy limited promises of patent safety in effect invites each of them to make a separate peace with patent aggressors, and abandon the rest of our community to its fate.

    It's true that one can blissfully use, redistribute, sell and modify some patent-covered software for years without ever facing a patent enforcement action. But, particularly in situations where known patents have been asserted, those without a patent license often live in fear of copying, modifying and sharing code that exercises the teachings of the patent. We saw this throughout the 1990s with RSA, and today most commonly with audio and video codecs. Microsoft and other anti-Free Software companies have enough patents to attack if we let them. The first steps in stopping it are (a) adopting GPLv3, LGPLv3 and AGPLv3 with the improved patent provisions, (b) condemning GPLv2-only deals that solve a patent problem for some users but leave the rest out in the cold, and (c) pointing out that the purported certainty that such deals are GPLv2-compliant is definitely in question.

    Patents always remain a serious threat, and, while the protection under GPLv2 has probably been underestimated, we cannot overestimate the additional protection that GPLv3 gives us in this regard. Microsoft clearly knows that the GPLv3 terms will kill its patent aggression business model, and has therefore focused its attacks on GPLv2-licensed code. Shouldn't we start to flank them by making less GPLv2 code available for these sorts of deals?

    Finally, I would like to draw specific attention to the fact that TomTom, as a company, is not necessarily an ally of software freedom. They are like most for-profit companies; they use FLOSS when it is convenient for them, and give back when the licenses obligate them to do so, or when it behooves them in some way. As a for-profit company, they made this deal to please their shareholders, not the Free Software community. Admittedly, their use of FLOSS in their products was done legitimately (that is, once their GPLv2 non-compliance was corrected by Harald Welte in 2004). However, I do not think we should look upon TomTom as a particularly helpful member of the community. Indeed, most of the patents that Microsoft asserted against TomTom were on their proprietary components, not their FLOSS ones. Thus, most of this dispute was a proprietary software company arguing with another proprietary software company over patents that read on proprietary software. Our community should tell TomTom that if they want to join and support the FLOSS world, they should release their software under a FLOSS license — including software that the licenses don't obligate them to release. Wouldn't it be quite interesting if TomTom's mapping display software were available under, say, GPLv3?

    (Added later): Even if TomTom fails to release their mapping applications as Free Software, our minimal demand should be a license to their patents for use in Free Software. Recall that TomTom countersued Microsoft, also alleging patent infringement on TomTom's patents. TomTom has yet to offer a public license on those patents for use by the Free Software community. If they are actually not hostile to software freedom, wouldn't they allow us to at least practice the teachings of their patents in GPL'd software?


    [0] Update: Andrew Tridgell pointed out that the verb tenses in my hypothetical example made the text sound more broadly worded than I intended. I've thus corrected the text in the hypothetical example to be clearer. Thanks for the clarification, Tridge!

    Posted on Thursday 16 April 2009 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2009-04-08: Neary on Copyright Assignment: Some Thoughts

    Dave Neary found me during breakfast at the Linux Collaboration Summit this morning and mentioned that he was being flamed for a blog post he made, Copyright assignment and other barriers to entry. Or, as some might title it in a Computer Science academic tradition: Copyright Assignment Considered Harmful. I took a look at Dave's post, and I definitely think it's worth reading and considering, regardless of whether you agree with it or flame it. For my part, I think I agree with most of his points.

    One of the distinctions that Dave is making that some might miss is the difference between non-profit, community-controlled copyright assignees and for-profit copyright assignees. He quotes Luis Villa to make the point that companies, ultimately, aren't the best destinations as a final home of FLOSS copyrights. If copyright assignment is viewed only through the lens of a for-profit corporate entity — with only the duty to its shareholders to determine its future — then indeed it's a dangerous situation for many of the reasons that Dave raises.

    I believe strongly that assigning copyright to a for-profit corporate entity is usually problematic. As Dave points out, corporations aren't really community members proper of a Free Software community; rather, their employees typically are. I have always felt that either copyrights should be assigned to a transparently-run non-profit 501(c)(3) entity, or they should be held by individual contributors. Indeed, the Samba project even has a policy to accept absolutely no corporate copyrights in their codebase, and I would love to see more projects adopt that policy.

    I trust 501(c)(3) non-profits more than for-profits, and not merely because I've spent most of my career in the former and have enjoyed that time more than my time at the latter. I trust non-profits more because their charters and founding documents require a duty to a public-benefiting mission and to a community. They are failing to act properly under their charters if they put the needs of a for-profit entity ahead of the needs of the community and the public. This is exactly the correct alignment of incentives for a consolidation of FLOSS copyrights.

    Some projects don't like centralized copyright for various reasons. While I do prefer it myself, I can understand this desire among individuals to each keep their stake of control in the project. Thus, I don't object to projects that want each individual contributor to have their own copyright. In this situation, the incentives are still properly aligned, because individuals who helped make the project happen have the legal control. While these individuals have no required commitment to the public good like a non-profit, they are members of a community and are much more likely to put the community needs above the profit motive that controls all for-profit entities.

    When Dave says copyright assignment might be harmful, he seems to talk primarily about for-profit corporate assignment. I agree with him on that point. However, when he suggests that assignment is unnecessary, I don't completely agree, although he ably raises the points I would raise about why it's important.

    However, in the middle of Dave's post is the bigger concern that deserves special mention. The important task is keeping a clear record of copyright provenance: where the work came from, and who might have a copyright claim. Copyright assignment is a short-hand way to do this in an organized and clear fashion. It's a simple solution with some overhead, and over the years projects have sometimes been annoyed with (and even ridiculed) that overhead. However, the more complex solutions have overhead, too. If you don't do assignment, you must keep careful track of every contributor, what their employer agreements say, and whether they have the right to submit patches under their own copyrights to the project. Some projects do this better than others.

    Regardless, all of this is hard work. For years, I've seen it as a personal task of mine to help develop systems and recommendations that make either process (assignment or good copyright record-keeping) less burdensome. I haven't worked on this task as much as I should have, but I have not forgotten that it needs attention. I envision integrated hooks and systems within revision control systems that help with this. I think we eventually need something that makes it trivial for hackers to implement and easy to maintain. I understand that the last thing any Free Software hacker wants to do is sit and contemplate the legal implications of contributions they've received. As such, all of us who follow this issue hope to make it easier for projects to do the work. In the meantime, I think discussion about this is good, and I'm thankful to Dave for raising the issue again.
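
    To make that record-keeping idea concrete, here is one minimal sketch of the kind of hook I have in mind. It is only an illustration under my own assumptions: the CONTRIBUTORS.txt file and its one-email-per-line format are hypothetical, not an existing tool or standard, and a real system would track much more (employer agreements, assignment status, and so on). It simply flags commits in a git repository whose author has no recorded copyright status.

        # Hypothetical provenance check: list commit authors with no entry in
        # a (made-up) CONTRIBUTORS.txt file, one email address per line,
        # recording whose copyright/assignment paperwork is on file.
        # Assumes git is installed and this runs inside a repository.
        import subprocess

        def known_contributors(path="CONTRIBUTORS.txt"):
            with open(path) as f:
                return {line.strip() for line in f
                        if line.strip() and not line.startswith("#")}

        def commit_authors():
            # %ae prints each commit's author email, one per line.
            out = subprocess.run(
                ["git", "log", "--format=%ae"],
                capture_output=True, text=True, check=True,
            ).stdout
            return set(out.splitlines())

        for email in sorted(commit_authors() - known_contributors()):
            print("no recorded copyright status for:", email)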

    Posted on Wednesday 08 April 2009 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

March

January

  • 2009-01-27: Welcome (Finally!) to the GCC Runtime Library Exception

    For the past sixteen months, I have participated in a bit of a “mini-GPLv3 process” among folks at the FSF, SFLC, the GNU Compiler Collection Steering Committee (GCC SC), and the GCC community at large. We've been drafting an important GPLv3 license exception (based on a concept by David Edelsohn and Eben Moglen that they invented even before the GPLv3 process itself started). Today, that GCC Runtime Library Exception for GPLv3 went into production.

    I keep incessant track of my hours spent on various projects, so I have hard numbers that show I personally spent 188 hours — a full month of 40-hour weeks — on this project. I'm sure my colleagues have spent similar amounts, too. I am proud of this time, and I think it was absolutely worthwhile. I hope the discussion gives you a flavor of why FLOSS license exception drafting is both incredibly important and difficult to get right without the greatest of care and attention to detail.

    Why GPL Exceptions Exist

    Before I jump into discussion of this GCC Runtime Library exception, some background is needed. Exceptions have been a mainstay of copyleft licensing since the inception of the GNU project, and once you've seen many examples over many years, they become a standard part of FLOSS licensing. However, for the casual FLOSS developer who doesn't wish to be a licensing wonk (down this path lies madness, my friends, run screaming with your head covered!), exceptions are a rare discovery in a random source file or two, and they do not command great attention. An understandable reaction, but from a policy perspective, they are an essential part of the copyleft system.

    From the earliest days of the copyleft, it was understood that copyleft was merely a strategy to reach the goal of software freedom. The GPL is a tool that implements this strategy, but like any tool, it doesn't fit every job.

    In some sense, the LGPL was the earliest and certainly the most widely known “GPL exception”. (Indeed, my friend Richard Fontana came up with the idea to literally make LGPL an exception to GPLv3, although in the v2 world, LGPLv2 was a fully separate license from GPLv2.) Discussions on why the LGPL exists are beyond the scope of this blog post (although I've written about them before). Generally speaking, though, LGPL is designed to be a tool when you don't want the full force of copyleft for all derivative works. Namely, you want to permit the creation of some proprietary (or partly proprietary) derivative works because allowing those derivations makes strategic sense in pursuing the goal of software freedom.

    Aside from the LGPL, the most common GPL exceptions are usually what we generally categorize as “linking exceptions”. They allow the modifier to take some GPL'd object code and combine it in some way with some proprietary code during the compilation process. The simplest of these exceptions is found when you, for example, write a GPL'd program in a language with only a proprietary implementation (e.g., VisualBasic) and you want to allow the code to combine with the VisualBasic runtime libraries. You use your exclusive right as copyright holder on the new program to grant downstream users, redistributors and modifiers the right to combine with those proprietary libraries without having those libraries subject to copyleft.

    In essence, copyleft exceptions are the scalpels of copyleft. They allow you to create very carefully constructed carve-outs of permission when pure copyleft is too blunt an instrument to advance the goal of software freedom. Many software freedom policy questions require this fine cutting work to reach the right outcome.

    The GCC Exception

    The GCC Exception (well, exceptions, really) has always been a particularly interesting and complex use of a copyleft exception. Initially, they were pragmatically needed to handle a technological reality about compilers that interacts in a strange way with copyright derivative works doctrine. Specifically, when you compile a program with gcc, parts of GCC itself, called the runtime library (and before that, crt0), are combined directly with your program in the output binary. The binary, therefore, is both a derivative work of your source code and a derivative work of the runtime library. If GCC were pure GPL, every binary compiled with GCC would need to be licensed under the terms of the GPL.
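
    You can see this combination for yourself. Here is a minimal sketch of my own (assuming a Unix-like system with gcc on the PATH; the exact file names vary by toolchain): it compiles an empty main() and greps gcc's verbose output for the runtime pieces (the crt*.o startup objects and -lgcc) that get linked into the binary even though the source code references none of them.

        # Compile a trivial program and show the GCC runtime bits that the
        # link step pulls in anyway (crt*.o, -lgcc); gcc -v prints the
        # actual link command on stderr.
        import os
        import subprocess
        import tempfile

        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "empty.c")
            with open(src, "w") as f:
                f.write("int main(void) { return 0; }\n")

            log = subprocess.run(
                ["gcc", "-v", "-o", os.path.join(tmp, "empty"), src],
                capture_output=True, text=True,
            ).stderr

            for line in log.splitlines():
                if "crt" in line or "lgcc" in line:
                    print(line)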

    Of course, when RMS was writing the first GCC, he immediately realized this licensing implication and created an exception to avoid it. Versions of that exception have been around, and been improved, since the late 1980s. The task that our team faced in late 2007 was to update that exception, both to adapt it to the excellent new GPLv3 exceptions infrastructure (as Fontana did for LGPLv3), and to handle a new policy question that has been kicking around the GCC world since 2002.

    The Plugin Concern

    For years, compiler experimentalists and researchers have been frustrated by GCC. It's very difficult to add a new optimization to GCC because you need quite a deep understanding of the codebase to implement one. Indeed, I tried myself, as a graduate student in programming languages in the mid-1990s, to learn enough about GCC to do this, but gave up when a few days of study got me nowhere. Advancement of compiler technology can only happen when optimization experimentation can happen easily.

    To make it easy to try new optimizations out, GCC needs a plugin architecture. However, the GCC community has resisted this because of the software freedom implications of such an architecture: if plugins are easy to write, then it will be easy to write out to disk a version of GCC's internal program representation (sometimes called the intermediate representation, or IR). Then, proprietary programs could be used to analyze and optimize this IR, and a plugin could be used to read the file back into GCC.

    From a licensing perspective, such an optimizing proprietary program will usually not be a derivative work of GCC; it merely reads and writes some file format. It's analogous to OpenOffice reading and writing Microsoft Word files, which doesn't make it a derivative of Word by any means! The only parts that are covered by GPL are the actual plugins to GCC to read and write the format, just as OpenOffice's Word reader and writer are Free Software, but Microsoft Word is not.

    This licensing implication would be a disaster for the GCC community. It would mean the advent of “mixed” compilation processes, part FaiF and part proprietary. The best, most difficult and most interesting parts of that compilation process — the optimizations — could be fully proprietary!

    This outcome is unacceptable from a software freedom policy perspective, but difficult to handle in licensing. Eben Moglen, David Edelsohn, and a few others, however, came up with an innovative idea: since all binaries are derivative of GCC anyway, set up the exception so that proprietary binary output from GCC is permitted only when the entire compilation process involves Free Software. In other words, you can do these proprietary optimization plugins all you want, but if you do, you'll not be able to compile anything but GPL'd software with them!

    The Drafting and the Outcome

    As every developer knows, the path from “innovative idea” to “working implementation” is a long road. It's just as true with licensing policy as it is with code. Those 188 hours that I've spent, along with even more hours spent by a cast of dozens, have been spent making a license exception that implements that idea accurately without messing up the GCC community or its licensing structure.

    With jubilation today, I link to the announcement from the FSF, the FAQ and Rationale for the exception and the final text of the exception itself. This sixteen-month long cooperation between the FSF, the SFLC, the GCC SC, and the GCC community has produced some fine licensing policy that will serve our community well for years to come. I am honored to have been a part of it, and a bit relieved that it is complete.

    Posted on Tuesday 27 January 2009 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2009-01-15: Launchpad's License Will Be AGPLv3

    Last week, I asked Karl Fogel, Canonical's newly hired Launchpad Ombudsman, if Launchpad will use the AGPLv3. His eyes said “yes” but his words were something like: Canonical hasn't announced the license choice yet. I was excited to learn this morning from him that Launchpad's license will be AGPLv3.

    This is exciting news. Launchpad is precisely the type of application that we designed the AGPLv3 for, and Launchpad is rapidly becoming a standard in the next generation of Free Software project hosting. Over the last year, I've felt much trepidation that Launchpad would be “another SourceForge”: that great irony of a proprietary platform becoming the canonical method for Free Software project hosting. It seems now the canonical and the Canonical method for hosting will be Launchpad, and it will respect the freedom of network users of the service.

    Given that they'd already announced plans to liberate Launchpad, it's not really surprising that Canonical has selected the AGPLv3. I would guess their primary worry about releasing the source was that competitors would sprout up and fail to share their improvements back with the community of users. AGPLv3 is specifically designed for this situation.

    I'm glad we've made a license that is getting adoption by top-tier Free Software projects like this one. Critics keep saying that AGPLv3 is a marginal license of limited interest. I hope this license choice by Canonical will show them again that they continue to be mistaken.

    Thanks to Karl, Matthew Revell, Mark Shuttleworth himself, and all the others at Canonical who are helping make this happen.

    Posted on Thursday 15 January 2009 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2009-01-14: LGPL'ing of Qt Will Encourage More Software Freedom

    The decision between the GPL and LGPL for a library is a complex one, particularly when that library solves a new problem or an old problem in a new way. Trolltech faced this decision for the Qt library, and Nokia (who acquired Trolltech last year) has now reconsidered the question and come to a different conclusion. Having followed this situation since even before Qt was GPL'd, I was glad that we have successfully encouraged the reconsideration of this decision.

    Years ago, RMS wrote what many consider the definitive essay on this subject, entitled Why you shouldn't use the Lesser GPL for your next library. A few times a year, I find myself rereading that essay because I believe it puts forward some good points to think about when making this decision.

    Nevertheless, there is a strong case for the LGPL in many situations. Sometimes, pure copyleft negatively impacts the goal of maximal software freedom. The canonical example, of course, is the GNU C Library (which was probably the first program ever LGPL'd).

    Glibc was LGPL'd, in part, because it was unlikely at the time that anyone would adopt a fully FaiF (Free as in Freedom) operating system that didn't allow any proprietary applications. Almost every program on a Unix-like system combines with the C library, and if it were GPL'd, all applications would be covered by the GPL. Users of the system would have freedom, but encouraging them to switch would be painful because they'd have to give up all proprietary software all at once.

    The GNU authors knew that there would be proprietary software for quite some time, as our community slowly replaced each application with freedom-respecting implementations. In the meantime, better that proprietary software users have a FaiF C library and a FaiF operating system to use (even with proprietary applications) while work continued.

    We now face a similar situation in the mobile device space. Most mobile devices used today are locked down, top to bottom. It makes sense to implement the approach we know works from our two decades of experience — liberate the operating system first and the applications will slowly follow.

    This argument informs the decision about Qt's licensing. Qt and its derivatives are widely used as graphics toolkits in mobile devices. Until now, Qt was licensed under the GPL (and before that, various semi-Free licenses). Not only did the GPL create a “best is the enemy of the good” situation, but those companies that rejected the GPL could simply license a proprietary copy from Trolltech, which further ghettoized the GPL'd versions. All that is now changing.

    Beyond encouraging FaiF mobile operating systems, this change to LGPL yields an important side benefit. While the proprietary relicensing business is a common and legitimate business model to fund further development, it also has some negative social side effects. The codebase often lives in a silo, discouraging contributions from those who don't receive funding from the company that controls the canonical upstream.

    A change to LGPL sends a loud and clear message — the proprietary relicensing business for Qt is over. Developers who have previously rejected Qt because it was not community-developed might want to reconsider that position in light of this news. We don't know yet how the new Qt community will be structured, but it's now clear that Nokia, Qt's new copyright holder, no longer has a vested interest in proprietary relicensing. The opportunity for a true software freedom community around Qt's code base has maximum potential at this moment. A GUI programmer I am not, but I hope those who are will take a look and see how to create the software freedom development community that Qt needs.

    Posted on Wednesday 14 January 2009 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

2008

December

  • 2008-12-24: It's a Wonderful FLOSS!

    I suppose it's time for me to confess. For a regular humbug who was actually memory-leak-hunting libxml2 at the office until 21:30 on December 24th, I'm still quite a sucker for Frank Capra movies. Most people haven't seen any of them except It's a Wonderful Life. Like a lot of people, I see that film annually one way or the other, too.

    Fifteen years ago, I wrote a college paper on Capra's vision and worldview; it's not surprising someone who has devoted his life to Free Software might find resonance in it. Capra's core theme is simple (some even call it simplistic): An honest, hard-working idealist will always overcome if he never loses sight of community and simply refuses any temptation of corruption.

    I don't miss the opportunity to watch It's a Wonderful Life when it inevitably airs each year. (Meet John Doe sometimes can be found as well around this time of year — catch that one too if you can.) I usually perceive something new in each viewing.

    (There are It's a Wonderful Life spoilers below here; if you actually haven't seen it, stop here.)

    This year, what jumped out at me was the second of the three key speeches that George Bailey gives in the film. This occurs during the bank run, when Building and Loan investors are going to give up on the organization and sell their shares immediately at half their worth. I quote the speech in its entirety:

    You're thinking of this place all wrong. As if I had the money back in a safe. The money's not here. Your money's in Joe's house; that's right next to yours. And in the Kennedy house, and Mrs. Macklin's house, and a hundred others. Why, you're lending them the money to build, and then, they're going to pay it back to you as best they can. Now what are you going to do? Foreclose on them?

    [Shareholders decide to go to Potter and sell. Bailey stops the mob.]

    Now wait; now listen. Now listen to me. I beg of you not to do this thing. If Potter gets hold of this Building and Loan there'll never be another decent house built in this town. He's already got charge of the bank. He's got the bus line. He got the department stores. And now he's after us. Why?

    Well, it's very simple. Because we're cutting in on his business, that's why, and because he wants to keep you living in his slums and paying the kind of rent he decides. Joe, you had one of those Potter houses, didn't you? Well, have you forgotten? Have you forgotten what he charged you for that broken-down shack?

    Ed, you know! You remember last year when things weren't going so well, and you couldn't make your payments? You didn't lose your house, did you? Do you think Potter would have let you keep it?

    Can't you understand what's happening here? Don't you see what's happening? Potter isn't selling. Potter's buying! And why? Because we're panicking and he's not. That's why. He's picking up some bargains. Now, we can get through this thing all right. We've got to stick together, though. We've got to have faith in each other.

    Perhaps this quote jumped out at me because of all the bank run jokes made this year. However, that wasn't the first thing that came to mind. Instead, I thought immediately of Microsoft's presence at OSCON this year and the launch of their campaign to pretend they haven't spent the last ten years trying to destroy all of Free Software and Open Source.

    In the film, Potter eventually convinces George to come by his office for a meeting, offers him some fine cigars, and tells him that George's ship has come in because Potter is ready to give him a high-paying job. George worries that the Building and Loan will fail if he takes the job. Potter's (non)response is: “Confounded, man, are you afraid of success!?”

    It's going to get more tempting to make deals with Microsoft. We're going to feel like their sudden (seemingly) positive interest in us — like Potter's sudden interest in George — is something to make us proud. It is, actually, but not for the obvious reason. We're finally a viable threat to the future of proprietary software. They've reached the stage where they know they can't kill us. They are going to try to buy us, try to corrupt us, try to do anything they can to convince us to give up our principles just to make our software a little better or a little more successful. But we can do those things anyway, on our own, in the fullness of time.

    Never forget why they are making the offer. Microsoft is unique among proprietary software companies: they are the only ones who have actively tried to kill Open Source and Free Software. It's not often someone wants to be your friend after trying to kill you for ten years, and such a change is cause for suspicion. George was smart enough to see this and storm out of Potter's office, saying: “You sit around here and spin your little webs and think the whole world revolves around you and your money! Well, it doesn't, Mr. Potter!” To Microsoft, I'd say: and that goes for you, too!

    Posted on Wednesday 24 December 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-12-09: One gpg --gen-key per Decade

    Today is an interesting anniversary (of sorts) for my cryptographic infrastructure. Nine years ago today, I generated the 1024 bit DSA key, DB41B387, that has been my GPG key every day since then. I remember distinctly that on the 350 MHz machine I used at the time, it took quite a while to generate, even though I made sure the entropy pool remained nice and full by pounding on the keyboard.

    The horribleness of the recent Debian vulnerability meant that I have spent much time this year pondering the pedigree of my personal cryptographic infrastructure. Of course, my key was far too old to have been generated on a Debian-based system that had that particular vulnerability. However, the issue that really troubled me this past summer was this:

    Some DSA keys may be compromised by only their use. A strong key (i.e., generated with a ‘good’ OpenSSL) but used locally on a machine with a ‘bad’ OpenSSL must be considered to be compromised. This is due to an ‘attack’ on DSA that allows the secret key to be found if the nonce used in the signature is reused or known.

    Not being particularly hard core in cryptographic knowledge — most of my expertise comes from only one class I took 11 years ago on Encryption, Compression, and Secure Hashing in graduate school — I found this alarming and tried my best to do some ancillary reading. It seems that DSA keys are, in many ways, less than optimal. Skimming academic papers suggests (to my mostly uneducated eye) that DSA keys are tougher to deploy correctly and keep secure, which leads to these sorts of possible problems.

    I've resolved to switch entirely to RSA keys. The great thing about RSA is its simplicity and ease of understanding. I grok factoring and understand better the complexity situation of the factoring problem (this time, from the two graduate courses I took on Complexity Theory, so my comfort is more solid :). I also find it intriguing that a child can learn how to factor in grade school, yet we can't teach a computer to do it efficiently. (By contrast, I didn't learn the discrete logarithm problem until my Freshman year of college, and I still have to look up the details to remind myself.) So, the “simplicity brings clarity” idea hints that RSA is a better choice.

    Fact is, there was only one reason why I revoked my ancient RSA keys and generated DSA ones in the 1990s. The RSA patent and the strict licensing of that patent by RSA Data Security, Inc. made it impossible to implement RSA in Free Software back then. So, when I switched from proprietary PGP to GPG, my keys wouldn't import. Indeed, that one RSA patent alone set back the entire area of Free Software cryptography at least ten years.

    So, when I decided this evening that I'd need to generate a new key and begin promulgating it at key-signing parties sometime before DB41B387 turns ten, I realized I actually have the freedom to choose my encryption algorithm now! Sadly, it took almost these entire nine years to get there. Our community not only had to wait out this unassailable patent (RSA is among the most novel and non-obvious ideas that most computer professionals will ever see in their lives); once the RSA patent finally expired0, we then had to slowly but surely implement and deploy it in cryptographic programs, from scratch.

    I'm still glad that we're free of the RSA patent, but I fear that, among the mountain of “software patents” granted each year, the “new RSA” — a perfectly valid, non-obvious and novel patent that reads on software and fits both the industry's and patent examiners' definition of “high quality” — is waiting to be discovered and used as a weapon to halt Free Software again. When I finally type gpg --gen-key (now with --expert mode!) for the first time in nine years, I hope I'll only experience the gladness of being able to generate an RSA key, and succeed in ignoring the fact that RMS' old essay about this issue remains a cautionary tale to this very day. Software patents are a serious long-term threat and must be eradicated entirely for the sake of software freedom. The biggest threat among them will always be the “valid”, “high quality” software patents, not the invalid, poor quality ones.
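
    For the curious, the moment itself will look something like this (a rough sketch; the keyserver and key ID below are placeholders, and the interactive menus vary by GnuPG version):

    # Interactively generate a new key; with --expert, GnuPG offers RSA
    # choices beyond the default DSA/Elgamal pairing.
    gpg --expert --gen-key

    # Then publish the new public key so others can sign it at
    # key-signing parties.
    gpg --keyserver pgp.mit.edu --send-keys NEWKEYID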


    0 Technically speaking, the RSA patent didn't need to expire. In a seemingly bizarre move, RSA Data Security, Inc. granted a Free license to the patent a few weeks before the actual expiration date. To this day, I believe the same theory I espoused at the time: their primary goal in doing this was merely to ruin all the “RSA is Free” parties that had been planned.

    Posted on Tuesday 09 December 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-12-04: The FLOSS License Drafter's Responsibility to the Community

    I finally set aside some time to read my old boss' open letter responding to criticisms of the FDL process. I gladly read his discussion of the responsibilities of software freedom license stewardship.

    I've been involved with the drafting of a number of FLOSS licenses (and exceptions to existing licenses). For example, I helped RMS a little with the initial FDL 1.0 drafting (the license at issue here); I was a catalyst for the creation of Artistic 2.0 and advised that process; and, I was heavily involved with the creation of the AGPL, and somewhat with the GPLv3. From these experiences, I know that, just like a core developer who gets annoyed when kibitzed by a user who just downloaded the program and is missing something obvious, we license drafters are human and often have the “did this person even read all the stuff we've written on this issue?” knee-jerk response to criticism. However, we all try to put that aside, and be ready to respond to and take seriously any reasonable criticism. I am glad that RMS has done so here. The entity that controls future versions of a license for which authors often use an “or later” term holds great power. As the clichéd Spiderman saying goes, with great power comes great responsibility.

    The FSF as a whole, and RMS in particular, have always known this well and taken it very seriously. Indeed, years ago, when I was still at FSF, RMS and I wrote an essay together on a closely related issue. This recent response on the FDL reiterates some of those points, but with a real-world example explaining the decision making process regarding the reasonable exercise of that power to, in turn, grant rights and freedoms rather than take them away.

    The key quote from his letter that stands out to me is: our commitment is that our changes to a license will stick to the spirit of that license, and will uphold the purposes for which we wrote it. This point is fundamental. As FLOSS license drafters, we must always, as RMS says, abide by the highest ethical standards to uphold the spirit that spurred the creation of these licenses.

    Far from being annoyed, I'm grateful for those who assume the worst of intentions and demand that we justify ourselves. For my part, I try to answer every question I get at conferences and in email about licensing policy as best I can with this point in mind. We in the non-profit licensing sector of the FLOSS world have a duty to the community of FLOSS users and programmers to defend their software freedom. I try to make every decision, on licensing policy (or, indeed, any issue) with that goal in mind. I know that my colleagues at the FSF and at the many other not-for-profit organizations always do the same, too.

    Posted on Thursday 04 December 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-12-01: AGPL Declared DFSG-Free

    Crossposted with autonomo.us.

    Late last week, the FTP Masters of Debian — who, absent a vote of the Debian developers, make all licensing decisions — posted their ruling that AGPLv3 is DFSG-Free. I was glad to see this issue was finally resolved after months of confusion; the AGPLv3 is now approved by all known FLOSS licensing ruling bodies (FSF, OSI, and Debian).

    It was somewhat fitting that the AGPLv3 was approved by Debian within a week of the one year anniversary of AGPLv3's release. This year of AGPLv3 has shown very rapid adoption of the AGPL. Even conservative numbers show an adoption rate of 15 projects per month. I expect the numbers to continue a steady, linear climb as developers begin to realize that the AGPL is the “copyleft of the Cloud”.

    Posted on Monday 01 December 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

November

  • 2008-11-20: podjango: A Minimalist Django Application for Podcast Publishing

    I had yet to mention in my blog that I now co-host a podcast at SFLC. As we launched the podcast last week, I found myself in the classic hacker situation of one project demanding that I write code for a tangentially related project.

    Specifically, we needed a way to easily publish show notes and otherwise make available the podcast on the website and in RSS feeds. Fortunately, we already had a few applications we'd written using Django. I looked briefly at django podcast, but the interface was a bit complicated, and I didn't like its (over)use of templates to do most of the RSS feeding.

    The small blogging application we'd hacked up for this blog was so close to what we needed, that I simply decided to fork it and make it into a small podcast publisher. It worked out well, and I've now launched a Free Software project called podjango under the AGPLv3.

    Most of the existing code will be quite obvious to any Django hacker. The only interesting thing to note is that I put some serious effort into the RSS feeds. First, I heavily fleshed out the minimal example for an iTunesFeed generator in the Django documentation. It's currently a bit specific to this podcast, but should be easily abstracted. I did a good amount of research on the needed fields for the iTunes RSS and Media RSS and what should be in them. (Those feedforall.com tutorials appear to be the best I could find on this.)
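
    Here's a minimal sketch of that approach, assuming Django's standard feedgenerator API (the class is modeled on the documentation's example; the field values are illustrative, not podjango's actual code):

    from django.utils import feedgenerator

    class iTunesFeed(feedgenerator.Rss201rev2Feed):
        """RSS 2.0 feed with the iTunes podcast extensions added."""

        def rss_attributes(self):
            attrs = super(iTunesFeed, self).rss_attributes()
            # Declare the iTunes namespace on the root <rss> element.
            attrs['xmlns:itunes'] = 'http://www.itunes.com/dtds/podcast-1.0.dtd'
            return attrs

        def add_root_elements(self, handler):
            super(iTunesFeed, self).add_root_elements(handler)
            # Channel-level iTunes fields; these values are placeholders.
            handler.addQuickElement('itunes:author', 'Example Podcast')
            handler.addQuickElement('itunes:explicit', 'no')

        def add_item_elements(self, handler, item):
            super(iTunesFeed, self).add_item_elements(handler, item)
            # Assumes 'duration' was passed as an extra keyword argument
            # to the feed's add_item() call for each episode.
            if item.get('duration'):
                handler.addQuickElement('itunes:duration', item['duration'])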

    Second, I did about six hours of work to build what I called SFLC's omnibus RSS feed. The most effort went into building an RSS feed that includes disparate Django application components, but this thread on query set manipulation from django-users referenced from Michael Angela's blog was very helpful. I was glad, actually, that the ultimate solution centered around complicated features of Python. Being an old-school Perl hacker, I love when the solution is obvious once you learn a feature of the language that you didn't know before. (Is that the definition of programming language snobbery? ;)
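
    The core trick is small enough to sketch. Assuming two hypothetical models from separate Django applications (these names stand in for the real SFLC apps), the disparate components can be merged into one date-sorted item list:

    from itertools import chain

    # Hypothetical models from two separate Django applications.
    from blog.models import BlogEntry
    from podcast.models import Episode

    def latest_omnibus_items(count=20):
        # QuerySets over different models cannot be combined directly,
        # so take the newest few of each, chain them together, and
        # sort the merged list in plain Python.
        entries = BlogEntry.objects.order_by('-pub_date')[:count]
        episodes = Episode.objects.order_by('-pub_date')[:count]
        return sorted(chain(entries, episodes),
                      key=lambda item: item.pub_date,
                      reverse=True)[:count]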

    It also turns out that Fabian Scherschel (aka fabsh) had started working on a Django podcast application too, and he's going to merge his efforts into podjango. I preemptively apologize publicly, BTW, that I didn't reach out to the django-podcast guys before starting a new project. However, I'm sure fabsh and I both would be happy to cooperate with them if they want to try to merge the codebases (although I don't want to use a non-Free software platform like Google Code to host any project I work on ;). I really think RSS feeds should be implemented using generators in Python code rather than in templates, though, and I think the user interface should be abstracted away from as many of the details of the DTD fields as possible. Thus, it may turn out that we and django-podcast have incompatible design goals.

    Anyway, I hope the code we've released is useful, and I'm glad for Fabian to take over as project lead. I need to move onto other projects, and hope that others will be interested in generalizing and improving the code under Fab's leadership. I'm happy to help it along.

    Posted on Thursday 20 November 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-11-13: GPLv3/AGPLv3 Adoption: If It Happened Too Fast, I'd Be Worried

    Since the release of GPLv3, technology pundits have been opining about how adoption is unlikely, usually citing Linux's still-GPLv2 status as (often their only) example. Even though I'm a pro-GPLv3 (and, specifically, pro-AGPLv3) advocate, I have never been troubled by slow adoption, as long as it remained on a linear upswing from release day onward (which it has).

    Only expecting linear growth is a simple proposition, really. Free, Libre and Open Source Software (FLOSS) projects do not always have the most perfectly organized of copyright inventories, nor is the licensing policy of the project the daily, primary focus of the developers. Indeed, most developers have traditionally seen a licensing decision as something you think about once and never revisit!

    In some cases, such as with many of the packages in FSF's GNU project, there is a single entity copyright holder with a policy agenda, and such organizations can (and did) immediately relicense large codebases under GPLv3. However, in most projects, individual contributors keep their own copyrights, and the relicensing takes time and discussion, which must compete with the daily work of making better code.

    Relicensing from GPLv2-or-later

    GPLv2-or-later packages can be relicensed to GPLv3-or-later, or GPLv3-only, basically instantaneously. However, wholesale relicensing by a project leader would be downright rude. We're a consensus-driven community, and any project leader worth her title would not unilaterally relicense without listening to the community. In fact, it's somewhat unlikely a project leader would relicense any existing GPLv2-or-later copyrights under GPLv3-only (or GPLv3-or-later, for that matter) without the consent of the contributor who holds those copyrights. Even though that consent isn't needed, getting it anyway is a nice, consensus-building thing to do.

    In fact, I think most projects prefer to slowly change the license in various subparts of the work, as those parts are changed and improved. That approach avoids a “bombing run” patch that changes all the notices across the project at once, and also reflects reality a bit better0.
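
    Concretely, migrating a given file usually just means updating its license notice. The standard “or later” notice, quoted here from GPLv3's own “How to Apply These Terms” appendix, reads:

        This program is free software: you can redistribute it and/or modify
        it under the terms of the GNU General Public License as published by
        the Free Software Foundation, either version 3 of the License, or
        (at your option) any later version.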

    Of course, once you change one copyrightable part of a larger work to GPLv3-or-later, the effective license of the whole work is GPLv3-or-later, even if some parts could be extracted and distributed under GPLv2-or-later. So, in essence, GPLv2-or-later projects that have started taking patches licensed under GPLv3-or-later have effectively migrated to GPLv31. This fact alone, BTW, is why I believe strongly that GPLv3 adoption statistics sites (like Palamida's) have counts that underestimate adoption. Such sites are almost surely undercounting this phenomenon. (It's interesting to note that even with such likely undercounting, Palamida's numbers show a sure and steady linear increase in GPLv3 and AGPLv3 adoption.)

    Relicensing from GPLv2-only

    Relicensing from GPLv2-only is a tougher case, and will take longer for a project that undertakes it. Such relicensing requires some hard work, as a project leader will have to account for the copyright inventory and ensure that she has permission to relicense. This job, while arduous, is not impossible (as many pundits have suggested).

    But even folks like Linus Torvalds himself are thinking about how to get this done. Recently, I began using git more regularly. I noticed that Linus designed git's license to leave open an easily implemented possibility for future GPLv3 licensing. Even the bastion of GPLv2-only-ville wants options for GPLv3-relicensing left open.

    Not Rushing Is a Good Thing

    Software freedom licenses define the rules for our community; they are, in essence, a form of legislation that each project constructs for itself. One “country” (i.e., the GNU project) has changed all its “laws” quickly because it's located on the epicenter of where those “laws” were drafted. Indeed, most of us who were deeply involved with the GPLv3 process were happy to change quickly, because we watched the license construction happen draft-by-draft, and we understood deeply the policy questions and how they were addressed.

    However, most FLOSS developers aren't FLOSS licensing wonks like I and my colleagues at the FSF are. So, we always understood that developers would need time to grok the new license, and that they would prefer to wait for its final release before they bothered. (Not everyone wants to “run the daily snapshot in production”, after all.) The developers should indeed take their time. As a copyleft advocate, I'd never want a project to pick new rules they aren't ready for, or set legal terms they don't fully understand yet.

    The adoption rate of GPLv3 and AGPLv3 seems to reflect this careful and reasoned approach. Pundits can keep saying that the new license has failed, but I'm not going to take those comments seriously until the pundits can prove that this linear growth — a product of each project weighing the options slowly and carefully to come to a decision and then starting the slow migration — has ended. For the moment, though, we seem right on course.


    0Merely replacing the existing GPLv2-or-later notice to read “GPLv3-or-later” (or GPLv3-only) has little effect. In our highly-archived Internet world, the code that was under GPLv2-or-later will always be available somewhere. Since GPLv2 is irrevocable, you can't take away someone's permanent right to copy, modify, and distribute the work under GPLv2. So, until you actually change the code, the benefit of a relicense is virtually non-existent. Indeed, its only actual value is to remind your co-developers of the plan to license as GPLv3-or-later going forward, and to make it easy for them to license their changes under GPLv3-or-later.

    1I also suspect that many projects that are doing this may not be clearly explaining the overall licensing of the project to their users. A side-project that I work on during the weekends called PokerSource is actually in the midst of slow migration from GPLv3-or-later to AGPLv3-or-later. I have carefully explained our license migration and its implications in the toplevel LICENSE file, and encourage other projects to follow that example.

    Posted on Thursday 13 November 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

September

  • 2008-09-20: A Day to Focus on Software Freedom and Reject Proprietary Software

    Today is International Software Freedom Day. I plan to spend the whole day writing as much Free Software as I can get done. I have read about lots of educational events teaching people how to use and install Free Software, and those sound great. I am glad to read stories about how well the day is being spent by many, and I can only hope to have contributed as much as people who spend the day, for example, teaching kids to use GNU/Linux.

    What troubles me, though, is that some events today are sponsored by companies that produce proprietary software. I notice that even the official Software Freedom Day site lists various proprietary (or semi-proprietary) software companies as sponsors. Indeed, I declined an invitation to an event sponsored and hosted by a proprietary software company.

    Today is about saying no to proprietary software, at least for one day. We live in the real world, of course, and some days we have to be willing to set our political beliefs aside to negotiate with proprietary software companies. But, on Software Freedom Day, I hope that our community will send a message to proprietary (or semi-proprietary) software companies that we reject user subjugation and favor software freedom instead.

    Posted on Saturday 20 September 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-09-04: GPL, The 2-clause BSD of Network Services

    Crossposted with autonomo.us.

    So often, a particular strategy becomes dogma. Copyleft licensing constantly allures us in this manner. Every long-term software freedom advocate I have ever known — myself included — has spent periods of time slipping on the comfortable shoes of belief that copyleft is the central catalyst for software freedom.

    Copyleft indeed remains a successful strategy in maximizing software freedom because it backs up a community consensus on software sharing with the protection of the law. However, most people do not comply with the GPL merely because they fear the consequences of copyright infringement. Rather, they comply for altruistic reasons: because it advances their own freedom and the freedom of the people around them.

    Indeed, it is so important to remember that many of the FLOSS programs we use every day are not copylefted, and do not actually have any long-term proprietary forks (for me, Subversion, Trac and Twisted come to mind quickly). Examples like this helped me once again to clear away some clouded thinking about copyleft as a central tenet.

    With this mindset fresh, Mike Linksvayer and I had an excellent discussion last month that solidified this connection to network services, and specifically, the licenses for network services software. Many GPL'd network service applications give no source to users, but that may have little to do with the authors' “failure to upgrade” to the AGPL. In other words, the non-source availability of network service applications that are otherwise licensed in freedom is probably unrelated to the lack of network-freedom provisions in the license.

    In fact, more likely, the network service world now mimics the early days of the BSD licenses. Deployers are “proprietarizing” by default merely because there is no social effect to encourage release of modified source. Often, they likely haven't considered the complex issues of network service freedom, and are simply following common existing practices. The advent of the GPL did help encourage software sharing in the community, but the general change in social standards that accompanied the GPL probably had a more substantial impact.

    Therefore, improved social standards will help improve source sharing in network services. We need to encourage, and more importantly, make it easy for network service deployers to make source of network applications available, regardless of their particular FLOSS license. No existing non-AGPL FLOSS licenses prohibit making the source available to network users. Network providers can and should simply do it voluntarily out of respect for their users. Developers of network service software, even if they do not choose the AGPL, should make it easy for the deployers to give source to their users. I hope to assist in this regard more directly before the end of 2008.

    Posted on Thursday 04 September 2008 by Bradley M. Kuhn.

    Comment on this post in this identi.ca conversation.

  • 2008-09-02: GNU's Birthday

    Twenty-five years ago this month, I had just gotten my first computer, a Commodore 64, and was learning the very basics (quite literally) of programming. Unfortunately for my education, it would be a full eight years before I'd be permitted to see any source code to a computer program that I didn't write myself. I often look back at those eight years and consider that my most formative years of programming learning were wasted, since I was not permitted to study the programs written by the greatest minds.

    Fortunately for all the young programmers to come after me, something else was happening in an office at an MIT building in September 1983 that would make sure everyone would have the freedom to study code, and the freedom to improve it and contribute to the global library of software development knowledge. Richard Stallman announced that he would start the GNU project, a complete operating system that would give all its users freedom.

    I got involved with Free Software in 1992. At the time, I was the one student in my university who had ever heard of GNU and the recently released kernel named Linux. My professors knew of “that Stallman guy” but were focused primarily on academic research. Fortunately for me, they nevertheless gave me free rein over the systems to turn them into what might have been, in late 1992, one of the first Computer Science labs running entirely Free Software.

    Much more has happened since even then. To commemorate all that has come since Stallman's announcement, my colleagues at the FSF, home of the GNU project, released a video for this historic twenty-fifth anniversary. It took twenty-five years, and a fight at the BBC over DRM, but now even a famous, accomplished actor like Stephen Fry is interested in the work that Stallman began way back in a year when Michael Jackson was a musical phenomenon and not merely a punchline.

    These days, I have almost weekly moments of surprise that people outside of the Software Freedom Movement have actually heard of what I do for a living. When Matt Lee (whom I got to know when he came up through the ranks in the 2000's as I did in the 1990's as a new FSF volunteer) told me a few months ago that Stephen Fry had enthusiastically and immediately agreed to make this video, it was yet another moment of surprise. We now live in a movement that impacts everyone in the industrialized world, because nearly everyone who has access to electricity also must use a computer to interact with daily life. So many people are impacted by the problems of proprietary software that Stallman noticed in 1983 impacting his small developer community. Thanks to the work of thousands, we now have the opportunity to welcome new groups into a computing world that can give them freedom. I'm happy that the friendly face of a talented and accomplished entertainer and world-class actor is here to welcome them.

    Posted on Tuesday 02 September 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

August

  • 2008-08-20: Compliance Advice Core-Dumped

    For ten years, I've been building up a bunch of standard advice on GPL compliance. Usually, I've found myself repeating this advice on the phone, again and again, to another new GPL violator who screwed it all up, just like the last one did. In the hopes that we will not have to keep giving this advice one-at-a-time to each violator, my colleagues and I have finally gotten an opportunity to write out in detail our best advice on the subject.

    Somewhere around 2004 or so, I thought that all of the GPL enforcement was going to get easier. After Peter Brown, Eben Moglen, David Turner and I had formalized FSF's GPL Compliance Lab, and Dan Ravicher and I had taught a few CLE classes to lawyers in the field, we believed that the world was getting a clue about GPL compliance. Many people did, of course, and we constantly welcome new groups of well-educated people in the commercial space who comply with the GPL correctly and who interact positively with our community.

    However, the interest in FLOSS keeps growing, rapidly. So, for every new citizen who does the research ahead of time and learns the rules, there are dozens who don't. The education effort is therefore forever ongoing because the newbies always seem to outnumber the old hands. It's our own copyleft version of Eternal September. The whole space is now big enough that one-by-one education in our traditional way can no longer scale.

    Hopefully, publishing some guidelines for GPL compliance will help the education effort scale. If you redistribute GPL'd software commercially in any way, or you are a lawyer who represents people that do, please spend the time to familiarize yourself with this information. If you have ideas on how we can expand this document, we would of course love to hear from you.

    Update (on 2008-08-26): Thanks for all the feedback we've gotten from the community. We've been glad to update the document to incorporate your suggestions.

    Posted on Wednesday 20 August 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-08-16: If The Worst of Us Wins, The Best of Us Surely Will

    There has been much chatter and coverage about the court decision related to the Artistic License last week. Having spent a decade worrying about the Artistic License, I was surprised and relieved to see this decision.

    One of the first tasks I undertook in the late 1990s in the world of Software Freedom licenses was addressing issues surrounding the Artistic License. My first Software Freedom community was the Perl one, but my second was the licensing wonks. Therefore, I walked the line for many years, as I considered the poor drafting of the Original Artistic License. When the Perl6 process started in 2000, I chaired the Licensing Committee and wrote all of the licensing RFCs for that process, including RFC 211, which collected all the historical arguments about the bad drafting of the Artistic License and argued that we change it.

    Last year, I was silent about the lower court decision, because I'd known for years that the Original Artistic License was a poorly drafted and confusing license. I frankly was not surprised that a court had considered it problematic. Of course, I was glad for the appeal, and that there was a widely supported amicus brief arguing that the Artistic License should be treated appropriately as a copyright license. However, I had already prepared myself to live with the fact that my greatest licensing fears had come true: the most poorly drafted FLOSS license had been the first for a USA court to consider, and that court had seen what we all saw — a license that was confusing and could not be upheld due to lack of clarity.

    I was overjoyed last week to see that the Federal Circuit ruled that even a poorly drafted copyright license like that must be taken seriously and that the copyright holder could seek remedies under copyright law. Now that I have seen this decision, I feel confident that the rest of our licenses will breeze through the courts, should the need arise. We've been arguing for a decade that the Artistic License is problematic, and even Larry Wall (its author) admitted that his intent wasn't necessarily to draft a good license but to inspire people to contact him for additional permissions outside the GPL. Nevertheless, he drafted a license that the USA courts clearly see as a valid copyright license. The bottom bar has been set, and since all our other licenses are much clearer, it will be smooth sailing from here on out.

    (Please note, if you are a fan of the Artistic License, the Artistic License 2.0 is a much better option and is recommended. Despite the decision, we should still cease using the Original Artistic License now that we have 2.0.)

    Posted on Saturday 16 August 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

July

  • 2008-07-23: When Will Hosting Sites Allow AGPLv3 Code?

    At the OSCON Google Open Source Update, Chris Dibona reiterated his requirement to see significant adoption before code.google.com will host AGPLv3 projects (his words). I asked him to tell us how tall we in the AGPLv3 community need to be to ride this ride, but unfortunately he reiterated only the bar of “significant adoption”. I therefore am redoubling my efforts to encourage projects to switch to the AGPLv3, and for our community to build a list of AGPLv3'd projects, so that we can convince them.

    Chris argues that including AGPLv3 would encourage license proliferation. On the surface, his arguments seem valid. I don't like license proliferation, either. Indeed, I have been a proponent of reducing license proliferation since around 2000 — long before it was fashionable, and when the OSI itself was the primary purveyor of license proliferation. I'm very glad that everyone has gotten on the same page about this, and would certainly not want to change my position now that we've reached consensus.

    However, AGPLv3 is not an example of license proliferation for three reasons. First, AGPLv3 is a license published by an organization (my old employers, the FSF) that has a 24-year history of publishing — indeed, inventing — the most popular and major licenses available in the FLOSS world. To compare the FSF (as some have) to Nokia, which merely published a vanity license with an OSI rubber stamp, is simply not a valid comparison.

    Second, the history of AGPL itself shows that proliferation is not at work here. AGPL was first drafted and published in early 2002, and has been in constant use since then. It filled a niche for users who were clamoring for a specific license to address a clear concern related to software freedom. I grant that the license is adopted by a small community, but GPL itself started with minimal interest (i.e., only in the GNU project). Also, licenses that are “GPL plus various special exceptions” that deal with tightly confined areas are, similar to AGPLv3, of interest to only small groups currently. There is no reason to reject a license that has a strong level of interest in a small community, particularly if it is — as GPL+exceptions and AGPLv3 are — compatible with existing licenses like GPLv3. In these cases, we should understand the reasons its user community picks it. In the AGPLv3 case, the license addresses important FLOSS principles under serious study by our community. Any license that is actually redundant couldn't pass this test; AGPLv3 can.

    Finally, the AGPLv3 is the outcome of a public process in which Google itself (as well as many others) participated. Indeed, it was the original intent of the GPLv3 drafters to include the Affero clause in the GPLv3 itself. The committees (on which Google served) convinced RMS and other drafters to not include the clause, and that is why it was put into a separate license. We must consider the fairness issue: some members of the community asked us to not include the Affero clause in GPLv3; others wanted it. The parts of the community who didn't want the clause should be accepting of the idea that another publicly-audited license to address this concern should be published for the slighted community.

    Therefore, in this post, I am asking for help: will someone maintain a website that specifically tracks AGPLv3 adoption (as opposed to other sites that try to track everything)? I was going to do it myself, but since I'm the author of the Affero clause and a primary advocate in AGPLv3 adoption, I think it would better if someone else did it. Please email me if you are interested in this volunteer task. I'll update this post once we have a team of folks willing to work on this.

    Posted on Wednesday 23 July 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-07-22: Welte Receives Open Source Award for GPL Enforcement

    About two hours ago, Harald Welte received the 2008 Open Source Award entitled the Defender of Rights. (Open Source awards are renamed for each individual who receives them.) This award comes on the heels of the FSF Award for the Advancement of Free Software in March. I am glad that GPL enforcement work is now receiving the recognition it deserves.

    When I started doing GPL enforcement work in 1999, and even when, two years later, it became a major center of my work (as it remains today), the violations space was a very lonely place to work. During that early period, I and my team at FSF were the only people actively enforcing the GPL on behalf of the Software Freedom Movement. When Harald started gpl-violations.org in 2004, it was a relief to finally see someone else taking GPL violations as seriously as I and my colleagues at the FSF had been for so many years.

    Of course, it was no surprise when Harald received the FSF award earlier this year. This Open Source Award now shows a broader recognition. In fact, I hope that this award is a harbinger to indicate that the larger FLOSS world has realized the tremendous value in consistent and serious GPL enforcement that some of us have done for so long. The copyleft is meaningless if it is not defended against those who ignore it, and I am glad that more of the FLOSS world has begun to see that.

    Posted on Tuesday 22 July 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-07-14: Autonomo.us Computing

    The Network Services committee that I alluded to recently in various interviews is now officially public and named: Autonomo.us. (Thanks to one of the committee members, Evan Prodromou, who donated the domain name.) Autonomo.us is officially endorsed by the FSF.

    I've written before about how discussions began at FSF in January 2002 to address the “ASP loophole of the GPL”. In those months that followed, when I came up with the idea for what would (later be named) the Affero clause, I naïvely thought that a license term for the software would “solve” the Software as a Service (SaaS) problem. Indeed, I considered the problem fully addressed upon publication of the original AGPL, and it was much later before I realized the problem was more complex.

    The AGPLv3 is only one (albeit essential) part of what must be a multi-pronged strategy to address the freedom implications and concerns of SaaS. At Autonomo.us, we have published The Franklin Street Statement on Freedom and Network Services (named for the place it was declared — the location of the post-Temple-Place FSF offices). The Statement is a manifesto (of sorts) outlining the concerns that must be addressed and the beginnings of some ideas for solutions. I hope you will read it and begin considering this issue if you haven't already, and that you will endorse the statement if you already understand the issue. We hope to be publishing more on that site as the year goes on!

    Posted on Monday 14 July 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-07-03: Like Twitter, but with Freedom Inside

    A company called Control Yourself, led by Evan Prodromou (who serves with me and many others on the FSF-endorsed Freedom for Network Services Committee) yesterday launched a site called identi.ca. It's a microblogging service similar to Twitter, but it is designed to respect the rights and freedoms of its users.

    I'm personally excited because the software for the system, Laconica, is under the license that I originally drafted back in 2002, the Affero GPL (which was updated as part of the GPLv3 process, and is now available as AGPLv3). This marks the first time I've seen a company release its product under a network service freedom-defending license from the start.

    The launch comes at an interesting time. Twitter has had no Jabber-based updates for more than a month, and Identica allows updates via Jabber. Thus, in a way, it's more fully featured than Twitter is right now!

    Posted on Thursday 03 July 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

June

  • 2008-06-28: Does This Mean We've “Made It” as a Social Cause?

    I got a phone call yesterday from someone involved with one of the many socially responsible investment houses. It appears that in some (thus far, small) corners of the socially responsible investment community, they've begun the nascent stages of adding “willingness to contribute to FLOSS” to the consideration map of social responsibility. This is an issue that has plagued me personally for many years, and I was excited to receive the call.

    When I graduated high school and read my first book on personal financial management, I learned how to invest for retirement in mutual funds. The book mentioned the (then) somewhat new practice of “socially responsible investing”, which immediately intrigued me. The author argued, however, that it was silly to make investment decisions based on personal beliefs. I immediately disagreed with that, but I discovered that his secondary point was actually accurate: beyond the Big Issues (weapons manufacturing, tobacco, etc.), it was tough to find a fund that actually shared your personal beliefs.

    Once I did some research, I discovered that it wasn't actually as bad as that, because there actually is a pretty good consensus on what is and is not socially responsible (or, at least, the general consensus in this regard seems to match my personal beliefs, anyway). However, I did discover a gaping hole in the socially responsible investment agenda. The biggest social issue in my personal life — the issue of software freedom — was never on others' radar screens as a “socially responsible issue”.

    For example, in 1996, when I had my first opportunity to roll a 401(k) into an investment of my own choosing, I discovered a troubling fact. In every single socially responsible fund, when I looked at the stocks held (sorted by percentage), Microsoft was always in the top ten, and Oracle in the top twenty. Indeed, on most socially responsible axes, Microsoft and Oracle look good: they treat their employees reasonably well, they don't generally build products that actively kill people (although many of us die inside a little bit every time we use proprietary software), and, heck, if they use more DRM, they can ship their software and documentation via the network and won't even ship as many CDs to fill up landfills. This kind of thinking about “socially responsible” ignores how the proprietariness of the company's technology negatively impacts people outside of the company. Nevertheless, for years, I've held my nose and put my retirement money in these funds, content with the compromise that at least I don't have my retirement savings in oil companies.

    I tell this backstory to communicate how glad I was to get the call from an employee of a socially responsible investment house. This fellow was actually investigating the FLOSS credentials of various companies and trying to bring it forward as a criterion when considering how socially responsible their practices are. He seemed genuinely interested in bringing this forward as part of a social agenda for his company. I told him: every great idea starts as a conversation between two people, and enthusiastically answered his queries.

    It was clear FLOSS considerations are new and not widely adopted as a factor in the socially responsible investing world, but I am glad that at least someone in that world is thinking about these questions. Of course, I agree that in the grand scheme, FLOSS issues should not be ranked too highly — certainly issues of environmental sustainability and human rights have a higher and more immediate social impact0. However, given that Microsoft so often ends up in the top ten of “good socially responsible investments”, FLOSS issues are clearly ranked far too low in the calculation.

    Hopefully, this phone call I took yesterday shows we're entering an era where FLOSS issues are on the socially responsible criteria list for investors. I further hope this blog entry doesn't stop socially responsible investors and fund managers from contacting me in the future to get advice on how socially responsible various companies are. I debated whether to write about this call publicly, but ultimately went for it, since it's an issue I think deserves some net.attention. So many of us, FLOSS fans included, must now manage our own retirement accounts, since pension funds have generally given way to self-directed retirement savings options. If you have a fund with a socially responsible investment company, take this opportunity to give them a call or send them a letter to tell them you'd like to see FLOSS issues on the criteria list. If you don't yet invest with a socially responsible company, consider switching to one, as they clearly will be the first to add FLOSS-related criteria to their investing agenda.


    0I have never believed myself that FLOSS is the most important social justice issue in the grand scheme. I struggled for years with the question of whether to devote my career to a social cause that wasn't top priority; things like human rights and environmental sustainability certainly deserve more immediate attention. However, it turned out that my skills, knowledge, background and talent are clearly uniquely tuned to Computer Science in general and FLOSS in particular, and therefore I can have the greatest positive impact focusing on this rather than on would-be higher-priority causes. If only we could get people in these other movements to at least see that they are better off not using Microsoft for their own operations (in my experience, NGOs and NPOs are more likely to stick with proprietary software than for-profit companies are), but that's an agenda for another blog entry.

    Posted on Saturday 28 June 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-06-20: Stop Obsessing and Just Do It: VoIP Encryption Is Easier than You Think

    Ian Sullivan showed me an article that he read about eavesdropping on Internet telephony calls. I'm baffled at the obsession with this issue on two fronts. First, I am amazed that people want to hand their phone calls over to yet another proprietary vendor (aka Skype) that uses unpublished, undocumented, non-standard protocols and respects your privacy even less than the traditional PSTN vendors do. Second, I don't understand why cryptography experts believe we need to develop complicated new technology to solve this problem in the medium term.

    At SFLC, I set up the telephony system as VoIP with encryption on every possible leg. While SFLC sometimes uses Skype, I don't, of course, because it is (a) proprietary software, (b) based on an undocumented protocol, and (c) controlled by a company that has less respect for users' privacy than the PSTN companies themselves. Indeed, security was actually last on our list of reasons to reject Skype, because we already had a simple solution for encrypting our telephony traffic: all calls are made through a VPN.

    Specifically, at SFLC, I set up a system whereby all users have an OpenVPN connection back to the home office. From there, they can register a SIP client with an internal Asterisk server living inside the VPN network. Using that SIP phone, they can call any SFLC employee, fully encrypted. That call continues either on the internal secured network, or back out over the same VPN to the other SIP client. Users can also dial out from there to any PSTN DID.
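
    The Asterisk side of such a setup can be quite small. Here's a rough sketch of what the relevant sip.conf entries might look like (the option names are standard sip.conf fare, but every value below is invented for illustration):

    [general]
    context=internal-phones  ; default dialplan context for SIP calls
    bindaddr=10.8.0.1        ; listen only on the VPN-facing address

    [alice]                  ; one peer entry per employee's SIP client
    type=friend
    host=dynamic             ; laptops register from changing VPN addresses
    secret=example-password
    context=internal-phones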

    Of course, when calling the PSTN, the encryption ends at SFLC's office, but that's the PSTN's fault, not ours. No technological solution — save using a modem to turn that traffic digital — can easily solve that. However, with minimal effort, and using existing encryption subsystems, we have end-to-end encryption for all employee-to-employee calls.

    And it could go even further with a day's worth of work! I have a pretty simple idea on how to have an encrypted call with anyone who happens to have a SIP client and an OpenVPN client. My plan is to set up a public OpenVPN server that accepts connections from any host at all, which would then allow encrypted “phone the office” calls to any SFLC phone with any SIP client anywhere on the Internet. In this way, anyone wishing end-to-end phone encryption to the SFLC need only connect to that publicly accessible OpenVPN and dial our extensions with their SIP client over that line. This solution even has the added bonus that it avoids the common firewall and NAT related SIP problems, since all traffic gets tunneled through the OpenVPN: if OpenVPN (which is, unlike SIP, a single-port UDP/IP protocol) works, SIP automatically does!
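
    The public-facing piece might be little more than a stock OpenVPN server configuration. A minimal sketch, with all paths and addresses as examples only:

    # A single UDP port carries all of the tunneled traffic.
    port 1194
    proto udp
    dev tun
    # Certificates for the TLS handshake; how liberally client
    # certificates get issued is a separate policy decision.
    ca /etc/openvpn/ca.crt
    cert /etc/openvpn/server.crt
    key /etc/openvpn/server.key
    dh /etc/openvpn/dh1024.pem
    # Give each caller an address on a subnet that is firewalled to
    # reach only the internal Asterisk server's SIP and RTP ports.
    server 10.9.0.0 255.255.255.0
    keepalive 10 60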

    The main criticism of this technique regards the silliness of two employees at a conference in San Francisco bouncing all the way through our NYC offices just to make a call to each other. While the Bandwidth Wasting Police might show up at my door someday, I don't actually find this to be a serious problem. The last mile is always the problem in Internet telephony, so a call that goes mostly across a single set of last mile infrastructure in a particular municipality is neither better nor worse than one that takes a long haul round trip. Very occasionally, there is a half second of delay when you have a few VPN-based users on a conference call together, but that has a nice social side effect of stopping people from trying to interrupt each other.

    Finally, the article linked above talks about the issue of variable bit rate compression changing packet size such that even encrypted packets yield possible speech information, since some sounds need larger packets than others. This problem is solved simply for us with two systems: (a) we use µ-law, a very old, constant bit rate codec, and (b) a tiny bit of entropy is added to our packets by default, because the encryption is occurring for all traffic across the VPN connection, not just the phone call itself. Remember: all the traffic is going together across the one OpenVPN UDP port, so an eavesdropper would need to detangle the VoIP traffic from everything else. Indeed, I could easily make (b) even stronger by simply having the SIP client open another connection back to the asterisk host and exchange payloads generated from /dev/random back and forth while the phone call is going on.

    This is really one of those cases where the simpler the solution, the more secure it is. Trying to focus on “encryption of VoIP and VoIP only” is what leads us to the kinds of vulnerabilities described in that article. VoIP isn't like email, where you always need an encryption-unaware delivery mechanism between Alice and Bob. I believe I've described a simple mechanism that can allow anyone with an Asterisk box, an OpenVPN server, and an Internet connection to publish to the world easy instructions for phoning them securely with merely a SIP client plus an OpenVPN client. Why don't we just take the easy and more secure route and do our VoIP this way?

    Posted on Friday 20 June 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

April

  • 2008-04-10: The GPL is a Tool to Encourage Freedom, Not an End in Itself

    I was amazed to be involved in yet another discussion recently regarding the old debate about the scope of the GPL under copyright law. The debate itself isn't amazing — these debates have happened somewhere every six months, almost on cue, since around 1994 or so. What amazed me this time is that some people in the debate believed that the GPL proponents intend to sneakily pursue an increased scope for copyright law. Those who think that have completely misunderstood the fundamental idea behind the GPL.

    I'm disturbed by the notion that some believe the goal of the GPL is to expand copyrightability and the inclusiveness of derivative works. It seems that so many forget (or maybe they never even knew) that copyleft was invented to hack copyright — to turn its typical applications to software inside out. The state of affairs that software is controlled by draconian copyright rules is a lamentable reality; copyleft is merely a tool that diffuses the proprietary copyright weaponry.

    But, if it were possible to really consider reduction in copyright control over software, then I don't know of a single GPL proponent who wouldn't want to bilaterally reduce copyright's scope for software. For example, I've often proposed, since around 2001, that perhaps copyright for software should only last three years, non-renewable, and that it require all who wished to distribute non-public-domain software to register the source with the Copyright Office. At the end of the three years, the Copyright Office would automatically publish that now public-domain source to the world.

    If my hypothetical system were the actual (and only) legal regime for software, and were equally applied to all software — from the fully Free to the most proprietary — I'd have no sadness at all that opportunities for GPL enforcement ended after three years, and that all GPL'd software fell into the public domain on that tight schedule, because proprietary software and FLOSS would have the same treatment. Meanwhile, great benefit would be gained for the freedom of all software users. In short, GPL is not an end in itself, and I wouldn't want to ignore the actual goal — more freedom for software users — merely to strengthen one tool in that battle.

    In one of my favorite films, Kevin Smith's Dogma, Chris Rock's character, Rufus, argues that it's better to have ideas than beliefs, because ideas can change when the situation does, but beliefs become ingrained and are harder to shake. I'm not a belief-less person, but I certainly hold the GPL and the notion of copyleft firmly in the “idea” camp, not the “belief” one. It's unfortunate that the entrenched interests outside of software are (more or less) inadvertently strengthening software copyright, too. Thus, in the meantime, we must hold steadfast to the GPL going as far as is legally permitted under this ridiculously expansive copyright system we have. But, should a real policy dialogue open on the reduction of software copyright's scope, GPL proponents will be the first in line to encourage such bilateral reduction.

    Posted on Thursday 10 April 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

January

  • 2008-01-24: When your apt-mirror is always downloading

    When I started building our apt-mirror, I ran into a problem: the machine was throttled against ubuntu.com's servers, but I had completed much of the download (which took weeks for multiple distributions). I really wanted to roll out the solution quickly, particularly because the service from the remote servers was worse than ever due to the throttling that the mirroring created. But, with the mirror incomplete, I couldn't easily make the still-incomplete repositories available.

    The solution was to simply let Apache redirect users on to the real servers if the mirror doesn't have the file. The first order of business for that is to rewrite and redirect URLs when files aren't found. This is a straightforward Apache configuration:

       RewriteEngine on
       RewriteLogLevel 0
       RewriteCond %{REQUEST_FILENAME} !^/cgi/
       RewriteCond /var/spool/apt-mirror/mirror/archive.ubuntu.com%{REQUEST_FILENAME} !-F
       RewriteCond /var/spool/apt-mirror/mirror/archive.ubuntu.com%{REQUEST_FILENAME} !-d
       RewriteCond %{REQUEST_URI} !(Packages|Sources)\.bz2$
       RewriteCond %{REQUEST_URI} !/index\.[^/]*$ [NC]
       RewriteRule ^(http://%{HTTP_HOST})?/(.*) http://91.189.88.45/$2 [P]
     

    Note a few things there:

    • I have to hard-code an IP number, because as I mentioned in the last post on this subject, I've faked out DNS for archive.ubuntu.com and other sites I'm mirroring. (Note: this has the unfortunate side-effect that I can't easily take advantage of round-robin DNS on the other side.)

    • I avoid taking Packages.bz2 from the other site, because apt-mirror actually doesn't mirror the bz2 files (although I've submitted a patch to it so it will eventually).

    • I make sure that index files get built by my Apache and not redirected.

    • I am using Apache proxying, which gives me Yet Another type of cache temporarily while I'm still downloading the other packages. (I should actually work out a way to have these caches used by apt-mirror itself in case a user has already requested a new package while waiting for apt-mirror to get it.)

    Once I do a rewrite like this for each of the hosts I'm replacing with a mirror, I'm almost done. The problem is that if for any reason my site needs to give a 403 to the clients, I would actually like to double-check to be sure that the URL doesn't happen to work at the place I'm mirroring from.

    My hope was that I could write a RewriteRule based on what the HTTP return code would be when the request completed. This seemed really hard to do, and perhaps impossible. The quickest solution I found was to write a CGI script to do the redirect. So, in the Apache config I have:

    ErrorDocument 403 /cgi/redirect-forbidden.cgi
    

    And, the CGI script looks like this:

    #!/usr/bin/perl
    
    use strict;
    use CGI qw(:standard);
    
    # Apache sets REDIRECT_SCRIPT_URI to the originally requested URL
    # when it invokes an ErrorDocument handler like this one.
    my $val = $ENV{REDIRECT_SCRIPT_URI};
    
    # Strip the local mirror's hostname, capturing the host prefix in
    # $1 and the requested path in $2.
    $val =~ s%^http://(\S+)\.sflc\.info(/.*)$%$2%;
    if ($1 eq "ubuntu-security") {
       $val = "http://91.189.88.37$val";   # upstream for the security host
    } else {
       $val = "http://91.189.88.45$val";   # upstream for everything else
    }
    
    # Send the client a redirect to the real upstream server.
    print redirect($val);
    

    With these changes, the user will be redirected to the original site when the files aren't available on the mirror, and as the mirror gets more complete, they'll get more files from the mirror.

    I still have problems if for any reason the user gets a Packages or Sources file from the original site before the mirror is synchronized, but this rarely happens since apt-mirror is pretty careful. The only time it might happen is if the user did an apt-get update when not connected to our VPN and only a short time later did one while connected.

    Posted on Thursday 24 January 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-01-16: apt-mirror and Other Caching for Debian/Ubuntu Repositories

    Working for a small non-profit, everyone has to wear lots of hats, and one that I have to wear from time to time (since no one else here can) is “sysadmin”. One of the perennial rules of system administration is: you can never give users enough bandwidth. The problem is, they eventually learn how fast your connection to the outside is, and then complain any time a download doesn't run at that speed. Of course, if you have a T1 or better, it's usually the other side that's the problem. So, I look to use our extra bandwidth during off hours to cache large pools of data that are often downloaded. With an organization full of Ubuntu machines, the Ubuntu repositories are an important target for caching.

    apt-mirror is a program that mirrors large Debian-based repositories, including the Ubuntu ones. There are already tutorials available on how to set it up. What I'm writing about here is a way to “force” users to use that repository.

    The obvious way, of course, is to make everyone's /etc/apt/sources.list point at the mirrored repository. This often isn't a good option. Save the servers, the user base here is all laptops, which means they are often on networks that may actually be closer to another package repository, and I want to avoid interfering with that. (Although, given that I can usually serve almost any IP number in the world better than the 30kb/sec that ubuntu.com's servers seem to quickly throttle down to, that probably doesn't matter so much.)

    The bigger problem is that I don't want to be married to the idea that the apt-mirror is part of our essential 24/7 infrastructure. I don't want an angry late-night call from a user because they can't install a package, and I want the complete freedom to discontinue the server at any time, if I find it to be unreliable. I can't do this easily if sources.list files on traveling machines are hard-coded with the apt-mirror server's name or address, especially when I don't know when exactly they'll connect back to our VPN.

    The easier solution is to fake out the DNS lookups via the DNS server used by the VPN and the internal network. This way, users only get the mirror when they are connected to the VPN or in the office; otherwise, they get the normal Ubuntu servers. I had actually forgotten you could fake out DNS on a per-host basis, but asking my friend Paul reminded me quickly. In /etc/bind/named.conf.local (on Debian/Ubuntu), I just add:

    zone "archive.ubuntu.com"      {
            type master;
            file "/etc/bind/db.archive.ubuntu-fake";
    };
    

    And in /etc/bind/db.archive.ubuntu-fake:

    $TTL    604800
    @ IN SOA archive.ubuntu.com.  root.vpn. (
           2008011001  ; serial number                                              
           10800 3600 604800 3600)
         IN NS my-dns-server.vpn.
    
    ;                                                                               
    ;  Begin name records                                                           
    ;                                                                               
    archive.ubuntu.com.  IN A            MY.EXTERNAL.FACING.IP
    

    And there I have it; I just do one of those for each address I want to replace (e.g., security.ubuntu.com). Now, when client machines look up archive.ubuntu.com (et al), they'll get MY.EXTERNAL.FACING.IP, but only when my-dns-server.vpn is first in their resolv.conf.
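
    A quick check from a client on the VPN confirms the fake zone is being served (a sketch; MY.EXTERNAL.FACING.IP stands in for the real address, as above):

     $ host archive.ubuntu.com my-dns-server.vpn
     archive.ubuntu.com has address MY.EXTERNAL.FACING.IP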

    Next time, I'll talk about some other ideas on how I make the apt-mirror even better.

    Posted on Wednesday 16 January 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-01-09: Postfix Trick to Force Secondary MX to Deliver Locally

    Suppose you have a domain name, example.org, that has a primary MX host (mail.example.org) that does most of the delivery. However, one of the users, who works at example.com, actually gets delivery of <user@example.org> at work (from the primary MX for example.com, mail.example.com). Of course, a simple .forward or /etc/aliases entry would work, but this would pointlessly push email back and forth between the two mail servers — in some cases, up to three pointless passes before the final destination! That's particularly an issue in today's SPAM-laden world. Here's how to solve this waste of bandwidth using Postfix.

    This tutorial assumes you have some reasonable background knowledge of Postfix MTA administration. If you don't, this might go a bit fast for you.

    To begin, first note that this setup assumes that you have something like this with regard to your MX setup:

    $ host -t mx example.org
    example.org mail is handled by 10 mail.example.org.
    example.org mail is handled by 20 mail.example.com.
    $ host -t mx example.com
    example.com mail is handled by 10 mail.example.com.
    

    Our first task is to avoid example.org SPAM backscatter on mail.example.com. To do that, we make a file with all the valid accounts for example.org and put it in mail.example.com:/etc/postfix/relay_recipients. (For more information, read the Postfix docs or various tutorials about this.) After that, we have something like this in mail.example.com:/etc/postfix/main.cf:

     relay_domains = example.org
     relay_recipient_maps = hash:/etc/postfix/relay_recipients
     

    And this in /etc/postfix/transport:

     example.org     smtp:[mail.example.org]
     

    This will give proper delivery for our friend <user@example.org> (assuming mail.example.org is forwarding that address properly to <user@example.com>), but the two servers will still push mail back and forth unnecessarily when mail.example.com gets a message for <user@example.org>. What we actually want is to wise up mail.example.com so it “knows” that mail for <user@example.org> is ultimately going to be delivered locally on that server.

    To do this, we add <user@example.org> to the virtual_alias_maps, with an entry like:

    user@example.org      user
    
    so that the key user@example.org resolves to the local username user. Fortunately, Postfix is smart enough to look at the virtual table first before performing a relay.

    Now, what about aliases like <user.lastname@example.org>, which actually forward to <user@example.org>? Those will have the same pointless forwarding from server to server unless we address them specifically. To do so, we use the transport file. Of course, we should already have that catch-all entry there to do the relaying:

    example.org     smtp:[mail.example.org]
    

    But, we can also add entries for specific email addresses in the example.org domain. Fortunately, email address matches in the transport table take precedence over whole-domain match entries (see the transport man page for details). Therefore, we simply add entries to that transport file like this for each of user's aliases:

    user.lastname@example.org    local:user
    
    (Note: that assumes you have a delivery method in master.cf called local. Use whatever transport you typically use to force local delivery.)
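
    One mechanical step glossed over above: Postfix only reads hash: tables after they're compiled with postmap, and the transport and virtual tables are only consulted if main.cf points at them. Roughly (the virtual file name here is my own choice, since I haven't named it above):

     # in mail.example.com:/etc/postfix/main.cf, alongside the relay settings:
     transport_maps = hash:/etc/postfix/transport
     virtual_alias_maps = hash:/etc/postfix/virtual
     
     # then compile the lookup tables and reload:
     postmap /etc/postfix/relay_recipients /etc/postfix/transport /etc/postfix/virtual
     postfix reload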

    And there you have it! If you have (those albeit rare) friendly and appreciative users, they will thank you for the slightly quicker mail delivery, and you'll be glad that you aren't pointlessly shipping SPAM back and forth between MXes.

    Posted on Wednesday 09 January 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2008-01-01: Apache 2.0 -> 2.2 LDAP Changes on Ubuntu

    I thought the following might be of use to those of you who are still using Apache 2.0 with LDAP and wish to upgrade to 2.2. I found this basic information online, but I had to search pretty hard for it. Perhaps presenting it in a more straightforward way will help the next searcher find an answer more quickly. It's probably only of interest if you are using LDAP as your authentication system with an older Apache (e.g., 2.0) and have upgraded to 2.2 on an Ubuntu or Debian system (such as upgrading from dapper to gutsy).

    When running dapper on my intranet web server with Apache 2.0.55-4ubuntu2.2, I had something like this:

         <Directory /var/www/intranet>
               Order allow,deny
               Allow from 192.168.1.0/24 
    
               Satisfy All
               AuthLDAPEnabled on
               AuthType Basic
               AuthName "Example.Org Intranet"
               AuthLDAPAuthoritative on
               AuthLDAPBindDN uid=apache,ou=roles,dc=example,dc=org
               AuthLDAPBindPassword APACHE_BIND_ACCT_PW
               AuthLDAPURL ldap://127.0.0.1/ou=staff,ou=people,dc=example,dc=org?cn
               AuthLDAPGroupAttributeIsDN off
               AuthLDAPGroupAttribute memberUid
    
               require valid-user
        </Directory>
    

    I upgraded that server to gutsy (via dapper → edgy → feisty → gutsy in succession, just because it's safer), and it now has Apache 2.2.4-3build1. The method for doing LDAP authentication is a bit more straightforward now, but it does require this change:

        <Directory /var/www/intranet>
            Order allow,deny
            Allow from 192.168.1.0/24 
    
            AuthType Basic
            AuthName "Example.Org Intranet"
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative on
            AuthLDAPBindDN uid=apache,ou=roles,dc=example,dc=org
            AuthLDAPBindPassword APACHE_BIND_ACCT_PW
            AuthLDAPURL ldap://127.0.0.1/ou=staff,ou=people,dc=example,dc=org
    
            require valid-user
            Satisfy all
        </Directory>
    

    However, this wasn't enough. When I set this up, I got rather strange error messages such as:

    [error] [client MYIP] GROUP: USERNAME not in required group(s).
    

    I found somewhere online (I've now lost the link!) that you can't have standard PAM auth competing with the LDAP authentication. This seemed strange to me, since I'd told Apache that I wanted authentication provided by LDAP; but anyway, doing the following on the system:

    a2dismod auth_pam
    a2dismod auth_sys_group
    

    solved the problem. I decided to move on rather than dig deeper into the true reasons. Sometimes, administration life is actually better with a mystery about.

    Posted on Tuesday 01 January 2008 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

2007

November

  • 2007-11-21: stet and AGPLv3

    Many people don't realize that the GPLv3 process actually began long before the November 2005 announcement. For me and a few others, the GPLv3 process started much earlier. Also, in my view, it didn't actually end until this week, when the FSF released the AGPLv3. Today, I'm particularly proud that SFLC released the first software covered by the terms of that license.

    The GPLv3 process focused on the idea of community, and a community is built from bringing together many individual experiences. I am grateful for all my personal experiences throughout this process. Indeed, I would guess that other GPL fans like myself remember, as I do, the first time they heard the phrase “GPLv3”. For me, it was a bit early — on Tuesday 8 January 2002 in a conference room at MIT. On that day, Richard Stallman, Eben Moglen and I sat down to have an all-day meeting that included discussions regarding updating GPL. A key issue that we sought to address was (in those days) called the “Application Service Provider (ASP) problem” — now called “Software as a Service (SaaS)”.

    A few weeks later, on the telephone with Eben one morning, as I stood in my kitchen making oatmeal, we discussed this problem. I pointed out the oft-forgotten section 2(c) of the GPL [version 2]. I argued that contrary to popular belief, it does have restrictions on some minor modifications. Namely, you have to maintain those print statements for copyright and warranty disclaimer information. It's reasonable, in other words, to restrict some minor modifications to defend freedom.

    We also talked about that old Computer Science problem of having a program print its own source code. I proposed that maybe we needed a section 2(d) that required that, if a program prints its own source to the user, you can't remove that feature, and that the feature must always print the complete and corresponding source.

    Within two months, Affero GPLv1 was published — an authorized fork of the GPL to test the idea. From then until AGPLv3, that “Affero clause” has had many changes, iterations and improvements, and I'm grateful for all the excellent feedback, input and improvements that have gone into it. The result, the Affero GPLv3 (AGPLv3) released on Monday, is an excellent step forward for software freedom licensing. While the community process indicated that the preference was for the Affero clause to be part of a separate license, I'm nevertheless elated that the clause continues to live on and be part of the licensing infrastructure defending software freedom.

    Other than coining the Affero clause, my other notable personal contribution to the GPLv3 was management of a software development project to create the online public commenting system. To do the programming, we contracted with Orion Montoya, who has extensive experience doing semantic markup of source texts from an academic perspective. Orion gave me my first introduction to the whole “Web 2.0” thing, and I was amazed how useful the result was; it helped the leaders of the process easily grok the public response. For example, the intensity highlighting — which shows the hot spots in the text that received the most comments — gives a very quick picture of sections that are really of concern to the public. In reviewing the drafts today, I was reminded that the big red area in section 1 about “encryption and authorization codes” is substantially changed and less intensely highlighted by draft 4. That quick-look gives a clear picture of how the community process operated to get a better license for everyone.

    Orion, a Classics scholar as an undergrad, named the software stet for its original Latin definition: “let it stand as it is”. It was his hope that stet (the software) would help along the GPLv3 process so that our whole community, after filing comments on each successive draft, could look at the final draft and simply say: Stet!

    Stet has a special place in software history, I believe, even if it's just a purely geeky one. It is the first software system in history to be meta-licensed. Namely, it was software whose output was its own license. It's with that exciting hacker concept that I put up a Trac instance for stet today, licensed under the terms of the AGPLv3 [ which is now on Gitorious ] 1.

    Stet is by no means ready for drop-in production. Like most software projects, we didn't estimate perfectly how much work would be needed. We got lazy about organization early on, which means it still requires a by-hand install, and new texts must be carefully marked up by hand. We've moved on to other projects, but hopefully SFLC will host the Trac instance indefinitely so that other developers can make it better. That's what copylefted FOSS is all about — even when it's SaaS.


    1Actually, it's under AGPLv3 plus an exception to allow for combining with the GPLv2-only Request Tracker, with which parts of stet combine.

    Posted on Wednesday 21 November 2007 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

August

  • 2007-08-24: More Xen Tricks

    In my previous post about Xen, I talked about how easy Xen is to configure and set up, particularly on Ubuntu and Debian. I'm still grateful that Xen remains easy; however, I've lately had a few Xen-related challenges that needed attention. In particular, I've needed to create some surprisingly messy solutions when using vif-route to route multiple IP numbers on the same network through the dom0 to a domU.

    I tend to use vif-route rather than vif-bridge, as I like the control it gives me in the dom0. The dom0 becomes a very traditional packet-forwarding firewall that can decide whether or not to forward packets to each domU host. However, I recently found some deep weirdness in IP routing when I use this approach while needing multiple Ethernet interfaces on the domU. Here's an example:

    Multiple IP numbers for Apache

    Suppose the domU host, called webserv, hosts a number of websites, each with a different IP number, so that I have Apache doing something like1:

    Listen 192.168.0.200:80
    Listen 192.168.0.201:80
    Listen 192.168.0.202:80
    ...
    NameVirtualHost 192.168.0.200:80
    <VirtualHost 192.168.0.200:80>
    ...
    NameVirtualHost 192.168.0.201:80
    <VirtualHost 192.168.0.201:80>
    ...
    NameVirtualHost 192.168.0.202:80
    <VirtualHost 192.168.0.202:80>
    ...
    

    The Xen Configuration for the Interfaces

    Since I'm serving all three of those sites from webserv, I need all those IP numbers to be real, live IP numbers on the local machine as far as the webserv is concerned. So, in dom0:/etc/xen/webserv.cfg I list something like:

    vif  = [ 'mac=de:ad:be:ef:00:00, ip=192.168.0.200',
             'mac=de:ad:be:ef:00:01, ip=192.168.0.201',
             'mac=de:ad:be:ef:00:02, ip=192.168.0.202' ]
    

    … And then make webserv:/etc/iftab look like:

    eth0 mac de:ad:be:ef:00:00 arp 1
    eth1 mac de:ad:be:ef:00:01 arp 1
    eth2 mac de:ad:be:ef:00:02 arp 1
    

    … And make webserv:/etc/network/interfaces (this is probably Ubuntu/Debian-specific, BTW) look like:

    auto lo
    iface lo inet loopback
    auto eth0
    iface eth0 inet static
     address 192.168.0.200
     netmask 255.255.255.0
    auto eth1
    iface eth1 inet static
     address 192.168.0.201
     netmask 255.255.255.0
    auto eth2
    iface eth2 inet static
     address 192.168.0.202
     netmask 255.255.255.0
    

    Packet Forwarding from the Dom0

    But, this doesn't get me the whole way there. My next step is to make sure that the dom0 is routing the packets properly to webserv. Since my dom0 is heavily locked down, all packets are dropped by default, so I have to let through explicitly anything I'd like webserv to be able to process. So, I add some code to my firewall script on the dom0 that looks like:2

    webIpAddresses="192.168.0.200 192.168.0.201 192.168.0.202"
    UNPRIVPORTS="1024:65535"
    
    for dport in 80 443;
    do
      for sport in $UNPRIVPORTS 80 443 8080;
      do
        for ip in $webIpAddresses;
        do
          /sbin/iptables -A FORWARD -i eth0 -p tcp -d $ip \
            --syn -m state --state NEW \
            --sport $sport --dport $dport -j ACCEPT
    
          /sbin/iptables -A FORWARD -i eth0 -p tcp -d $ip \
            --sport $sport --dport $dport \
            -m state --state ESTABLISHED,RELATED -j ACCEPT
    
          /sbin/iptables -A FORWARD -o eth0 -s $ip \
            -p tcp --dport $sport --sport $dport \
            -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        done  
      done
    done
    

    Phew! So at this point, I thought I was done. The packets should find their way forwarded through the dom0 to the Apache instance running on the domU, webserv. While that much was true, I now had the additional problem that packets got lost in a bit of a black hole on webserv. When I discovered the black hole, I quickly realized why. It was somewhat atypical, from webserv's point of view, to have three “real” and different Ethernet devices with three different IP numbers, which all talk to the exact same network. More intelligent routing was needed.3

    Routing in the domU

    While most non-sysadmins still use the route command to set up local IP routes on a GNU/Linux host, iproute2 (available via the ip command) has been a standard part of GNU/Linux distributions and supported by Linux for nearly ten years. To properly support the situation of multiple (from webserv's point of view, at least) physical interfaces on the same network, some special iproute2 code is needed. Specifically, I set up separate route tables for each device. I first encoded their names in /etc/iproute2/rt_tables (the numbers 16-18 are arbitrary, BTW):

    16      eth0-200
    17      eth1-201
    18      eth2-202
    

    And here are the ip commands that I thought would work (but didn't, as you'll see next):

    /sbin/ip route del default via 192.168.0.1
    
    for table in eth0-200 eth1-201 eth2-202;
    do
       iface=`echo $table | perl -pe 's/^(\S+)\-.*$/$1/;'`
       ipEnding=`echo $table | perl -pe 's/^.*\-(\S+)$/$1/;'`
       ip=192.168.0.$ipEnding
       /sbin/ip route add 192.168.0.0/24 dev $iface table $table
    
       /sbin/ip route add default via 192.168.0.1 table $table
       /sbin/ip rule add from $ip table $table
       /sbin/ip rule add to 0.0.0.0 dev $iface table $table
    done
    
    /sbin/ip route add default via 192.168.0.1 
    

    The idea is that each table will use rules to force all traffic coming in on the given IP number and/or interface to always go back out on the same, and vice versa. The key is these two lines:

       /sbin/ip rule add from $ip table $table
       /sbin/ip rule add to 0.0.0.0 dev $iface table $table
    

    The first rule says that when traffic is coming from the given IP number, $ip, the routing rules in table $table should be used. The second says that traffic to anywhere, when bound for interface $iface, should use table $table.

    The tables themselves are set up to always make sure the local network traffic goes through the proper associated interface, and that the network router (in this case, 192.168.0.1) is always used for foreign networks, but that it is reached via the correct interface.
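
    Concretely, each per-device table ends up holding just two routes. For the first table, a sketch of what ip route show would print (using the addresses above):

     $ /sbin/ip route show table eth0-200
     192.168.0.0/24 dev eth0  scope link
     default via 192.168.0.1 dev eth0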

    This is all well and good, but it doesn't work. Certain instructions fail with the message, RTNETLINK answers: Network is unreachable, because the 192.168.0.0 network cannot be found while the instructions are running. Perhaps there is an elegant solution; I couldn't find one. Instead, I temporarily set up “dummy” global routes in the main route table and deleted them once the table-specific ones were created. Here's the new bash script that does that (the added lines are marked with an # added comment):

     /sbin/ip route del default via 192.168.0.1
     for table in eth0-200 eth1-201 eth2-202;
     do
        iface=`echo $table | perl -pe 's/^(\S+)\-.*$/$1/;'`
        ipEnding=`echo $table | perl -pe 's/^.*\-(\S+)$/$1/;'`
        ip=192.168.0.$ipEnding
        /sbin/ip route add 192.168.0.0/24 dev $iface table $table
     
        /sbin/ip route add 192.168.0.0/24 dev $iface src $ip          # added
     
        /sbin/ip route add default via 192.168.0.1 table $table
        /sbin/ip rule add from $ip table $table
     
        /sbin/ip rule add to 0.0.0.0 dev $iface table $table
     
        /sbin/ip route del 192.168.0.0/24 dev $iface src $ip          # added
     done
     /sbin/ip route add 192.168.0.0/24 dev eth0 src 192.168.0.200     # added
     /sbin/ip route add default via 192.168.0.1 
     /sbin/ip route del 192.168.0.0/24 dev eth0 src 192.168.0.200     # added
    

    I am pretty sure I'm missing something here — there must be a better way to do this, but the above actually works, even if it's ugly.

    Alas, Only Three

    There was one additional confusion I put myself through while implementing the solution. I was actually trying to route four separate IP addresses into webserv, but discovered this error message (found via dmesg on the domU): netfront can't alloc rx grant refs. A quick google around turned up the XenFaq, which says that Xen 3 cannot handle more than three network interfaces per domU. Seems strangely arbitrary to me; I'd love to hear why it cuts off at three. I can imagine limits at one and two, but it seems that once you can do three, n should be possible (perhaps still with linear slowdown or some such). I'll have to ask the Xen developers (or UTSL) some day to find out what makes it possible to have three work but not four.


    1Yes, I know I could rely on client-provided Host: headers and do this with full name-based virtual hosting, but I don't like to do that for good reason (as outlined in the Apache docs).

    2Note that the above firewall code must run on dom0, which has one real Ethernet device (its eth0) that is connected properly to the wide 192.168.0.0/24 network, and should have some IP number of its own there — say 192.168.0.100. And, don't forget that dom0 is configured for vif-route, not vif-bridge. Finally, for brevity, I've left out some of the firewall code that FORWARDs through key stuff like DNS. If you are interested in it, email me or look it up in a firewall book.

    3I was actually a bit surprised at this, because I often have multiple IP numbers serviced from the same computer and physical Ethernet interface. However, in those cases, I use virtual interfaces (eth0:0, eth0:1, etc.). On a normal system, Linux does the work of properly routing the IP numbers when you attach multiple IP numbers virtually to the same physical interface. However, in Xen domUs, the physical interfaces are locked by Xen to only permit specific IP numbers to come through, and while you can set up all the virtual interfaces you want in the domU, it will only get packets destined for the IP number specified in the vif section of the configuration file. That's why I added my three different “actual” interfaces in the domU.

    Posted on Friday 24 August 2007 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

June

  • 2007-06-12: Virtually Reluctant

    Way back when User Mode Linux (UML) was the “only way” the Free Software world did anything like virtualization, I was already skeptical. Those of us who lived through the coming of age of Internet security — with a remote root exploit for every day of the week — became obsessed with the chroot and its ultimate limitations. Each possible upgrade to a better, more robust virtual environment was met with suspicion on the security front. I joined the many who doubted that you could truly secure a machine that offered disjoint services provisioned on the same physical machine. I've recently revisited this position. I won't say that Xen has completely changed my mind, but I am open-minded enough again to experiment.

    For more than a decade, I have used chroots as a mechanism to segment a service that needed to run on a given box. In the old days of ancient BINDs and sendmails, this was often the best we could do when living with a program we didn't fully trust to be clean of remotely exploitable bugs.

    I suppose those days gave us all a rather strange sense of computer security. I constantly have the sense that two services running on the same box always endanger each other in some fundamental way. It therefore took me a while before I was comfortable with the resurgence of virtualization.

    However, what ultimately drew me in was the simple fact that modern hardware is just too darn fast. It's tough to get a machine these days that isn't ridiculously overpowered for most tasks you put in front of it. CPUs sit idle; RAM sits empty. We should make more efficient use of the hardware we have.

    Even with that reality, I might have given up if it wasn't so easy. I found a good link about Debian on Xen, a useful entry in the Xen Wiki, and some good network and LVM examples. I also quickly learned how to use RAID/LVM together for disk redundancy inside Xen instances. I even got bonded ethernet working with some help to add additional network redundancy.

    So, one Saturday morning, I headed into the office, and left that afternoon with two virtual servers running. It helped that Xen 3.0 is packaged properly for recent Ubuntu versions, and a few obvious apt-get installs get you what you need on edgy and feisty. In fact, I only struggled (and only just a bit) with the network, but quickly discovered two important facts:

    • VIF network routing in my opinion is a bit easier to configure and more stable than VIF bridging, even if routing is a bit slower.
    • sysctl -w net.ipv4.conf.DEVICE.proxy_arp=1 is needed to make the network routing down into the instances work properly (see the note just after this list).
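
    Since settings made with sysctl -w vanish on reboot, the usual trick applies (a sketch; eth0 stands in for whatever DEVICE is in your setup):

     # in /etc/sysctl.conf, so the proxy ARP setting survives a reboot:
     net.ipv4.conf.eth0.proxy_arp = 1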

    I'm not completely comfortable yet with the security of virtualization. Of course, locking down the Dom0 is absolutely essential, because there lie the keys to your virtual kingdom. I lock it down with iptables so that only SSH from a few trusted hosts comes in, and even services as fundamental as DNS can only be had from a few trusted places. But, I still find myself imagining ways people could bust through the instance kernels and find their way to the hypervisor.

    I'd really love to see a strong line-by-line code audit of the hypervisor and related utilities to be sure we've got something we can trust. However, in the meantime, I certainly have been sold on the value of this approach, and am glad it's so easy to set up.

    Posted on Tuesday 12 June 2007 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

May

  • 2007-05-08: Tools for Investigating Copyright Infringement

    Nearly all software developers know that software is covered by copyright. Many know that copyright covers the expression of an idea fixed in a medium (such as a series of bytes), and that the copyright rules govern the copying, modifying and distributing of the work. However, only a very few have considered the questions that arise when trying to determine if one work infringes the copyright of another.

    Indeed, in the world of software freedom, copyright is seen as a system we have little choice but to tolerate. Many Free Software developers dislike the copyright system we have, so it is little surprise that developers want to spend minimal time thinking about it. Nevertheless, the copyright system is the foremost legal framework that governs software1, and we have to live within it for the moment.

    My fellow developers have asked me for years what constitutes copyright infringement. In turn, for years, I have asked the lawyers I worked with to give me guidelines to pass on to the Free Software development community. I've discovered that it's difficult to adequately describe the nature of copyright infringement to software developers. While it is easy to give pathological examples of obvious infringement (such as taking someone's work, removing their copyright notices and distributing it as your own), it quickly becomes difficult to give definitive answers as to whether some particular real-world activity constitutes infringement.

    In fact, in nearly every GPL enforcement case that I've worked on in my career, the fact that infringement had occurred was never in dispute. The typical GPL violator started with a work under GPL, made some modifications to a small portion of the codebase, and then distributed the whole work in binary form only. It is virtually impossible to act in that way and still not infringe the original copyright.

    Usually, the cases of “hazy” copyright infringement come up the other way around: when a Free Software program is accused of infringing the copyright of some proprietary work. The most famous accusation of this nature came from Darl McBride and his colleagues at SCO, who claimed that something called “Linux” infringed his company's rights. We now know that there was no copyright infringement (BTW, whether McBride meant to accuse the GNU/Linux operating system or the kernel named Linux, we'll never actually know). However, the SCO situation educated the Free Software community that we must strive to answer quickly and definitively when such accusations arise. The burden of proof is usually on the accuser, but being able to make a preemptive response to even the hint of an allegation is always advantageous when fighting FUD in the court of public opinion.

    Finally, issues of “would-be” infringement detection come up for companies during due diligence work. Ideally, there should be an easy way for companies to confirm which parts of their systems are derivatives of Free Software systems, which would make compliance with licenses easy. A few proprietary software companies provide this service; however, there should be readily available Free Software tools (just as there should be for all tasks one might want to perform with a computer).

    It is not so easy to create such tools. Copyright infringement is not trivially defined; in fact, most non-trivial situations require a significant amount of both technical and legal judgement. Software tools cannot make a legal conclusion regarding copyright infringement. Rather, successful tools will guide an expert's analysis of a situation. Such systems will immediately identify the rarely-found obvious indications of infringement, bring to the forefront facts that need an exercise of judgement, and leave everything else in the background.

    In this multi-part series of blog entries, I will discuss the state of the art in these Free Software systems for infringement analysis and what plans our community should make for the creation of Free systems that address this problem.


    1 Copyright is the legal system that non-lawyers usually identify most readily as governing software, but the patent system (unfortunately) also governs software in many countries, and many non-Free Software licenses (and a few of the stranger Free Software ones) also operate under contract law as well as copyright law. Trade secrets are often involved with software as well. Nevertheless, in the Software Freedom world, copyright is the legal system of primary attention on a daily basis.

    Posted on Tuesday 08 May 2007 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2007-05-05: Walnut Hills, AP Computer Science, 1998-1999

    I taught AP Computer Science at Walnut Hills High School in Cincinnati, OH during the 1998-1999 school year.

    I taught this course because:

    • They were desperate for a teacher. The rather incompetent teacher who was scheduled to teach the course quit (actually, frighteningly enough, she got a higher paying and higher ranking job in a nearby school system) a few weeks before the school year was to start.
    • The environment was GNU/Linux using GCC's C++ compiler. I went to the job interview because a mother of someone in the class begged me to go, but I was going to walk out as soon as I saw I'd have to teach on Microsoft (which I assumed it would be). My jaw literally dropped when I saw:
    • The students had built their own lab, which even got covered in the Cincinnati Post. I was quite amazed that some of the most brilliant high school students I've ever seen were assembled there in one classroom.

    It became quite clear to me that I owed it to these students to teach the course. They'd discovered Free Software before the boom, and built their own lab despite the designated CS teacher obviously knowing a hell of a lot less about the field than they did. There wasn't a person qualified and available, in my view, in all of Cincinnati to teach the class. High school teacher wages are traditionally pathetic. So, I joined the teacher's union and took the job.

    Doing this work delayed my thesis and graduation from the Master's program at University of Cincinnati for yet another year, but it was worth doing. Even almost a decade later, it ranks in my mind on the top ten list of great things I've done in my life, even despite all the exciting Free Software work I've been involved with in my positions at the FSF and the Software Freedom Conservancy.

    I am exceedingly proud of what my students have accomplished. It's clear to me that somehow we assembled an incredibly special group of Computer Science students; many of them have gone on to make interesting contributions. I know they didn't always like that I brought my Free Software politics into the classroom, but I think we had a good year, and their excellent results on that AP exam showed it. Here are a few of my students from that year who have a public online life:

    If you were my student at Walnut Hills and would like a link here, let me know and I'll add one.

    Posted on Saturday 05 May 2007 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

April

  • 2007-04-17: Remember the Verbosity (A Brief Note)

    I don't remember when it happened, but sometime in the past four years, the Makefiles for the kernel named Linux changed. I can't pinpoint the date, but I do recall that sometime “recently” the kernel build output stopped looking like what I remember from 1991, and started looking like this:

    CC arch/i386/kernel/semaphore.o
    CC arch/i386/kernel/signal.o

    This is a heck of a lot easier to read, but there was something cool about having make display the whole gcc command lines, like this:

    gcc -m32 -Wp,-MD,arch/i386/kernel/.semaphore.o.d -nostdinc -isystem /usr/lib/gcc/i486-linux-gnu/4.0.3/include -D__KERNEL__ -Iinclude -include include/linux/autoconf.h -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -ffreestanding -Os -fomit-frame-pointer -pipe -msoft-float -mpreferred-stack-boundary=2 -march=i686 -mtune=pentium4 -Iinclude/asm-i386/mach-default -Wdeclaration-after-statement -Wno-pointer-sign -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(semaphore)" -D"KBUILD_MODNAME=KBUILD_STR(semaphore)" -c -o arch/i386/kernel/semaphore.o arch/i386/kernel/semaphore.c
    gcc -m32 -Wp,-MD,arch/i386/kernel/.signal.o.d -nostdinc -isystem /usr/lib/gcc/i486-linux-gnu/4.0.3/include -D__KERNEL__ -Iinclude -include include/linux/autoconf.h -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -ffreestanding -Os -fomit-frame-pointer -pipe -msoft-float -mpreferred-stack-boundary=2 -march=i686 -mtune=pentium4 -Iinclude/asm-i386/mach-default -Wdeclaration-after-statement -Wno-pointer-sign -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(signal)" -D"KBUILD_MODNAME=KBUILD_STR(signal)" -c -o arch/i386/kernel/signal.o arch/i386/kernel/signal.c

    I never gave it much thought, since the new form was easier to read. I figured that those folks who still eat kernel code for breakfast knew about this change well ahead of time. Of course, they were the only ones who needed to see the verbose output of the gcc command lines. I could live with seeing the simpler CC lines for my purposes, until today.

    I was compiling kernel code and for the first time since this change in the Makefiles, I was using a non-default gcc to build Linux. I wanted to double-check that I'd given the right options to make throughout the process. I therefore found myself looking for a way to see the full output again (and for the first time). It was easy enough to figure out: giving the variable setting V=1 to make gives you the verbose version. For you Debian folks like me, we're using make-kpkg, so the line we need looks like: MAKEFLAGS="V=1" make-kpkg kernel_image.
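
    In other words (the first form is plain kbuild in the kernel tree; the second is the Debian route mentioned above):

     make V=1                                  # verbose kbuild output
     MAKEFLAGS="V=1" make-kpkg kernel_image    # same thing, via make-kpkg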

    It's nice sometimes to pretend I'm compiling 0.99pl12 again and not 2.6.20.7. :) No matter which options you give make, it is still a whole lot easier to bootstrap Linux these days.

    Posted on Tuesday 17 April 2007 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2007-04-10: User-Empowered Security via encfs

    One of my biggest worries in using a laptop is that data can suddenly become available to anyone in the world if a laptop is lost or stolen. I was reminded of this during the mainstream media coverage1 of this issue last year.

    There's the old security through obscurity perception of running GNU/Linux systems. Proponents of this theory argue that most thieves (or impromptu thieves, who find a lost laptop but decide not to return it to its owner) aren't likely to know how to use a GNU/Linux system, and will probably wipe the drive before selling it or using it. However, with the popularity of Free Software rising, this old standby (which never should have been a standby anyway, of course) doesn't even give an illusion of security anymore.

    I have been known as a computer security paranoid in my time, and I keep a rather strict regimen of protocols for my own personal computer security. But, I don't like to inflict onerous new security procedures on the otherwise unwilling. Generally, people will find ways around security procedures when they aren't fully convinced the procedures are necessary, and you're often left with a situation just as bad or worse than the one you started with.

    My solution for the lost/stolen laptop security problem was therefore two-fold: (a) education among the userbase about how common it is to have a laptop lost or stolen, and (b) providing a simple user-space mechanism for encrypting sensitive data on the laptop. Since (a) is somewhat obvious, I'll talk about (b) in detail.

    I was fortunate that, in parallel, my friend Paul and one of my coworkers discovered how easy it is to use encfs and told me about it. encfs uses the Filesystem in Userspace (FUSE) to store encrypted data right in a user's own home directory. And, it is trivially easy to set up! I used Paul's tutorial myself, but there are many published all over the Internet.
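
    To give a flavor of just how easy: the whole setup is essentially one command (a minimal sketch; the paths are my own arbitrary choices, and the first run offers to create both directories and prompts for a passphrase):

     $ encfs ~/.encrypted ~/private    # ciphertext lives in ~/.encrypted; cleartext appears at ~/private
     $ mv ~/financial-records ~/private/
     $ fusermount -u ~/private        # unmount; only the encrypted form remains on disk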

    My favorite part of this solution is that rather than an onerous mandated procedure, encfs turns security into user empowerment. My colleague James wrote up a tutorial for our internal Wiki, and I've simply encouraged users to take a look and consider encrypting their confidential data. Even though not everyone has taken it up yet, many already have. When a new security measure requires substantial change in behavior of the user, the measure works best when users are given an opportunity to adopt it at their own pace. FUSE deserves a lot of credit in this regard, since it lets users switch their filesystem to encryption in pieces (unlike other cryptographic filesystems that require some planning ahead). For my part, I've been slowly moving parts of my filesystem into an encrypted area as I move aside old habits gradually.

    I should note that this solution isn't completely without cost. First, there is no metadata encryption, but I am really not worried about interlopers finding out how big our nameless files and directories are and who created them (anyway, with an SVN checkout, the interesting metadata is in .svn, so it's encrypted in this case). Second, we've found that I/O-intensive file operations take approximately twice as long (both under ext3 and XFS) when using encfs. I haven't moved my email archives to my encrypted area yet because of the latter drawback. However, for all my other sensitive data (confidential text documents, IRC chat logs, financial records, ~/.mozilla, etc.), I don't really notice the slow-down using a 1.6 GHz CPU with ample free RAM. YMMV.


    1 BTW, I'm skeptical about the FBI's claim in that old Washington Post article which states “review of the equipment by computer forensic teams has determined that the data base remains intact and has not been accessed since it was stolen”. I am mostly clueless about computer forensics; however, barring any sort of physical seal on the laptop or hard drive casing, could a forensics expert tell if someone had pulled out the drive, put it in another computer, did a dd if=/dev/hdb of=/dev/hda, and then put it back as it was found?

    Posted on Tuesday 10 April 2007 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

2005

May

  • 2005-05-10: CP Technologies CP-UH-135 USB 2.0 Hub

    I needed to pick a small, inexpensive, 2.0-compliant USB hub for myself, and one for any of the users at my job who asked for one. I found one, the “CP Technologies Hi-Speed USB 2.0 Hub”, which is part number CP-UH-135. This worked great with GNU/Linux without any trouble (using Linux 2.6.10 as distributed by Ubuntu), at least at first.

    [Image: the CP-UH-135 USB hub, with the annoying LED coming right at you]

    I used this hub without too much trouble for a number of months. Then, one day, I plugged in a very standard PS/2-to-USB converter (a cable that takes a standard PS/2 mouse and PS/2 keyboard and makes them show up as USB devices). The hub began to heat up, and the smell of burning electronics came from it. After a few weeks, the hub began to generate serious USB errors from the kernel named Linux, and I finally gave up on it. I don't recommend this hub!

    Finally, it has one additional annoying drawback for me: the blue LED power light on the side of the thing is incredibly distracting. I put a small piece of black tape over it to block it, but it only helped a little. Such a powerful power light on a small device like that is highly annoying. I know geeks are really into these sorts of crazy blue LEDs, but for my part, I always feel like I am about to be assimilated by a funky post-modern Borg.

    I am curious if there are any USB hubs out there that are more reliable and don't have annoying lights. I haven't used USB hubs in the past so I don't know if a power LED is common. If you find one, I'd encourage you to buy that one instead of this one. Almost anywhere you put the thing on a desk, the LED catches your eye.

    Posted on Tuesday 10 May 2005 by Bradley M. Kuhn.

    Submit comments on this post to <bkuhn@ebb.org>.

  • 2005-05-04: IBM xSeries EZ Swap Hard Drive Trays