
Ruby, Bion, and Software as Sociality


Rubyists impress me as a group of people who are broadly literate, so I was only slightly surprised that twice in one week the subject of psychoanalysis (of all things) came up in the context of Ruby. In the first instance, I was sitting next to a gentleman at the monthly pdx.rb meeting and made a comment about passion and libido in the context of Ruby (Ruby is fun, as they say); he mentioned to me that he was a working psychoanalyst. And just a few days later, a gentleman who was interviewing me for a Ruby job asked me if I am familiar with the Slovenian psychoanalytic philosopher Slavoj Zizek. (I am.)

The Ruby psychoanalyst and I engaged in a wide-ranging email conversation after the meeting, and it strikes me that our conversation is relevant to and indicative of a development in the culture and reality of software. Ruby in particular and Open Source software in general are what I might call software as sociality, or software that fosters literacy. And literacy fosters collaboration and sociality. This loop is a kind of virtuous helical spiral, a kind of cultural and intellectual bootstrapping process.

Somehow, a very interesting psychoanalytic thinker by the name of Wilfred Bion came up. Bion was an immensely literate person who dedicated his life to helping people develop intellectually and emotionally. He was himself a person of very wide-ranging interests, capable of rigorous mathematical and philosophical thought and capable of being both pithy and very funny. In short, in my estimation he had wisdom and wit. Bion was an excellent example of someone conversant and felicitous in many domains. He was, I might say, a Rubyist before Ruby existed as the language and culture we know today.

Writing something on Bion, in fact, will help me explain something about how Ruby as a language walks the balance necessary to be productive and at the same time to promote literacy and understanding. Bion had a way of putting things cogently, of distilling wisdom, that I find satisfying, and so I want to credit him where credit is due, without falling into the trap of Bion said this and Bion said that, which in any event would not be very Bionian. He spends a considerable amount of time in his seminars reminding his listeners to let the jargon go once one gets an understanding, not to hold tightly on to jargon. (Think about that in the context of Ruby.)

And there is something more here in this tension between respect for, love of the work that others have done and the calcification of jargon. How can we become literate and learned in a technical or other domain without becoming stultified by jargon? How can we wield the artifacts of civilization with felicity? Languages, whether technical or otherwise are living, social artifacts and they need context, collaboration, and a kind of commerce to grow and stay lively.

And now, while I’m thinking through this, another association comes to mind. Somewhere, in one of the seminars, Bion takes up an idea from Freud on “interpretation as construction.” In fact, there is an essay in the Standard Edition in which Freud distills his own thoughts on constructions and interpretation. But it is Bion who brings home the insight that simply giving an interpretation to someone as a finished and shrink-wrapped piece of jargon does little for them. An interpretive act takes place in a context and in a history, and it is the context and history that give it meaning and life if it is to have any real life at all. Building up or constructing an interpretation using a living context or history gives the other person an opportunity to think along and come to something of an understanding of what an interpretation means. Construction in this sense gives interpretation efficacy and life.

Bion mentions, tangentially, how Milton’s poetry was not English Literature (for him). We can fall into the trap of thinking that we are teaching or being taught English Literature when encountering Milton, but Milton was “saying what he had to say” in the form that most suited his understanding. Literature in a shrink-wrapped or even in a Wikipedia sense was not his concern. We are better readers of Milton and will understand and participate in more of his “genius” if we allow ourselves to think along with him and place ourselves in the context of “Milton is expressing and working through an understanding” in his poetry. (Incidentally, this is the approach to interpreting Milton that a good literary critic such as Harold Bloom takes, and Bloom uses Freud to powerful effect in his acts of interpretation.)

Slices of jargon are as Bion says, “very compressed statements which have a considerable penumbra of associations.” Jargon is useful and convenient short-hand, but it detaches itself from context and from specific meaning. Jargon is especially dangerous when it becomes calcified and appears to be doing the work of thinking or meaning. In the context of this little post I would define a construction as an interpretation that uses a context to build or prove itself, an interpretation that forms using the details and the context of life that might go unnoticed without a well formed construction. A construction “says what it has to say” in the “right” form, and uses context to give itself meaning and relevance. A construction uses unformed context or detail to give form to context and understanding to detail. In fact, details or context are not even noticed without some understanding or some construction.

The above paragraph strikes me now as a not too poor elucidation of what “bootstrapping” consists of, and there are a number of philosophers who have extended that kind of little play on context and interpretation to make some conjectures about how the mind and cognition work. We can discuss more of that later. It might be interesting to work through some of the explicit and implicit aspects of extending this little piece of wisdom, if I might call it that. I see some interesting possibilities. I can, in fact, use this frame to talk about or elucidate Open Source development in general and Ruby and the Ruby community in specific. Open Source software in general eschews the shrink-wrap model and exposes more of its context.

What can be seen in some instances as less convenient and more context dependent (Open Source), actually has the virtue of providing enough context through which to learn and through which to promote literacy. Free and Open Source software as an environment in which literacy is preserved, promoted, and developed is something I’ve been going on about for over twelve years. It seems more and more evident to me every day. Collaboration and conversation are some of the watchwords of the day and they have very relevant uses and meanings in the context of Open Source development. The compression in an Open Source scripting language such as Ruby is very convenient and very productive without necessarily removing context. In fact, in ambiguous or difficult to understand places, context can be brought in, either through unpacking of terse or ambiguous statements, or through well-written comments and thorough test coverage both.
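To make that concrete with a small sketch (the method names and the example here are mine, invented for illustration, not drawn from any particular codebase): the same Ruby construction can be written tersely, and then unpacked with naming, comments, and a small test that restore the context.

```ruby
# A compressed, idiomatic Ruby one-liner: count word frequencies in a string.
def word_counts(text)
  text.downcase.scan(/[a-z']+/).inject(Hash.new(0)) { |h, w| h[w] += 1; h }
end

# The same construction unpacked, with the context written back in
# through naming, comments, and small steps.
def word_counts_unpacked(text)
  normalized = text.downcase          # fold case so "Ruby" and "ruby" agree
  words = normalized.scan(/[a-z']+/)  # extract word-like tokens
  counts = Hash.new(0)                # default count of zero per word
  words.each { |word| counts[word] += 1 }
  counts
end

# A small test doubles as documentation of intent.
sample = "Ruby is fun and Ruby is social"
raise "versions disagree" unless word_counts(sample) == word_counts_unpacked(sample)
puts word_counts(sample).inspect
# => {"ruby"=>2, "is"=>2, "fun"=>1, "and"=>1, "social"=>1}
```

The terse form is convenient shorthand; the unpacked form, with its comments and its test, carries its own context with it.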

Discussion of community is also very relevant not because we Open Sourcerers stand around a campfire singing free software songs (however appealing or appalling the idea), but because all language is social. Free software is social in a true sense. I’ve unpacked that sentence before, so I’ll just leave it there in all its compressed ambiguity for now. I will only say in this paragraph that free software exposes itself as language, as “speech act”, in a way that fosters conversation, literacy, and community. The image of the anti-social and flat coder who speaks or writes an isolated, discrete, machine-only-readable language or jargon is an artifact of a world in which software is shrink-wrapped and cut off from language in general and from the context in which it might find use and life.

This post is, in fact, a program. It is as much a program as a program written in Ruby. That statement either pushes the boundaries of what is thought of as a program or it confirms and participates in the literate community of software developers who both speak and write articulately in their own native languages and in Ruby. I have confidence that many in the Ruby community understand what I mean by that. They understand it because they live the fact that the wall between language as language and language as code is more and more open; they understand it because of the general intellect and literacy that the Ruby culture embodies.

Here’s to more conversation and collaboration.

Integrating R into your Ruby services and applications


R is an excellent tool for statistical analysis and machine learning. It is designed with statistical and data analysis and graphical facility in mind, and there is quite a resource in CRAN, the Comprehensive R Archive Network. Any researcher interested in data analysis will benefit from using R. It is, however, a specialized tool that should ideally be integrated into larger toolsets or more general purpose environments (for example, Ruby or Python).

Recently Randall Thomas of Evil Martini and Engine Yard came to pdxruby to give a talk on machine learning and Ruby. I enjoyed Randall’s tour through the basics of statistical analysis with R and I also enjoyed the fact that even though the talk was billed as a Ruby talk, it was essentially a short talk on R and statistical analysis to a Ruby crowd. The ease with which Randall introduced R to Rubyists speaks volumes in my opinion about the maturing of the open source communities and the ease with which the “artifacts” of open source development are now wielded. That ease, in no small part, is due to the collaboration in the various communities and between them.

PDXRuby is an excellent example of a thriving community. I had heard that Portland had the best and most enthusiastic Open Source user groups, and I was really impressed with the Portland Ruby Group. Randall concurred that the Portland group was impressive. I can see why companies want to move here.

I installed RSRuby and RPy2 along with R 2.10 on CentOS 5.4, which went fairly smoothly. As in many areas of scientific computing, the Python communities are stronger and more advanced than the Rubyists, and in terms of bridging to R, Python is still the leader: RPy2 is the canonical reference implementation, and RSRuby is modeled on it.

I built R 2.10.1 from source with


./configure --enable-R-shlib
make
sudo make install

The configure script for R is mature and will advise you about desirable libraries such as BLAS, and about any missing compilers, etc.

If you are working with Ruby 1.9.1, for now I’d recommend either getting the gem from Alex Gutteridge’s github account or cloning and installing with setup.rb.

The gem install did not work for me even when I set the R_HOME variable properly in my .bashrc before running the gem command, as per the instructions:


export R_HOME=/usr/local/lib/R

The following did work:

First I cloned:


git clone git://github.com/alexgutteridge/rsruby.git

Then:


ruby setup.rb config -- --with-R-dir=$R_HOME
ruby setup.rb setup
sudo ruby setup.rb install

When the install completed, I fired up irb:


irb(main):001:0> require 'rsruby'
=> true
irb(main):002:0> r = RSRuby.instance
=> #<RSRuby:0x... @cache={"initialize"=>#<RObj>, "T"=>true, "TRUE"=>true, "F"=>false, "FALSE"=>false, "parse"=>#<RObj>, "eval"=>#<RObj>, "NA"=>-2147483648, "NaN"=>NaN, "help"=>#<RObj>, "helpfun"=>#<RObj>}>
irb(main):003:0>

Now we are ready. Fortunately, an AI researcher named Peter Lane, who has a blog called Ruby for Scientific Research, has already posted some on working with R from Ruby, and I’ve linked directly to his RSRuby posts.

Below I am going to modify his first script to use the R exp() function and plot a line:

require 'rsruby'

r = RSRuby.instance

# mod of Peter Lane's Ruby for Scientific Research plot example
# construct data to plot, graph of x vs exp(x)
xs = 10.times.collect {|i| i}
ys = xs.collect {|x| r.exp(x)}

r.png("exp_example.png")  # tell R we will create png file
r.plot(:x => xs,
       :y => ys,                            # (x,y) coordinates to plot
       :type => "o",                        # draw a line through points
       :col => "blue",                      # colour the line blue
       :main => "Plot of x against exp(x)", # add title to graph
       :xlab => "x", :ylab => "exp(x)")     # add labels to axes
r.eval_R("dev.off()")          # finish the plotting

And here is the plot:
Plot of x against exp(x)

Randall covered some of the basics of initial data analysis in R and perhaps if there is interest we can discuss some of the issues that Randall brought up. I do remember that there was an interesting discussion about initial methods of analysis and first looks at distributions of data.
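As a coda, and purely as a sketch in plain Ruby (no RSRuby or R required; the method name is mine, invented for illustration), here is the kind of first look at a numeric distribution such a discussion starts from: the summary that R’s summary() prints for a vector, using what I believe is R’s default (type 7) linear interpolation between order statistics.

```ruby
# A plain-Ruby first look at a distribution: min, quartiles, mean, max.
# Roughly mimics what R's summary() reports for a numeric vector.
def five_point_summary(values)
  sorted = values.sort
  n = sorted.length
  quantile = lambda do |q|
    # linear interpolation between order statistics (R's default, type 7)
    pos = q * (n - 1)
    lo = pos.floor
    frac = pos - lo
    lo + 1 < n ? sorted[lo] * (1 - frac) + sorted[lo + 1] * frac : sorted[lo]
  end
  {
    min: sorted.first,
    q1: quantile.call(0.25),
    median: quantile.call(0.5),
    mean: values.sum.to_f / n,
    q3: quantile.call(0.75),
    max: sorted.last
  }
end

p five_point_summary((1..9).to_a)
# min 1, q1 3.0, median 5.0, mean 5.0, q3 7.0, max 9
```

In practice one would of course lean on R itself for anything beyond this, but a sketch like the above is a useful sanity check on what the R functions are reporting.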

Continuous Innovation: Startups and Open Source software

What follows is tangential in the sense of following a tangent:

In my forays down the rabbit holes of general internet inquiry, I am struck by and remember the posts or comments that actually get to a point. Too often a post never delivers on the promise of its title or its lead graph (this reality is as evident in print and official journalism as in blogging).

One advantage of the blogging paradigm is that a comment can occasionally redeem the promise of an unrealized catchy title. In those cases a tangent emerges in which a comment tropes on a theme in a post, and the comment takes the promise of a title or a first graph in a completely different and frankly more interesting direction than the original post.

I do, dear reader, have an example in mind and a point or two to make, so please bear with me while I give you some of the background.

A few weeks ago, Chris Dixon wrote a post (cutely titled Every Time an Engineer joins Google a Startup Dies) about how innovation, which he defines narrowly as the creation of new consumer products, is best fostered by new startups and entrepreneurs. The post was somewhat rambling and was more in the way of a polemic that attempted to say that there is no “reason we should assume venture-backed innovation can’t be dramatically increased.” Huzzah to that. There was not, however, much more of interest in the post, in my opinion. It was the title, nonetheless, that sparked an interesting comment on software engineers, innovation, and startups versus big companies (Google).

Here is the comment, in near-full form; a user named Krave wrote:

The question we’re trying to answer is: does an engineer generate more value (defined as innovation, rather than just money) inside of Google, or out? On the plus side, being outside of Google gives you infinitely more freedom to maneuver, fewer organizational taxes, exposure to a broader range of the stack, and an increased probability that your ideas will at least enter the world rather than being trapped inside of a closed organization. But being inside of Google gives you an insanely good infrastructure to build on, an incredible interchange of ideas and knowledge with world-class people inside of the organization, and the ability to share the things you do launch with a large built-in audience. It’s hard to say either side is an obvious winner here. From the engineer’s perspective, they have a much higher expected value payout at Google, I’d think, and a much easier lifestyle.

Now my interest is piqued. And if it is not immediately evident how much the comment troped on or tweaked the original post, I will simply point to the fact that the commenter makes a value distinction between innovation as innovation and profit. Cdixon makes no such distinction. I like the question as a question so much that I’m going to repeat it in a slightly different form: Does an engineer generate more real world and public innovation inside or outside of Google?

There is no coherent or complete answer to that question, but I find it interesting nonetheless, because it is a question asked in an environment and a culture, the “net”, in which innovation is “continuous.” Software engineers are constantly tweaking and troping upon previous innovation. Innovation in software is now continuous, I argue, in no small part due to the rise of Open Source and Free Software. That may seem obvious to many a software engineer and I would argue that that thesis is central to the success of Google as an enterprise (more on that later.) If I were to put on my sleuth or literary critic hat, I would say that neither Cdixon nor Krave are software engineers. If they were, they would both be seeing much more innovation both inside and and outside of large institutions than they are. Krave as an ex-Googler, at least gets what the “engineer” might find of value inside of Google and some of what they might gain from being outside of such an institution.

But neither of them get what the innovation and incentive landscape looks like from an engineer’s standpoint, nor what the landscape is starting to look like from the standpoints of entrepeneurs, investors, and users. The innovations are so continuous that from an engineer’s viewpoint, whether one is inside or outside of a large institution, innovation is not only possible but in many way’s inevitable. And that continuous innovation is exciting, interesting and motivating to engineers. There are numerous Open Source projects that have not yet found wide commercial application or production use but that display tremendous amounts of innovation.

My response to Krave is as follows:

There is certainly a tradeoff for an individual engineer who contemplates the inside or outside of Google question and there is quite a bit of innovation that takes place at Google that only exists as innovation within the technical culture of the company itself, as Krave pointed out. With an infrastructure and an elite audience, certain kinds of higher order problems that are satisfying to work on and that motivate the proverbial engineer are easily at hand.

However, Google itself and every clever Internet institution of the last twelve years is built upon an existing infrastructure of free and open source software and common knowledge that is in a state of continuous innovation and that thrives both inside and outside of institutions. Much of the engineering and intellectual “infrastructure” has been virtuously externalized as open source “artifacts” that startups and large institutions use as a base upon which to innovate. This underlying reality of what I’d like to call continuous innovation is much bigger than Google or even the sum total of startups.

I’ve consistently chosen start-ups as places to work because they have been environments that have embraced the use of open source “artifacts” to innovate in the sense that Chris Dixon means it. However, innovation in the sense that an engineer qua engineer might mean it, is inspired and is thriving extra and intra institutionally. Institutions here are the universities, the big companies, and the startups as well. Innovation in that sense is its own drive and reward, and it could be said to be using the cultural institutions as much as the institutions are using it to achieve quite extraordinary things. The big companies, IBM and Google, etc. are as much a part of the open source ecosystem as the startups and the universities.

To get back to the thrust of what krave said at the end and what is I think of direct interest to Chris Dixon and other entrepeneurs who are not quite yet hip to continuous innovation: Startups are a particular instance of innovation, with a unique set of problems and opportunities. How efficiently startups manage to leverage existing “artifacts” (i.e. Free and Open Source software and common knowledge) will determine their capacity to innovate successfully.

New commercial products and services are foam upon a sea of continuous innovation that is Open Source software and knowledge in the commons.

The Need to Give: Free Software

by

I wrote the following essay in August of 1998 and it was published in ASCII Culture and the Revenge of Knowledge by Autonomedia. I just reread it because I’ve been thinking about how much the markets and Open Source and Free Software have developed in the last 12 years. I am going to post it first and then follow up with some more recent thoughts in a new post.

Here it is:

In late August, 1998, O’Reilly Publishing sponsored an Open Source Developer Day in downtown San Jose-emerald city as ghost town-in a hotel that conventions only partially fill. In a ballroom-conference room with a raised stage for speakers and a few hundred filled seats, the big figures in open source came together to discuss the “movement.” Eric Raymond was the keynote speaker. His talk focused on the “enterprise market” and Linux. Linux, the phenomenon, has made recent notice in the economic press, as have several other free software projects. Raymond delivered an entertaining tour through some of the more recent achievements of Linux. But it was limited to the entrance of Linux as a serious player in the corporate server and high-end markets. It’s an interesting story, and one that can be measured somewhat. But the Linux phenomenon is much larger-a worldwide spread into private and social use at the personal and PC level as well as small networks.

This vast market is of no financial significance in Silicon Valley at the moment but may prove to be of social and even economic significance globally. There was little discussion by any of the participants of the larger social impact of free software; instead, discussions centered on business models and legal licensing issues. The calm was, however, punctuated by Richard Stallman’s declaration that John Ousterhout was a “parasite” on the free software movement. Ousterhout was on the business models panel, describing his company, Scriptics’, planned support of the open source core of Tcl, the language he nursed to adolescence, and their simultaneous planned development of proprietary closed tools for Tcl as well as closed applications. During an open-mic period, Stallman said it was interesting to see IBM, a representative for which was on the panel, entering in to the free software community by supporting the Apache project while John was planning to make the fruits of the community into closed and in his view, harmful, proprietary products. Some people clapped, others jeered. Without Stallman’s provocation, the “conference” may have ended as a press conference rather than a town meeting for the free software community. Some of the more official attendees were said to be embarrassed by Stallman. Most seemed baffled by the dissension and controversy. Many of the old-timers just groaned, “Oh, there goes Stallman again.” Some were worried that the hackers would be bear the brunt in the press.

A week later a vice president from a software company thinking about going open source talked to me after he got a full report about the conference. “Stallman is a Communist,” he said. “He is not!” I laughed. “He’s not even a Marxist.” The closest Stallman ever came to talking about politics was to mention the U.S. Bill of Rights. Software developers aren’t known for articulated or nuanced views of political economics; many aren’t quite sure how to deal with subjects other than technical capacity or profits-let alone with the possibility that dissension and debate might be good. Stallman’s very presence makes some in the free software communities uncomfortable, like a cousin that shows up at the wrong time, is too loud, and says the things no one dares to say. Foremost amongst the traits that make the denizens of Silicon Valley uncomfortable is Stallman’s contempt for the commercial. He is indeed contemptuous of it, of profit for its own sake-especially when it’s at the expense of the free circulation of ideas and software. This is what many executives, hip though they may be, find so unsettling about him: expressing his views in Silicon Valley is like declaring contempt for gambling in Las Vegas. But his antics make perfect sense in the context and community of free software developers.

It strikes me as a mark of consistency and mental precision that he persists in his strict interpretation of free software. His legally technical discussions of the GNU General Public License are brilliant expositions of what some call “viral” licenses-one that legally binds would be vendors and distributors of free software to keep any modifications in the source code free and open to further modification. The GPL has been very good to Linux: the GNU project spent considerable time and money crafting a clear and legally binding and it has served as a haven for many a free software developer. Linus Torvalds among them was spared the need to craft a license and set a precedent for the open and distributed development of his project.

Stallman’s GNU project has done incalculable good for free software. No one in the communities denies it; but his tenacity makes many of them nervous. And he doesn’t make the “suits” comfortable either-nor does he want to. He doesn’t carry a business card; he carries a “pleasure card,” with his name and what appears to be a truncated personals ad, or a joke, “sharing good books, good food…tender embraces…unusual sense of humor.” He clearly isn’t looking for a job or a deal. Friends perhaps or “community,” but not a deal. He’s not against others making a profit from free software, though; in fact, he encourages people to make profitable businesses and make substantive contributions to free software and free documentation. Like every other “hacker” at that conference I talked to, he is a pragmatic thinker. He knows that no business would come near free software if it did not offer a successful business model for them. He’s just not willing to compromise with those who try to combine open source with closed and proprietary software: if an open source project is cannibalized or “parasitized” by the development of closed products, he argues, it will hinder the free flow of ideas and computing.

John Ousterhout’s plans for Tcl are just plans at the moment. He’s playing with the possibility of supporting the open source development of Tcl while developing proprietary tools on top of it. He acknowledges that there will be some tension between Scriptics’s investors’ demand for profits and the community’s need for substantive free development of Tcl. Veering too far in either direction will preclude contributions from the other: investment and connections or contributions and support. The tension between Ousterhout and Stallman is representative of the conflicting economies and social realities the free software communities face. While investors and capitalists struggle to understand just how free software has become so successful and how they can somehow profit from it, hackers and developers are trying to maintain the integrity of free and open source computing in the face of new attention and interest. Mainstream media interest in open source was piqued by the success of companies that serve and support the free software communities. The growing user base is spending a lot of money on support, commercially supported versions of free software products, and documentation.

Commercial Linux vendors are making significant revenues; C2net’s commercial, strong encryption version of Apache will earn the small company some US $15 million dollars in revenue this year; O’Reilly Publishing will earn over US $30 million on documentation of free software this year. These figures are, of course, dwarfed by the figures that proprietary software companies earn. (these figures 12 years later seem so small!)

Bill Gates, the emblematic persona of commercial software, has a personal fortune that exceeds the combined wealth of the entire bottom forty percent of the United States population; and Microsoft, the synecdoche of success in the software business, is the second wealthiest company in the world behind the mammoth General Electric. As large as Microsoft looms, it would be a mistake to credit them with spurring the development of free software. Free software has it’s own trajectory and its own history; both predate Microsoft. Free software isn’t a creature of necessity, it’s a child of abundance-that is, of the free flow of ideas the academy and in hacker communities, amongst an elite of developers and a fringe of hobbyists and enthusiasts. These communities lie outside the bonds of business as usual and official policy. The fact that this abundance has reached a significant enough mass to support business models has much less to do with presence of clay-footed proprietary monsters than with the superior and more engaging model that free software offers users and developers. Microsoft is, as Eric Raymond says, merely the most successful example of the closed, proprietary model of software development.

But it is the model in general, not Microsoft in particular, that open source and free software offer an alternative to. This alternative is not as profitable; it makes better software. Enough people have begun to recognize this to present a threat to proprietary software wherever the two models compete. For now, it’s hard to imagine anything that might threaten Microsoft, except for something outside of its model. Recently, a number of companies have embraced open source software in various ways and to varying degrees. Does this stem from a sense of abundance or is it an act of desperation? To those within the free software communities, the answer is obvious, the move to free software comes from an abundance. But, for many others, when a large commercial company decides to go open source (for example, Netscape) it’s often seen as a desperate act to shore up marketshare or mindshare while frosting widgets. The rising stars of the free software communities-Cygnus, Red Hat Software, and so on-had the community before they developed a business model. It’s much harder for a company to start with a business model and try to create a community-in no small part because the sense of abundance that marks free software communities is often alien to company logic.

Free software as both a specter and a possibility, nonetheless, has forced companies to consider alternative business models. For example, IBM’s bundling of the Apache webserver allows them to earn revenue from supporting the free product on their systems, not from creating a closed product. IBM, of course, did not open the source code for any of its own proprietary products. It sought to leverage the community and the brand name of Apache, but it will, true to the model, contribute substantively to the open source.

Some of the most visible internet companies rely entirely on free software for their infrastructure and their software stack; a good example is Yahoo. Often, these companies use and even develop open source technologies; but, they stop positioning themselves as technology enterprises per se. Richard Stallman pointed out quite a few years ago that the effects of free and open source computing are more social and educational than merely technological. I believe he meant that free and open source computing shifts emphasis from technology for technology’s sake and focuses it on what the possibilities that computing and networking open up, the development of community and the education of people.

Free software projects develop devoted communities that are explicitly extra-monetary and extra-institutional. Once-obscure theories about a gift economy, first set forth in Essai sur le Don (1920) by the French anthropologist Marcel Mauss, have become more than merely popular metaphors: they now form some of the basic tenets of the free software movement. The extra-market and extra-institutional communities of free software are novel social forms whose nearest analogy are the “phratries” that Mauss describes: phratries are deep bonds developed with those outside of one’s own family or clan; strangers become brothers through gift exchange.. A process that was fundamental to the theory of the gift economy and that is especially apt as an analogy for free software and the nets today is the potlatch, a term that describes the gift-giving ceremonies of the Northwest Coast Tribes of North America. The potlatch is a “system for the exchange of gifts,” a “festival,” and a very conspicuous form of public consumption. The potlatch is also the place of “being satiated”: one feels rich enough to give up hoarding, to give away. A potlatch cannot take place without the sense that one is overrich. It does not emerge from an economics of scarcity. Marshall Sahlins’s Stone Age Economics of 1972 is, more than a study of gift economics, a critique of the economics of scarcity. Scarcity is the “judgement decreed by our economy” and the “axiom of our economics.” Sahlins’s and others’ research has revealed that “subsistence” became a problem for humanity only with the rise of underprivileged classes within the developed markets of industrial and “postindustrial” cultures.

Poverty is, as Sahlins says, an invention of civilization, of urban development. The sentence to a “life of hard labor” is an artifact of industrialism. The mere “subsistence scrabblers” of the past had, hour for hour, calorie for calorie, more “leisure” time than we can imagine: time for ceremony, time for play, time to communicate freely. Sahlins’s presentation of “the original affluent society” should not be confused with the “long boom” recently popularized by Wired and other organizations, the specious celebration of some kind of information or network economy that will miraculously save us from scarcity and failure. His ethnographic descriptions of communal and environmental surplus, and of the public consumption of surplus through gift-giving, are a rebuke of the failures of “progress” to deliver the goods, not a description of some information-age marvel.

The gift-giving amongst an elite of programmers is an example of how collaborative and distributed projects can create wonderful results and forge strong ties within a networked economy; it certainly isn’t an adequate representation of the successes of the information age as a whole. It is an ideal; given its recent achievements, however, it seems reasonable to ask what further developments free software communities might achieve. And, in asking that, we might ask where the limits of open source logic presently lie. At the developers’ conference I opened with, Stallman pointed out an important limitation: we lack good open source documentation projects for free software. This is crucial, because free software develops rapidly: it needs timely and well-crafted documentation. Tim O’Reilly has already copylefted a book on Linux, but it didn’t sell well. Perhaps it is time he tried again. The market is much bigger than it was even a few years ago. But, as O’Reilly points out, writers don’t want to copyleft their books as much as developers want to participate in free software projects.

The authors of these books, and of traditional books, are for the most part individuals who do not work collaboratively with networked groups of writers to produce a text. Perhaps some may be inspired, as many indeed are, to experiment, as O’Reilly said he may be willing to. “Let him experiment!” Stallman intoned after the conference. The phenomenon of free software is probably bigger than any one of us realizes. We can’t really measure it, because all the ways of tracking these kinds of phenomena are economic, and the “small footprint” operating systems, Linux and FreeBSD, are flowing through much more numerous and difficult-to-track lines, lines through which move people just like the ones who built them. There are a few hints. In August, cdrom.com broke the record for the largest FTP download of software in a single day, surpassing the previous record, which had been set by Microsoft for one of its Windows releases. All of cdrom.com’s software is free and open source. Cdrom.com reports that much of the download traffic goes to points outside of the United States and the E.U., to areas where, industry wisdom tells us, intellectual property laws aren’t respected.

What happens when soi-disant “pirates” become users who avidly, even desperately, want to learn, to receive, and even to give? What will be the social and economic effects of free and open source computing? Do the successful collaborative free software projects prefigure other kinds of collaborative projects? Will the hau, the gift spirit of free software spread into other areas of social and intellectual life? I hope so. There is a connection between the explosion in the use of networked computing and the recent rise to prominence of free software. And this connection may foretell new forms of community and free collaboration on scales previously unimagined, but it certainly won’t happen by itself. It will take the concerted efforts of many individual wills and the questioning of many assumptions about the success and quality of the collaborative, the open, and the freely given.

On Migration, Form, and Agility

by

In the process of migrating some of my content to the WordPress format specifically, and to the blogging paradigm more generally, I have had occasion to meditate on how I think of migration, of form, and of agility.

I am testing and proving to myself both my interest in the blogging paradigm and the mutual fit of my interests and my skills as a writer with the blogging format. As a professional writer and open source software developer of some fifteen years, I have a tremendous backlog of extant writing and software, as well as many thoughts and idiomatic expressions that are well formed and at hand. After putting so much time into a craft, the tools are wielded handily; one can play rigorously, and one can engage in working through or thinking through “higher order” questions or interests. One can adapt to the task at hand and collaborate with those one is working with.

Migration always involves some kind of transformation, and rather than treat this opportunity as an occasion to merely port or repurpose content, I want to use it as a way to rethink and articulate anew some of my understanding. Every occasion is a chance to distill and hone accrued understanding and wisdom. By migration I mean the moving of an existing artifact to a new format or paradigm. By using the word “form”, I am simply emphasizing that principles of form and shape are of concern at every level.
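To make the notion of migration-as-transformation concrete, here is a minimal, hypothetical sketch in Ruby (the language this blog circles around): a legacy plain-text post is not merely copied into the new format but transformed along the way, with a URL-friendly slug derived and whitespace normalized. The method name and the target structure are illustrative assumptions on my part, not an actual WordPress API.

```ruby
# Hypothetical sketch: migrating a legacy plain-text post into a
# structured form suitable for a blogging platform. The format and
# names here are illustrative, not a real WordPress interface.
def migrate_post(raw)
  lines = raw.lines.map(&:chomp)
  title = lines.first.to_s.strip
  body  = lines.drop(1).join("\n").strip
  # Transformation, not mere porting: derive a URL-friendly slug
  # from the title by lowercasing and collapsing non-alphanumerics.
  slug = title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
  { title: title, slug: slug, body: body }
end

post = migrate_post("On Migration\n\nEvery occasion is a chance to distill.")
# post[:slug] is "on-migration"; post[:body] is the trimmed text.
```

Even in so small a sketch, the migration is an occasion to rethink the artifact's form rather than to carry it over unchanged.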

Agility or “agile” is a term of art in software development that has some larger usefulness. Agility as a concept encompasses adroitness and flexibility as well as a concern for form and a kind of ease or lightness with which artifacts are wielded. For those not initiated into the use of the word “agile” in the context of software development, it might simply be likened to a pragmatic approach.  I have much more to say about pragmatism but I will defer that exposition until after I follow through a little more on the promise implied in the title of this post.

Now that I have made some small attempt to clarify and to establish something of a context for migration, form, and agility, it is perhaps time to discuss why I’ve combined them and what my purpose is in doing so. I am attempting to define what I think the essence of agility is, and I’m using the occasion of a migration of content, and the adoption of a new format for that content, as an opportunity to do that. But why form? Why the emphasis on form?

Form is the shape or structure that an artifact or idea takes. Is the form, structure, and language of an artifact a mere bridge to an underlying content? Is the form merely a means of presentation? Or is form inseparable from content? Pragmatically, I would articulate an answer to this question from the other direction. A thought or an idea is inseparable from the form and context in which it is expressed. And the form-content is always collaborative, whether one is aware of the collaborative basis of all artifacts or not.

The essence of agile methodology, in my opinion, is explicitly collaborative and rapid evolution or development. How collaboration is fostered and maintained is another matter. Specific forms for collaboration are necessary, and those forms themselves are subject to agile adaptation. For example, one form of agile development that Phosphene uses, a kind of distributed agile development, leans heavily on distributed forms of, and tools for, communication and collaboration. In this case an emphasis is placed upon the ability to communicate through the written word, through video conferencing and conversation, and even through email.

Phosphene, for example, has successfully used a distributed agile methodology on some major projects, tapping extremely talented individuals who not only had the necessary technical skills but who were also skilled at rapid and iterative written communication. In our experience, in fact, a distributed agile methodology scales well and performs as well as a team in which every member sits in the same room. Distributed agile of course has its own necessary adaptations and forms, and like any agile methodology it has to be tested, proved, and iterated over quickly.

An explicit meditation on, or awareness of, form is another essence of agile methodology. Forms must be adaptable if they are to foster rapid and reliable collaborative development. And a necessary, if seemingly obvious, part of adaptability is the awareness of form as form, of form as a choice. It sounds obvious that an awareness of form is essential to the development of any art or science, but the realization that form can be an explicit choice is something that still needs to be emphasized. If form is not explicitly recognized as something that can and must adapt or prove itself, then form is fixed and simply not adaptable. If a business person wants to know only one thing about why an agile methodology is a welcome change, and not just another fad or scrap of jargon, it is this: the awareness of the conventions of software development and process as forms is exactly what makes the difference. If we are not aware of conventions as form, and if certain conventions seem simply to be “the way things are,” then there is no awareness of, nor possibility of, adaptation or development.

It is fitting, therefore, to bring up the subject of form in the context of migration, because any developed or working artifact can seem dauntingly difficult to change, migrate, or adapt. It would help, of course, if the original or working artifact had been designed with adaptation, and with an explicit awareness of form and convention, in mind. In the wild, however, as any working developer or business person knows, the majority of existing artifacts have not been designed with enough awareness of form or of rapid adaptation in mind. In those cases, a specific agile methodology for migration needs to be developed, tested, and proved. Such specific methodologies have in fact been developed at Phosphene, and I will write about them in a following post.

If I have demonstrated or managed to think through the interdependence of migration, form, and agility in this post, then I have established some basis on which to further develop more thoughts and iterations.