RAT: Wherein Mel finally defines, describes, and backs up what the heck the praxis of radical transparency is, sort of.
One of many posts on my Readiness Assessment. As a reminder of the ground rules, this is a solo assessment, so while I’m allowed to think out loud on my blog, I can’t ask for or get (intellectual) help. Cookies and emotional support are, however, welcome.
I am now up to 28 pages without figures (probably in the mid-thirties with them), and argh, my cross-disciplinary citations are weak. I don't think I'll have time to shore them up before submitting my RAT, but it's something to talk with Robin about afterwards.
The more time I spend in academia, the more signs I see that the scholarly work of research and teaching is a domain I could really build a home in and settle into. I have this sort of feeling. And the more time I spend in academia, the more signs I see that my practice is the practice of open communities. I need to freakin' articulate WHAT THAT IS, though. So.
Radical transparency refers to the cultural practices used by healthy open communities (Free/Libre and open source software, open content, and open hardware projects) to expose their work in as close to realtime as possible and in a way that makes it possible for others to freely and non-destructively experiment with it.
I can work with that. It's not perfect, but... I can work with it. I need to unpack that phrase. I think I'm going to have to cite Raymond. Also, good lord a lot of this is philosophical and not empirical -- I'm so thankful for the dissertations of all those folks from the last blog post.
Okay. Breaking it down.
...cultural practices...
Radical transparency is a praxis (that's a fancy word, by the way, that means specific scholarly things I should unpack more but don't have the time or energy to read about right now -- it's sort of like "action-practice, practice-that-makes-changes"; go read the Wikipedia page) that has been adopted across various domains (for instance, the production of encyclopedias, operating systems, web browsers, and so forth) by communities of practice. A community of practice, a notion first articulated by Jean Lave and Etienne Wenger (1991), consists of a community of people who share a common practice within a domain of knowledge (Wenger, McDermott, & Snyder, 2002).
Exploring all the possible permutations for domains, communities, and practices gets complicated quickly. Multiple communities of practice may exist for a given domain; BSD, Linux, OSX, and Windows are all situated within the domain of operating systems development, but the former two are radically transparent whereas the latter two are not. Not all communities that utilize radical transparency are successful, as Mako Hill's research (2012) on failed online encyclopedia projects shows (and plenty of communities without radical transparency do just fine, as Apple, Inc. demonstrates).
Furthermore, the use of radical transparency as a praxis does not necessarily mean the presence of a community; Krishnamurthy (2002) looked at the 100 most active mature projects on Sourceforge, a popular open source project host, and found the most common team size was... 1. To further complicate matters, not all projects that claim to be open actually implement radical transparency; Phil Marsosudiro coined the term fauxpen to refer to projects that talk the talk but fail to walk the walk (Searls, 2009).
...used by healthy open communities...
Before we run into a quandary of boundary articulation, I'll offer up a phrase specifically created to make it easier for us to discuss this phenomenon. Since the phrase radically transparent community of practice is somewhat awkward, we'll use the term open community as a shorthand to refer to any community that utilizes the praxis of radical transparency as part of their practice, regardless of how they self-identify. For our purposes, if it walks like a duck and swims like a duck and quacks like a duck, we're going to call it a duck (Heim, 2007).
Successful open communities are fascinating ducks. They're actually more like platypi – strange hybrids that make researcher after researcher exclaim what the heck is that and how is it alive? What are the economic dynamics of these communities (Lerner & Tirole, 2002; Lakhani & Wolf, 2005; Lerner, Pathak, & Tirole, 2006)? Why would people join (von Krogh, Spaeth, & Lakhani, 2003), work in (Hars & Ou, 2002), and help colleagues in such a group (Lakhani & von Hippel, 2003)? How can such an unorthodox arrangement possibly be successful (Bonaccorsi & Rossi, 2003)?
Against that sort of backdrop, one of the biggest early triumphs of open communities has simply been to prove the possibility of their existence to a sometimes-skeptical academic and corporate audience. Every successful project stands as a living, thriving example of what the dynamics and output of a radically transparent community in their domain could look like.
...to expose their work in as close to realtime as possible...
So what do these communities look like – or more to the point, what exactly does this praxis of radical transparency entail? One key cultural norm can be summed up as if it ain't public, it don't count. Coleman's (2010) ethnography of hacker conferences described the "validity and importance of such public discourse," noting that even small conversations at in-person gatherings would consistently reference and center around publicly-archived conversations. A related pattern, release early, release often, ensures the frequent presence of a contributor's voice in these highly-valued public conversations; realtime exposure thus carries a clear benefit to the individual. Linus's Law (Raymond, 1999) is sometimes cited as a benefit of this behavior to the larger community: enough eyeballs make all bugs shallow – an argument fascinatingly similar to engineering education conversations about the importance of cross-disciplinary collaboration (Adams, Mann, Forin, & Jordan, 2009; Borrego & Newswander, 2008), both in terms of the rhetoric employed and the amount of the philosophical argument that is actually backed up by empirical evidence (alas, not enough).
The desire to get one's work in front of more eyeballs motivates radical transparency practitioners to target each of their contributions towards the open community where it will benefit the most people, a pattern called pushing to upstream that inadvertently benefits component reuse and quality control (Aberdour, 2007). Ellis, Hislop, Chua, and Dziallas (2011) note that making something publicly and easily available does not necessarily imply widely advertising it, a distinction that preserves a reasonable signal-to-noise ratio. Multiple empirical studies by Elliott (2007) confirmed that when contributions are "pushed upstream," the contributions themselves are used as "stigmergic cues to negotiate contributions," and the "upstreams" become workspaces that serve as boundary objects, removing barriers to participation.
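For the non-hackers reading along: the mechanics of "pushing to upstream" are easiest to see in a version-control workflow. This is just my illustrative sketch, not anything from the studies cited above – the repository and file names are hypothetical, and it assumes git is installed:

```shell
#!/bin/sh
# Sketch of a minimal "push to upstream" cycle with git.
# "upstream.git" stands in for a shared community repository.
set -e
work=$(mktemp -d)
cd "$work"

# The shared upstream that everyone contributes to.
git init -q --bare upstream.git

# A contributor clones it and works locally...
git clone -q upstream.git contributor
cd contributor
git config user.email "contributor@example.com"
git config user.name "A Contributor"
echo "a small reusable improvement" > fix.txt
git add fix.txt
git commit -q -m "Add a small reusable improvement"

# ...then pushes the change back upstream, where it becomes a
# publicly visible cue that others can build on.
git push -q origin HEAD
```

Once the commit lands upstream, it is the shared artifact itself – not a private email or meeting – that carries the coordination signal, which is the "stigmergic cue" idea in miniature.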
Participation is often solo and asynchronous, as Howison's ethnographies of various open source projects (2008) revealed. However, it is done "in company"; Howison uses the phrase "Alone Together" as his title. Efimova's ethnography of blogging knowledge workers (2009) explored how these asynchronous conversations became an indirectly enabling network that supported the sense-making of participants who were able to embrace the "transparency and fragmentation of [their] work" that the process entailed.
The stigmergic nature of collaboration means that, at any given time, a contributor is likely to be creating a new whole by adding one small part to the top of an existing stack of components, leading to Robinson's Maxim of begin with the finishing touches (Karlie Robinson, personal communication, 2009).
This pattern is also a sign of respect for oneself and other contributors. As Eric Raymond's "How to Become a Hacker" manifesto puts it, "No problem should ever have to be solved twice... Creative brains are a valuable, limited resource. They shouldn't be wasted on re-inventing the wheel when there are so many fascinating new problems waiting out there... it's almost a moral duty for you to share information, solve problems and then give the solutions away just so other hackers can solve new problems instead of having to perpetually re-address old ones." (Raymond, 2012) Again, this stance of default to open (another common phrasing) is largely motivated by philosophy, with practical consequences following from this philosophical stance.
...makes it possible for others to freely... experiment with it...
The philosophical roots of radical transparency can be traced back to the Free Software Definition, first written in 1986 by Richard Stallman, who continues to maintain it today (2012). As the title implies, the original document is about software, which is composed of source code and does a user's computing. I have adapted the text below to refer to artifacts that serve a purpose for the user and are made of components of some sort; here are the Four Freedoms in their generalized form:
The freedom to use the artifacts for any purpose.
The freedom to study how the artifacts work, and change them so they serve your purposes as you wish. Access to the components that comprise the artifacts is a precondition for this.
The freedom to redistribute copies so you can help your neighbor.
The freedom to distribute copies of your modified version to others. By doing this you can give the whole community a chance to benefit from your changes. Access to the components that comprise the artifact is a precondition for this.
Huh, I think I'm almost done. Last part: "nondestructively."
I think I want to put something in here about how forking and commit access and peer review and version control lower the cost of a "mistake," and cheaper mistakes mean we're less worried about who comes in the door. My desire to nap for half an hour before heading to class outweighs the desire to write that at the moment, though. BEDTIME. I can't break my string of no-allnighters from 2006, can I? (I mean, I can, but I don't want to.)
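Since I can't bring myself to write the prose version before my nap, here's the point in sketch form instead. This is my own hedged illustration (the repo and file names are made up), assuming git as the version control system:

```shell
#!/bin/sh
# Sketch: a newcomer's "mistake" costs almost nothing to undo,
# because the known-good state is preserved in history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "newcomer@example.com"
git config user.name "A Newcomer"

echo "working version" > notes.txt
git add notes.txt
git commit -q -m "Known-good state"

# Experiment on a branch; the main line stays untouched.
git checkout -q -b experiment
echo "oops, this change was wrong" > notes.txt
git commit -q -am "Risky experiment"

# Undoing the mistake is one command: switch back to the good state.
git checkout -q -
```

Because a wrong turn is this cheap to discard, communities can afford to hand out commit access liberally and rely on peer review and reverts after the fact – which is exactly why they worry less about who comes in the door.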