Reply to Cory
While reading Cory's post about how much of software development is assumed to be visual (drawing UML diagrams, sketching on whiteboards, etc.), I found myself wanting to respond. I can't comment on his blog without a login I'm not sure how to get, so I'm writing a longer blog entry as a response instead. (For those who don't already know, I'm a deaf* hacker.)
First of all, I like the way Cory and his professor handled the question of how to assess his understanding of UML diagrams (a visual convention for describing program structure, and a required topic in a class Cory is taking). He still has to demonstrate understanding of the concepts; it's just that the input and output methods for that understanding are different.
...even though I may not be drawing diagrams, that doesn’t mean that I’m not responsible for knowing how each diagram is used and how to describe one.
Reading Cory's description of how he describes UML diagrams in text reminded me of the time in elementary school when my music class went through the instruments of the orchestra; we listened to sound clips of different instruments and had to write about each one. Since I can't hear high frequencies, my reports went something like this: "The tuba sounds like this, the bassoon sounds like that, the piccolo has a fascinating history and an intricate key mechanism that I will now diagram..."
I don’t believe that a fundamental property of the software development cycle is that it is visual. I think we make it that way because most people think it is more convenient...
I agree. And I don't believe that a fundamental property of high-bandwidth conversation is that it's auditory, either. I know many people who, at the present moment, find phone conversations to be the easiest way for them to communicate with others long-distance. But that's different from saying phone conversations are the most effective way of doing so; depending on your goals, they may not be (for instance, phone conversations currently - usually - don't get logged for posterity, let alone logged in a way that can be automatically translated). Similarly, there are undoubtedly highly effective non-visual ways of doing design. As someone who's highly visual myself, I don't know what they are, but I would love to learn. (One of the reasons I enjoyed reading Cory's posts is that my hearing loss forces me to rely so heavily on visual input that I often forget to run thought experiments suspending the assumption that I can rely on it.)
I'd actually like to learn more about the design practice of looking at edge-case users (I'm not sure if there's a better term for this). Maybe posts like Cory's can offer some insight into the advantages of non-visual design systems, or the disadvantages of visual design systems, in a way that makes both of them better for everyone (not just the visually impaired). I look a lot at the benefits of alternatives to auditory-by-default systems because I have to, and sometimes the adjustments I make end up being useful to other people.
Matt pointed out in his response to Cory's post that the majority of the software development world doesn't communicate through pictures and diagrams either:
...the overwhelming majority of our communication and collaboration regarding software developing is written/verbal, not visual. That is, we’re not shipping pictures back-and-forth 24/7—we’re chatting on mailing lists, IRC, and blogs to get things done.
However, I do wonder whether the dominance of text-based communication in software development will continue as tools like inkboard (collaborative Inkscape) keep being developed, or whether an alternative, more graphical/auditory discourse will start happening (secondary at first, possibly only within a subculture). The parallel for me is podcasting and vlogging. They haven't replaced forums and mailing lists in general online discussions yet, but they are definitely a presence, and I'm increasingly disadvantaged by having to ignore them.
Well, mostly ignore. Strictly speaking, I do have the advantage that I can catch some audio, and I have friends who'll sometimes take the time to write up a summary of a video for me, or sit next to me while a podcast is running and re-mouth the words so I can lipread them. But for most practical purposes, that's like saying that publishing documentation in Tamil should be entirely sufficient for English speakers because Google Translate exists. It takes a lot of extra conscious effort, the availability of specific tools and helpers, and a lot of extra time, and much is still lost in translation, so it's usually not worth the investment to even try.
I find it fascinating to see how other people adjust and hack inclusion into a world that often doesn't assume them in its default case. At least with open source I get to hack on things - and with things - that give me the freedom to shape them into what I need (yay visual system beeps!) but the burden's still on me to do the shaping and the constant reminding of others that I need accessibility to the things they'd like me to contribute to (for instance, project meetings by phone virtually guarantee my silence). At least the burden here comes with the tools I need in order to assume it. (Mostly. We could do better, but that's a longer post.) And I'm glad projects like Sugar try to make themselves more-hackable-by-default.
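To make "visual system beeps" a bit more concrete, here's a minimal sketch of the kind of tiny substitution I mean: swapping an audible alert for a visible one. It assumes a Linux desktop with libnotify's notify-send command installed; the function name and messages are placeholders for illustration, not anything from a particular project.

```python
# A rough sketch: make an alert visible instead of audible.
# Assumes libnotify's `notify-send` is on the PATH (common on Linux desktops);
# falls back to printing so the alert never disappears into silence.
import shutil
import subprocess


def visual_alert(message: str) -> None:
    """Pop up a desktop notification instead of ringing the audio bell."""
    if shutil.which("notify-send") is not None:
        subprocess.run(["notify-send", "--urgency=critical", "Alert", message])
    else:
        # No notification daemon available; leave a visible trace anyway.
        print(f"[ALERT] {message}")


if __name__ == "__main__":
    visual_alert("Long-running build finished")
```

The point isn't this particular script; it's that open source puts that kind of substitution within my reach instead of leaving audio-only as a fixed default.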
*re: "deaf" - I'm trying to get used to being able to use this word as well, though I can hear some sounds (my hearing loss is classified as "severe") and grew up mainstreamed in the hearing world (with lots of hacks). It's a cultural adjustment that I'm consciously learning (with tremendous latency and deep discomfort) to make.