...but sometimes you can merge your interests and produce surprising collisions of knowledge, so why not?

Aside from engineering education, my first interest is education for the deaf: how do you teach someone to produce sounds they can't hear? You use a combination of methods to translate the imperceptible (sound) into the perceptible: lipreading; touch (placing your hand on cheeks, lips, and throat to feel where the vibrations and resonance happen, or over the mouth to feel how strongly and where the exhale comes out); diagrams and explanations of how the teeth and tongue move -- things like that.

This may be where my interest in translation between domains stems from. Can't hear doorbells? Set up a flashing door-signal. (I should do that here, actually.) Can't taste the difference between two wines? Try them with food to see if they react differently, or read descriptions by other people of the flavor notes and try to detect them on your own tongue.

The second is language acquisition, a fascinating combination of rule systems (grammar) and exceptions (irregular verbs), memorization (vocabulary) and context (colloquialisms), perception-tuning and cultural acquisition that ends up changing the way you think. Yes, I know the Sapir-Whorf hypothesis does not apply to all cognition, but still! Language learning is also (sadly) one of the few topics in andragogy that receives significant popular and commercial interest, so there's a lot of nifty and experimental stuff to play with.

Which brings me to the topic of my graduate studies. As part of my engineering education PhD, I've got to take a number of advanced graduate engineering classes that need to form some sort of coherent sequence. Beyond that, they can be anything I like. Therefore, I am considering Speech processing by computer (this spring), Psychophysics (next fall), and Embedded systems (next spring).

Okay, I may need to do some remedial DSP and/or Communications, as I only passed those as an undergrad through the great grace and mercy of Diana Dabby and Raymond Yim. (I'll probably see if I can do the same thing I did for ECS and TA the undergrad versions of those classes at some point.) I'd also like to grab a copy of the Introduction to computer communication networks syllabus because my ignorance in that area needs to be patched. But in any case, you may notice that my (hopeful) planned engineering classes start in the spring. What happens in the meantime?

German 601: First Course to Establish Reading Knowledge, and Introduction to Phonetics (the same course as "Elements of Phonetics," but offered through the Speech, Language, and Hearing Sciences department instead of Linguistics).

Essentially, what I'm trying to do is decouple language learning from auditory perception, and play with both separately. I have a devil of a time starting a language by speaking and listening, which is exactly the opposite of what most folks do. I need to learn the rules, I need to pre-load some vocabulary, I need to read a lot to pre-load my mental Markov model with reasonable predictions. If I don't, I hit frustration-blockades; for instance, aside from the dreaded "ch" sound, I have a tough time with the German "r," which uses the back of your tongue and throat in a way that is impossible to lipread.
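That "mental Markov model" isn't just a figure of speech; you can sketch the idea literally as a toy first-order (bigram) model that, given enough reading, predicts the likely next word. A minimal illustration in Python -- the tiny corpus and the helper names here are made up for the example, not anything from a real language course:

```python
from collections import defaultdict, Counter

def build_bigram_model(text):
    """Count which word follows which: a first-order Markov model."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent next word, or None if the word is unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy corpus; the real "pre-loading" is a lot of reading.
corpus = "ich lese ein buch und ich lese eine zeitung"
model = build_bigram_model(corpus)
print(predict_next(model, "ich"))  # prints "lese"
```

Pre-loading vocabulary and grammar is, in this analogy, just feeding the model enough text that its predictions stop being garbage -- which is exactly what makes lipreading ambiguous sounds tractable.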

I suspect the difficulty comes from my prior work with the English "r," which was the bane of my speech therapists from kindergarten onwards. The American English "r" can be acceptably produced in a variety of ways -- you can raise the back of your tongue in a manner similar to the German "r," but you can also raise the tip of your tongue and get something similar enough for American English speakers to register it as a valid "r" sound. The tip-of-your-tongue method was far easier for me to lipread, and therefore to learn. Guess what I ended up with as my workaround for pronouncing all "r" sounds? However, German won't accept that as a valid "r" sound; you have to choke yourself with your own tonsils every time.

Don't even get me started about Mandarin phonemes. At least I could feel the different ways the air rushed through my teeth for zh/ch/sh/r after dozens of hours locking pinyin into muscle memory. Anyway.

So here's the plan. I "develop the skills necessary to read an intermediate-level German text with the use of a dictionary," then find whatever German speech therapy students use for textbooks, and go to town with that. In the meantime, speech processing and psychophysics give me more (computer-based) tools for playing with these ideas -- since I can't grasp auditory input like most folks, I need alternative frameworks and mechanisms (and technologies) for translating it into things I can understand: math, code, visuals, writing, diagrams. Embedded systems lets me step that into portable hardware implementations (and I like interesting hardware constraints anyhow). It all mashes together in a glorious mess of styles and modes and disciplines of learning: I get to jump between linguistics and audiology, mathematical theory and the smell of smoking solder, in a thread that runs above -- and uses, for introspection -- the core of pedagogy and research my degree is built on.
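As one concrete instance of that sound-to-visual translation, the most basic tool in the speech-processing kit is the spectrogram: slice audio into overlapping frames, take each frame's FFT magnitude, and sound becomes a picture with time on one axis and frequency on the other. A minimal sketch with numpy, using a synthetic tone instead of real speech -- the frame size, hop, and sample rate here are illustrative choices, not anything from a particular course:

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Overlapping Hann-windowed frames -> FFT magnitudes.
    Rows are time steps, columns are frequency bins."""
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size] * np.hanning(frame_size)
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (num_frames, frame_size // 2 + 1)

# Synthetic "speech": one second of a 440 Hz tone at an 8 kHz sample rate.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = spectrogram(tone)

# The loudest frequency bin should sit near 440 Hz
# (bin spacing is sr / frame_size = 31.25 Hz).
peak_bin = spec[0].argmax()
print(peak_bin * sr / 256)  # prints 437.5, the bin center nearest 440 Hz
```

Feed it a recording of a German "r" versus my tip-of-the-tongue English one, and the difference that my ears can't catch shows up as a visibly different energy pattern -- which is the whole point of reaching for these tools.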

Oh, yes -- and there's a ton of potential for open source stuff here. So many speech and language libraries, so many accessibility problems to solve, so many... mmm.

Another way of thinking about it: if one of the things I believe about engineering education is that problem-solving and technology are tools that you can have fun with, that you can and should blend the things you're passionate about into one glorious multidisciplinary mess -- then I will serve my future students far better if I drive hard at the stuff I find fascinating instead of worrying that I should take the meat-and-potatoes basics like Computer Architecture. Tech is something I can pick up and discard at will; if I need to know it, I will learn it, and if I don't... then I'll be hard-pressed to fake an interest in it.

Right then. Should talk with my advisor about registration Real Soon Now.