Hearing aids: available techniques report
On Wednesday, a ton of questions came up.
Background reading: "Frequency-Lowering Devices for Managing High-Frequency Hearing Loss: A Review" by Andrea Simpson, published in Volume 13, Number 2 of "Trends in Amplification" in 2009. Let me try to give a summary of the options, which all involve taking high frequencies (which I can't hear) and moving or smooshing them into low frequencies (which I can).
- Vocoding - I'll let Wikipedia explain it. Makes people sound like robots (or Daft Punk).
- Slow playback - What it sounds like. Makes everyone sound like goofy baritone cartoon characters. Also takes longer to play back than the actual speech sound, so you end up lagging more and more as the conversation goes on.
- Transposition - take the high frequency spectrum, shift it down, and copy-paste it on top of the low-frequency spectrum. This is the equivalent of playing piano while shifting your right hand two octaves down so it literally overlaps the left hand. As in the piano analogy, the trouble here is that your high-frequency info ends up slamming down on top of the low-frequency info.
- Nonlinear frequency compression - take the normal speech spectrum and squeeeeeeze it into the lower portion I can hear - nothing overlaps, the musical notes just get closer together. The "nonlinear" part comes from squeezing the high frequencies more than the low ones, so the low frequencies get less distorted. Problem: can you imagine how awful music sounds when it's squeezed like this?
- Frequency shifting - just move everything down. Makes everyone sound like Darth Vader.
These are all extreme oversimplifications, of course. The other bit I noticed is that there wasn't much aural rehabilitation done in the experiments covered by the meta-study. Most people simply aren't willing to put up with the cognitive discomfort and time needed to significantly retrain their brains to hear; they want to understand speech now because they're having difficulty and falling behind.
But I'm an odd case. I understand speech and have coping mechanisms sufficient to let me keep up with what I need to keep up with, unamplified. I have a long time horizon; I want to understand speech better 5 years from now, and am willing to pass through extreme amounts of masochism between now and then. I have a high tolerance for cognitive discomfort and like stretching my brain into unfamiliar shapes (see: graduate school, foreign languages, etc).
So I am completely fine with the idea that my amplified speech comprehension might drop for years before my brain retrains enough to climb back up -- it's the equivalent of learning dvorak (or steno) for typing when you already know qwerty. Yes, you're slower at first... but theoretically, once you climb the learning curve, you can blow past your prior performance. I want to see if the same might happen here.
So they said all right, maybe we should look at this Spectral iQ technology - let's contact the manufacturer and find out more details. From what I gather, this thing...
- is constantly working in the background to detect high-frequency speech sounds -- for instance, an "sh"
- when, and only when, it detects those sounds, it plays a lower-frequency sound that is not a transposition -- so not a low-frequency version of "sh" that occludes speech sounds within my hearing range, but rather a made-up sound that interferes less with the speech sounds I can hear. I would then train my brain to associate that made-up sound with "sh."
- aside from this, the original sound signal remains largely untouched; there's far, far less narrowing of bandwidth than with other techniques.
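My mental model of the detect-and-cue idea above, as a toy sketch. Every name, threshold, and cue choice here is my guess at the general shape of the technique, not the manufacturer's actual signal processing -- in particular, the real device plays an engineered cue sound, not a plain sine tone.

```python
import numpy as np

def add_lowband_cue(frames, sample_rate=16000, hf_cutoff=4000.0,
                    cue_freq=2500.0, threshold=0.3):
    """Toy detect-and-cue processing: for each audio frame, estimate
    the fraction of spectral energy above hf_cutoff. If it dominates
    (an "sh"-like frame), mix in a synthetic lower-frequency cue tone.
    Otherwise pass the frame through untouched -- no transposition,
    no bandwidth squeezing of the original signal."""
    out = []
    n = len(frames[0])
    t = np.arange(n) / sample_rate
    cue = 0.1 * np.sin(2 * np.pi * cue_freq * t)  # stand-in cue sound
    for frame in frames:
        spec = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(n, 1 / sample_rate)
        hf_ratio = spec[freqs >= hf_cutoff].sum() / (spec.sum() + 1e-12)
        out.append(frame + cue if hf_ratio > threshold else frame.copy())
    return out
```

The key contrast with transposition: the cue fires only when a high-frequency sound is actually present, and the rest of the time the signal is left alone -- which is why the bandwidth narrowing is so much smaller than with the other techniques.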
The concern here, I think, is that my hearing loss remains severe enough at low enough frequencies that this technology might not work. We'll check in again in two weeks and see where we stand, if we've heard back from the manufacturer yet, and so forth.