A Novel Approach to CI Adjustment and Rehabilitation

By Dr Chris Satchwell

Unlike the few CI users who report that they can understand speech soon after switch-on, I had to struggle to learn to use mine, but managed to do so fairly quickly through a combination of luck and half an idea of the type of training likely to work for me. Reflecting on my experience, I would like to share some draft principles that facilitated my CI learning and positive outcome, as well as a framework for understanding them. These are presented in the hope that some or all of them might help future CI patients, and perhaps offer a base from which others can further develop CI training methods. I recognise that outcomes for CI users vary and that a number of factors play a role.

Relevant parts of my background include engineering, neural computing and time series analysis. I am a self-funded CI user because, despite my severe bilateral hearing loss, my audiological results did not meet NICE criteria. I acquired my hearing loss as an adult and it has been progressive in nature.

My conceptual framework for speech recognition involves four layers: inputs from a CI’s electrodes, phonemes, words and sentences. There are three relationships between these layers: CI-electrode/phoneme, phoneme/word and word/sentence. For brevity, the first letters of the four layers (EPWS) are re-arranged and the speech-recognition model is referred to as “PEWS”. Working downwards: sentence recognition typically involves knowledge of context and rules of grammar; word recognition involves a phonetic dictionary and a means of accessing it; and, finally, phoneme recognition requires the brain to form non-linear relationships between the CI’s electrodes and a language’s phonemes. This final relationship is one that every CI user needs to learn, but the approach may need to vary according to the effectiveness of a person’s phoneme/word and word/sentence relationships.
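For readers who think in code, the four layers and three relationships might be sketched as follows. This is a toy illustration only: the electrode patterns, phoneme symbols and dictionary entries are invented placeholders, not real CI data.

```python
# Illustrative sketch of the PEWS layered model: electrodes -> phonemes
# -> words -> sentences, with one mapping between each pair of adjacent
# layers. All data here are toy placeholders, not real CI signals.

# Layer 1 -> 2: the CI-electrode/phoneme relationship the brain must learn.
electrodes_to_phoneme = {
    (3, 7): "k",   # an invented pattern of active electrodes for /k/
    (2, 5): "ae",
    (1, 8): "t",
}

# Layer 2 -> 3: the phoneme/word relationship (a phonetic dictionary).
phonemes_to_word = {("k", "ae", "t"): "cat"}

# Layer 3 -> 4: the word/sentence relationship (context and grammar,
# reduced here to trivial joining).
def words_to_sentence(words):
    return " ".join(words) + "."

def recognise(electrode_frames):
    """Run one toy utterance bottom-up through the four layers."""
    phonemes = tuple(electrodes_to_phoneme[f] for f in electrode_frames)
    word = phonemes_to_word[phonemes]
    return words_to_sentence([word])

print(recognise([(3, 7), (2, 5), (1, 8)]))  # -> cat.
```

The point of the sketch is structural: each mapping can be strong or weak independently of the others, which is what the assessment in principle (i) below is meant to establish.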

My strategy for CI learning derives from mathematical models of the brain. These suggest that the brain learns by correcting any difference between the sound that needs to be heard and the sound actually heard, known as an error signal. They also suggest that the quantity of alert training matters more than the time over which it takes place, which offers a theoretical route to speedy CI learning. The error signal needs to be as clear as possible, which ideally involves foreknowledge of the phonemes (so they can be listened for) and very clear sound. My implementation of this idea was to use streaming devices to send TV sound directly to my aid and CI, and to watch documentary programs. In these programs, well-packaged subtitles preceding speech were read and their words deconstructed into phonemes that could be listened for, delivering a contemporaneous error signal. The second requirement, concerning the quantity of alert training, was met by the availability of a wide choice of programs, so that at least one would be interesting.
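The error-correction idea can be made concrete with a minimal delta-rule learner, a standard toy model from neural computing. The target mapping y = 2x is an arbitrary stand-in for "the sound that needs to be heard"; nothing here models the auditory system itself.

```python
import random

random.seed(0)

# Minimal delta-rule (error-correction) learner. The "correct" mapping
# y = 2*x is an arbitrary toy target; the learner only ever sees the
# error signal: the difference between the target and its own output.
def train(n_repetitions, lr=0.1):
    w = 0.0
    for _ in range(n_repetitions):
        x = random.uniform(0.5, 1.5)
        error = 2.0 * x - w * x   # sound needed minus sound "heard"
        w += lr * error * x       # correct in proportion to the error
    return w

# The residual error depends on the number of training repetitions,
# not on the calendar time over which they are spread.
print(abs(2.0 - train(10)))    # few repetitions: large residual error
print(abs(2.0 - train(500)))   # many repetitions: tiny residual error
```

In this toy setting the residual error shrinks with every clear, error-driven repetition, which is the sense in which quantity of alert training, rather than elapsed time, drives the learning.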

A contrary observation involves spoken words without any foreknowledge of them. In my case, my pre-CI scores for monosyllable word recognition tests were zero, but on sentence recognition approached fifty percent. These results are not untypical, and the reason is that hard-of-hearing speech recognition relies on interpolating what you cannot hear from whatever you can. The need to interpolate means that understanding of words and their phonemes will lag behind their sounds, so any error signal based on spoken words involves trying to remember a long-past phoneme to associate with one you can (at last) understand. I refer to this as a feat-of-memory error signal, which has implications for other forms of CI training. Subjective confirmation of the problem came from TV news broadcasts: their subtitles lag speech and offered me no training value.
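One way to see why a lagged target hurts is with a toy delta-rule learner (a standard error-correction model from neural computing): pairing each input with a target that belongs to an earlier input leaves a persistent residual error, whereas a contemporaneous target drives the error towards zero. Everything below is an invented illustration under toy assumptions, not a model of hearing.

```python
import random

random.seed(1)

# Toy delta-rule learner whose target can arrive `lag` inputs late, so
# each update pairs the current input with a stale, long-past target.
# The mapping y = 2*x and all numbers are invented for illustration.
def train_with_lag(n_reps, lag, lr=0.1):
    w = 0.0
    xs = [random.uniform(0.5, 1.5) for _ in range(n_reps)]
    for t, x in enumerate(xs):
        stale_target = 2.0 * xs[max(0, t - lag)]  # belongs to an old input
        w += lr * (stale_target - w * x) * x      # error uses stale target
    return w

def avg_residual(lag, runs=20):
    return sum(abs(2.0 - train_with_lag(500, lag)) for _ in range(runs)) / runs

print(avg_residual(lag=0))  # contemporaneous error signal: near zero
print(avg_residual(lag=5))  # "feat-of-memory" error signal: error persists
```

With lag = 0 the learner converges; with a lag it settles around the wrong answer and keeps fluctuating, which is the toy analogue of training against a long-past, half-remembered phoneme.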

Of the other recommended CI training methods: opportunities for face-to-face conversation were limited but, despite the handicap of feat-of-memory error signals, it was practiced whenever possible as a life skill; audio books and web sites were of limited interest and, for me, less effective for CI training than TV documentaries. One point about computers is that their audio systems tend to be poor, so it may be useful to plug a streaming device into the audio socket to send sound directly to the CI.

In the light of this experience, I tentatively propose a list of principles for CI training.

(i) The starting point should be an assessment of the patient’s abilities in the PEWS speech-recognition model described above. As an example, if a CI user can read TV subtitles quickly and deconstruct words into phonemes before they are spoken, then the kind of method I used could work. For someone who has been hard of hearing from birth, with very limited memories of phonemes, the right starting point might involve foreknowledge of phonemes in monosyllable words, followed by their sounds.

(ii) For early-stage training, sound quality should be as high as possible and noise avoided.

(iii) A clearly defined and independent means of informing the patient of forthcoming phonemes should be provided so that they know what to listen for. This should be simple and require as little mental effort as possible from the patient.

(iv) CI training should involve a contemporaneous error signal wherever possible. If it has to be done from memorised phonemes, then any lag between the memory of the phoneme heard, and that finally understood, should be minimised.

(v) CI training should be devised to coincide with a person’s interests, ideally making it a pleasure rather than a chore.

The foreknowledge of phonemes in monosyllable words, followed by their sounds, could be achieved by a web site or CD-ROM with a pre-displayed word and sight of a speaker (for lip reading) saying the word. In the case of audio books, it might help to use short stories (so the plots do not have to be remembered), played as videos with pre-packaged words shown before a lip-readable person speaks them. Whilst claiming no real expertise on how these principles can best be implemented, I do believe there is a case for creating dedicated CI training material for patients with differing abilities in PEWS space.