Now I am reading the book “Visual Thinking for Design” by Colin Ware (ISBN 978-0-12-370896-0). The book starts by describing the way the brain processes visual information. Essentially, the brain processes it chunk by chunk: each chunk is taken in during a fixation, and the chunks are separated by saccades (rapid eye movements).
Since my V2V project requires finding a way to translate auditory information into visual information for processing by the brain, I am also looking for the corresponding information about the way the brain processes auditory information.
One question that arose in my mind while reading the book is the following.
Consider a hearing (or hard-of-hearing) lipreader who follows speech mainly by listening and uses lipreading as an auxiliary aid to filter out environmental noise and other speakers. Given that the lipreader’s eyes perform saccades as usual, are those saccades synchronized with the times at which the speaker produces vowels rather than consonants? The intuition behind the question: little visual information is taken in during a saccade, so timing eye movements to vowels, which are acoustically more robust than most consonants, would sacrifice the least lipreading information.
And would the saccades still be synchronized with vowel production periods for a deaf lipreader?
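As a note to self, here is one way the question could be tested empirically: take saccade onset times from an eye tracker and vowel intervals from a phoneme-level forced alignment of the speaker’s audio, then check whether saccade onsets fall inside vowel intervals more often than chance would predict. The sketch below is a minimal illustration of that comparison; the data, function names, and numbers are hypothetical placeholders, not a real study design.

```python
# Hypothetical sketch: do saccade onsets coincide with vowel intervals
# more often than chance? Inputs (eye-tracker saccade onsets, forced-
# alignment vowel intervals) are illustrative placeholders.

from typing import List, Tuple

def vowel_sync_rate(saccade_onsets: List[float],
                    vowel_intervals: List[Tuple[float, float]]) -> float:
    """Fraction of saccades whose onset falls inside a vowel interval."""
    inside = sum(
        1 for t in saccade_onsets
        if any(start <= t < end for start, end in vowel_intervals)
    )
    return inside / len(saccade_onsets) if saccade_onsets else 0.0

def chance_rate(vowel_intervals: List[Tuple[float, float]],
                speech_duration: float) -> float:
    """Baseline rate expected if saccades were timed at random."""
    vowel_time = sum(end - start for start, end in vowel_intervals)
    return vowel_time / speech_duration

# Illustrative placeholder data, in seconds.
saccades = [0.21, 0.58, 0.93, 1.40, 1.77]
vowels = [(0.15, 0.30), (0.50, 0.65), (0.90, 1.05), (1.35, 1.50)]

observed = vowel_sync_rate(saccades, vowels)
expected = chance_rate(vowels, speech_duration=2.0)
print(f"observed: {observed:.2f}, chance: {expected:.2f}")
# Synchronization would show up as observed substantially above chance;
# a real study would confirm it with permutation statistics.
```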