Projects

Research in dB SPL is aimed at identifying the problems that hearing-impaired listeners encounter in daily life, with the ultimate goal of proposing solutions to these problems. To achieve this goal, we conduct basic science research to understand the mechanisms of hearing and its failures, as well as clinical research with actual patients. Our research is multidisciplinary, involving behavioral and cognitive sciences as well as engineering approaches. We work closely with users of hearing aids and cochlear implants, the manufacturers of these devices, and local and international collaborators.

A selection of our projects is listed below. For a more comprehensive list, please see the People section. Both lab members and non-members can find useful tools and other materials on our dB SPL Lab website.

Ongoing Projects

Electric Pitch perception with cochlear implants (ElPi): using real-life sounds to get back on the right track

PhD student: Drs. F. Rotteveel
Supervisor: Dr. E. Gaudrain (CNRS)
Collaboration: Ir. Bert Maat, Dr. Damir Kovacic, Dr. Chris James
Funded by VICI grant (Başkent), GSMS, Kolff Institute, Heinsius Houbolt Foundation

Pitch perception is an essential element of human communication, and it is known to be severely degraded in cochlear implant (CI) users. In speech, pitch perception helps identify speakers in noisy or multi-speaker environments, and crucially, it contributes to prosody and reveals the emotions of the speaker. In music, it plays a central role in melody and harmony. Past studies have suggested that pitch perception in CI users relies primarily on temporal cues, whereas normal-hearing (NH) listeners also rely heavily on spectral cues. However, recent work involving more realistic stimuli, including studies from the dB SPL group, challenges this separation of mechanisms between NH and CI users. These studies suggest that implant users might also be utilizing spectral cues in addition to the temporal cues.

In the present study, we aim to clarify the relative contributions of spectral and temporal cues to pitch perception of CI users in realistic sound stimuli. To achieve this, we will mimic the electric stimulation patterns of the implant for speech and musical sounds, remove either temporal or spectral cues, and assess the effect of this manipulation on pitch discrimination tasks. With this method, we will systematically inspect the role of both cues across parameters concerning the nature of the stimuli and the nature of the implant stimulation strategy in actual users of implants.
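As a rough illustration of how such cues can be manipulated, the sketch below implements a minimal noise vocoder, a common way to simulate CI processing for NH listeners. All parameters here (channel count, band edges, envelope cutoff) are illustrative choices, not the settings of this study: reducing `n_channels` degrades spectral cues, while lowering `env_cutoff` removes temporal periodicity cues.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def vocode(signal, fs=16000, n_channels=8, env_cutoff=300.0):
    """Noise-vocode a signal: keep each band's temporal envelope but
    replace its spectral fine structure with band-limited noise.
    Fewer channels -> coarser spectral cues; a lower env_cutoff ->
    fewer temporal (periodicity) pitch cues.  Illustrative parameters."""
    rng = np.random.default_rng(0)
    # Log-spaced analysis bands between 100 Hz and 7 kHz (assumed range)
    edges = np.geomspace(100.0, 7000.0, n_channels + 1)
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        sub = sosfilt(band, signal)
        # Envelope extraction: rectification followed by low-pass filtering
        lp = butter(2, min(env_cutoff, (hi - lo) / 2), fs=fs, output="sos")
        env = np.clip(sosfilt(lp, np.abs(sub)), 0.0, None)
        # Modulate a noise carrier restricted to the same band
        carrier = sosfilt(band, rng.standard_normal(len(signal)))
        out += env * carrier
    return out
```

Applying such a manipulation to speech or musical recordings, and comparing pitch discrimination across parameter settings, mirrors the logic of the cue-removal approach described above.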

The results have both theoretical and applied consequences, in better interpreting existing data and reshaping future research, but will also be valuable for implant manufacturers and for clinicians to improve signal processing and rehabilitation strategies.

AURORA: Audiology Robotics for Research and Clinical Applications

PhD student: Drs. L. Meyer
PIs and co-promotors: Dr. G. Araiza Illan, Dr. L. Rachman
Collaboration: Dr. Khiet Truong, Univ. Twente
Past and current funding: NWO VICI grant (Başkent), UMCG GSMS, UMCG Kolff Institute, Heinsius Houbolt Foundation, UMCG Fonds Klinisch Onderzoek en Onderwijs (FKOO)

The Audiology Robotics for Research and Clinical Applications (AURORA) expertise team, based at the University Medical Center Groningen, Netherlands, explores the use of humanoid robots as a rehabilitative, diagnostic and testing interface for audiological research and clinical applications involving children and adults.

Luke Meyer’s PhD project R2D2 for KNO: Use of a Humanoid Robot for Rehabilitation of Deaf Individuals aims to address three primary objectives, namely, how a humanoid robot can be used as 1) an effective interface for the testing of auditory perception by using the pre-existing PICKA test battery, 2) a rehabilitative platform for cochlear implant users by providing support to existing rehabilitative procedures, and 3) an emotional support platform through various autonomous interactions.

Additional research projects include the assessment of human-robot interaction in children and adults, as well as in older adults with varying cognitive status, and the application of automatic speech recognition in clinical procedures. For clinical applications, the AURORA group works with the clinical team to evaluate the use of humanoid robots in clinical settings. The group also provides expertise both within the UMCG and via the Computational Audiology Network (CAN).

Screening of Auditory Function in geriatric Elderly (SAFE)

Project leaders: Dhr. dr. T. Koelewijn; Dhr. ir. S. W. J. Ubbink
Researchers: Mw. Prof. dr. B. C. van Munster, Mw. Prof. dr. ir. D. Başkent, Dhr. J. Vrielink,
Mw. dr. G.A. Araiza Illan, Dhr. S. Türüdü
Funded by VICI

Due to the ageing of the “baby boom” generation and increasing life expectancy, the number of elderly people is rising quickly. In this elderly population, the prevalence of both hearing loss and cognitive challenges (e.g., dementia) is higher than in younger adults. Hearing loss and cognitive changes, while of differing physiological origin, have similar consequences, e.g., difficulties in following conversations, forgetfulness, and withdrawal from social gatherings, with a negative impact on quality of life (Kricos, 2009). Hearing loss is considered a risk factor for dementia (Livingston et al., 2020), and there are indications that hearing intervention could reduce cognitive decline (Amieva and Ouvrard, 2020). To optimize quality of life in the elderly, it is important to assess hearing function and to ensure successful hearing rehabilitation.

The prevalence of hearing loss among elderly people visiting the geriatric clinic is considered to be high, given the high incidence of hearing loss in the general aging population. About 30% of adults above 65 years of age have hearing loss, more than 50% of those aged 80 to 84, and about 80% of adults above 85 (Homans et al., 2017). Among elderly people visiting the geriatric clinic with cognitive problems, the prevalence of hearing loss may be even higher (Allen et al., 2003). The Dutch guideline for geriatric assessment recommends functional testing of hearing (richtlijnendatabase.nl). Hearing screening in the geriatric outpatient department is currently done, in general, with two questions posed to the patient: 1. do you feel you have hearing loss?, and 2. do you feel your hearing aid works sufficiently?, as well as through observation during anamnesis. The first question has a sensitivity of about 70% for detecting hearing loss (Nondahl et al., 1998), indicating that some patients who would benefit from hearing intervention remain underdiagnosed.

The aging population also results in an increasing number of people with dementia (or other forms of age-related cognitive change) visiting the audiology clinics. In the audiology clinic, extensive tests are available for full assessment of auditory functioning, ranging from hearing thresholds and speech understanding in background noise to validated questionnaires. However, hearing assessment in people with cognitive problems can be challenging. A recent review reported that about 40% of people with dementia could not complete pure-tone audiometry, the current gold standard, because of the long test duration (Bott et al., 2019). There is a need for additional, shorter, and more practical tests in audiology to assess hearing function in people with cognitive problems (Dawes et al., 2021).

This study will investigate, in a collaboration between the audiology and geriatrics clinics, the feasibility of audiometric and cognitive tests in older populations, with or without hearing loss and with or without cognitive decline, to determine the prevalence of hearing loss and dementia, and to identify the most reliable methods for this purpose.

Variations of Digits in Noise Test

PhD student: Dhr. drs. S. Türüdü
Promotor: Mw. Prof. dr. ir. D. Başkent
Co-supervisor: Dhr. dr. T. Koelewijn
Collaborators: Dr. E. Gaudrain, dr. ir. G. Araiza-Illan
Funded by UMCG and Ministry of National Education Turkey

Smits and colleagues (2013) developed the Digits-in-Noise (DIN) test for clinical use, which requires individuals to repeat three spoken numbers (a digit triplet) presented in noise. The DIN test correlates strongly with pure-tone audiometry averages within the populations in which it has been tested (middle-aged and young-older individuals without cognitive decline). While pure-tone audiometry requires strictly calibrated apparatus, as it measures perception of sounds at the hearing threshold (very quiet), speech tests remain informative over a range of presentation levels. The DIN test is a suprathreshold test that only requires understanding of a short, closed set of numbers: words that are learned early in life and highly practised over the years.
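The core of a digit-triplet test can be sketched as a simple one-up/one-down adaptive track over SNR. The function below is a hypothetical illustration only; the step size, trial count, and the `respond` callback are assumptions for the sketch, not the published procedure:

```python
import random

def din_srt(respond, n_triplets=24, start_snr=0.0, step=2.0):
    """One-up/one-down adaptive track over digit triplets.

    `respond(triplet, snr)` returns True when all three digits are
    repeated correctly at that SNR (in dB).  The speech reception
    threshold (SRT) is estimated as the mean SNR over the last 20
    presentations.  All parameters here are illustrative.
    """
    snr, track = start_snr, []
    for _ in range(n_triplets):
        triplet = [random.randint(0, 9) for _ in range(3)]
        # Harder (lower SNR) after a correct triplet, easier after a miss
        snr += -step if respond(triplet, snr) else step
        track.append(snr)
    tail = track[-20:]
    return sum(tail) / len(tail)
```

With a listener whose performance flips at some SNR, the track oscillates around that point, which is what makes the test fast and self-calibrating across presentation levels.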

To summarize, the DIN test can be performed quickly, is mostly not restricted by cognitive or linguistic skills, requires no trained audiologist or special audiometric room, and can even be performed online automatically. The DIN test therefore seems to be of added value as a screening instrument to assess hearing function in the geriatric clinic or other non-audiological clinics. Similarly, it can be of value as an additional test in the audiology clinic for people with cognitive problems who cannot complete traditional hearing tests. Until now, however, the DIN test has been validated only on a small number of populations up to the age of 70 years (Koole et al., 2016), not on older (geriatric) populations or on people with cognitive problems.

This study will investigate various implementations of the DIN test to fully characterise its capabilities.

The effect of voice and speech perception on listening effort

PhD student: Mw. MSc. A. Biçer
Promotor: Mw. Prof. dr. ir. D. Başkent
Co-supervisor: Dhr. dr. T. Koelewijn
Funded by VICI grant (Başkent) NWO, ZonMw, Heinsius Houbolt Foundation, Rosalind Franklin Fellowship

Listening to speech with one or multiple persons talking in the background is known to be challenging and effortful, especially for people with a hearing impairment. Being able to perceive differences in voice cues such as fundamental frequency (F0) and vocal-tract length (VTL) can help listeners segregate competing talkers, which improves speech understanding. Research has shown that cochlear implant (CI) listening reduces sensitivity to F0 and VTL voice cues, potentially contributing to difficulties in understanding speech in adverse listening conditions. Previous studies show that voice exposure through implicit or explicit voice training can improve speech intelligibility in normal-hearing listeners.

The pupil dilation response has been shown to be an objective measure of cognitive processing load in adverse listening conditions, also referred to as listening effort. Using a variety of listening tasks, studies have shown that different types of speech degradation, by masking (e.g., noise vs. speech) or vocoding, affect the pupil dilation response. However, it is relatively unknown how voice perception processes, which make use of voice cue information, affect listening effort when speech is degraded as in CI listening.

During her PhD project, Ada Biçer investigates whether voice training improves sensitivity to F0+VTL voice cues and affects speech-on-speech listening in normal-hearing and CI users. The impact of voice training and vocoding on listening effort during voice cue discrimination and speech-on-speech listening is investigated by means of pupillometry. These outcomes will provide insight into the impact that voice familiarity has on voice discrimination and listening effort in normal and CI listening.

Development of voice, speech, and language processing in normal-hearing and cochlear-implanted children of school age

PhD student: L. Nagels (completed)
Co-promotor: Prof. P. Hendriks (RUG Semantics)
Collaborators: Dr. E. Gaudrain (CNRS Lyon), Dr. D. Vickers (UCL)
Past students: BCN ReMa student J. Libert; students E. Ibiza, I. van Bommel
Funded by Faculty of Arts, VICI grant (PI: Hendriks), and VICI and VIDI grants (PI: Başkent)
Web: https://www.picka-onderzoek.nl/

The PICKA project focuses on the perception of voice and indexical cues in children and adults with normal, impaired, or cochlear-implant (CI) hearing. Voice cues are an important component of speech and language processing, adding to the semantic content (meaning from words), for example in conveying vocal emotions. Voice cues can also help enhance speech segregation and comprehension in cocktail-party listening, the situations in which hearing-impaired listeners have the most difficulty.

Our previous research has shown that the perception of vocal-tract cues, information about the distribution of formants which is determined by the size of the speaker, is very limited in cochlear implant (CI) users. To gain a better understanding of the nature of the limited perception of voice characteristics in CI users, we will explore the developmental trajectory of voice pitch and vocal-tract length perception in normal-hearing children and children with CIs. The performance of the normal-hearing children will serve as a baseline for a normal developmental trajectory of voice characteristics perception, to which individual children with CIs can be compared. This allows us to identify any developmental differences or delays in children with CIs compared to normal-hearing children. In addition to their perceptual abilities, we will also assess children’s ability to use voice cues for three speech-related tasks: identification of vocal emotion, categorization of voice gender, and the perception of speech in the presence of competing background speech. As voice pitch is the most dominant cue in infant-directed speech, it is expected that perceptual sensitivity for this cue develops early in life. Sensitivity to vocal-tract length cues requires exposure to multiple talkers, and is thus expected to develop at a later age. Taken together, this implies a hierarchy for the perception of these voice characteristics. If the vocal-tract length information available to children with CIs is too distorted to be used for speech-related tasks, they may benefit from specific training.

Music and cochlear implants

Researchers: Dr. E. Harding, Dr. C.D. Fuller
Collaborators: Dr. R.H. Free, Ir. A. Maat, Mw. G. Nijhof, Dr. R. Harris (Prince Claus Conservatory), Dr. E. Gaudrain (CNRS Lyon), Prof. Dr. B. Tillmann (CNRS Lyon), Mr. B. Dijkstra (NHL Stenden), Mr. S. de Rooij (NHL Stenden)
Funded by VICI grant and Dorhout Mees Foundation

In cochlear implants, improvements in device design have produced good speech understanding in quiet, but speech perception in noise and enjoyment of music are still not satisfactory. Cochlear-implant users rank music, after speech perception, as the second most important acoustical stimulus in their lives. Improving music enjoyment and perception, as well as speech perception in noise, could therefore lead to a significant improvement in quality of life for cochlear-implant recipients, and the field is increasingly focusing on music perception.

Music is also relevant for CI users because music training has recently been shown to improve perception of speech in noise. This has been explained by a transfer of learning from music training to speech perception, likely as a result of overlapping neural networks specialized for music and for speech. In this project, we have explored both music perception and appreciation among our CI users, as well as potential benefits of music training for the perception of music and speech, with CI users and with NH listeners using CI simulations, the latter comprising groups with and without musical training.

In this intervention study, intended for postlingually deafened adult (and older adult) CI users, we will conduct a randomized controlled trial with a music lesson intervention, a computer game control intervention, and a do-nothing control intervention. The main hypothesis of the study is that the music intervention, learning to play a musical instrument using the improvisation-based audiomotor approach GAME (guided audiomotor exploration), can improve cochlear implantation outcomes related to speech and music perception, music enjoyment, and quality of life. In a novel approach, our Serious Gaming collaborators (NHL Stenden) are designing a control intervention that matches the format of the GAME piano lessons (such as having weekly lessons with an instructor) while teaching the serious game ‘Minecraft’ (building a virtual world). This way, we aim to isolate the specific impact of the music intervention on speech and music perception as well as quality of life.

Perception of realistic forms of speech with cochlear implants

Researchers: Dr. T. Tamati, Dr. T. Koelewijn
Collaborator: Dr. E. Janse (MPI)
Funded by VICI, VIDI grants (Başkent) and VENI grant (Tamati), Rosalind Franklin Fellowship.

Speech communication is an important part of daily life for humans, providing us with a way to connect with other people and the surrounding world. Yet everyday, real-life listening conditions can be very challenging. Listeners must deal with a great deal of natural speech variability, often in the presence of background noise and competition from other talkers. For example, the pronunciation of a word differs across talkers and social groups, as well as across environmental and social contexts. For normal-hearing listeners, speech understanding is successful and robust despite this variability: their highly flexible perceptual systems allow them to adapt to and learn differences in talkers’ voices, regional or foreign accents, and speaking styles to support robust communication.

While cochlear implants (CIs) are successful in restoring hearing to profoundly deaf individuals, implant users rely on input signals that are heavily reduced in acoustic-phonetic detail. As a result, the adverse listening conditions commonly encountered in daily life appear to be particularly detrimental to successful speech understanding in these users. However, most current clinical and research approaches assess the speech recognition abilities of patients with ideal speech, i.e., speech carefully produced by a single talker with no discernible accent. In contrast to ideal speech, highly variable, real-life speech imposes greater perceptual and cognitive demands on listeners, resulting in more challenging or effortful speech recognition, as can be measured by lower accuracy scores, increased response times, or an increase in pupil size. As a consequence, the effects of talker variability on this population are still largely unknown, due to the absence of sensitive clinical tools.

We urgently need to achieve better outcomes for implant recipients, but our current lack of knowledge of real-life challenges, outside the lab or clinic, remains a critical barrier to the development of new clinical tools and interventions. To fill the current knowledge gap and overcome the resulting clinical barrier, the overall aim of this project is to systematically investigate the effects of talker variability on speech understanding and listening effort by cochlear implant users.

Perception of L2 prosody in cochlear implant simulations

PhD student: Drs. M.K. Everhardt
Co-promotores: Prof. dr. W. Lowie (RUG Applied Linguistics), Prof. dr. D. Baskent (UMCG ENT)
Co-supervisor: Dr. A. Sarampalis (RUG Psychology)
Collaborator: Dr. M. Coler (RUG Campus Fryslan)
Funded by RUG Faculty of Arts

This PhD project explores how the perception of prosody in a non-native language is influenced by a cochlear implant (CI) simulation. Linguistically speaking, prosody (i.e., suprasegmental speech elements involving variation in fundamental frequency (f0), intensity, duration, and spectral characteristics) is an important source of information about the syntactic and semantic properties of speech, which is especially important when learning a second language (L2): it contributes to a listener’s ability to determine boundaries between syllables and words, and it affects the interpretation and comprehension of speech through, for instance, (word) stress or sentence type. Degradation of fine spectrotemporal detail complicates the perception of prosody and can consequently lead to errors in the comprehension and processing of utterances. CI users and CI-simulation listeners are therefore at a disadvantage when processing prosody compared to NH listeners. In this project, we investigate the influence of CI simulations on the perception of L2 prosody at the word and sentence level. That is, we investigate how accurately and efficiently young native Dutch learners of English perceive prosody in spoken English words and sentences degraded by a CI simulation, compared to non-CI-simulated words and sentences. Furthermore, we investigate how the accuracy and efficiency of these non-native listeners compare to those of native listeners.

Music perception skills and the acquisition of second language prosody (MUSEPRO)

PhD student: Drs. N. Jansen
Co-promotores: Prof. dr. W. Lowie (RUG Applied Linguistics), Prof. dr. D. Baskent (UMCG ENT)
Co-supervisor: Dr. H. Loerts, Dr. E.E. Harding (UMCG ENT)

This PhD project investigates the acquisition of second language (L2) prosody and considers the influence of music perception skills on learning outcomes. Many theoretical and empirical studies support a link between music and language in cognition, and studies show advantages of listeners’ musical skills in the perception of prosodic variables in speech, such as pitch and duration, in native and non-native languages. Prosody is an aspect of L2 learning that is important for comprehension and fluency in L2 speech, but it remains difficult to learn, even for advanced learners. In this project, we investigate potential benefits of musical skills for L2 prosody comprehension and production. By doing so, we aim to shed light on individual differences in L2 acquisition, as well as to contribute to the theoretical debate on the connection between music and speech prosody.

Musical skill will be measured by a music perception test, which gives participants a score of perceptual sensitivity to various musical cues. We will test speech comprehension by investigating the integration of prosody into two higher-order linguistic domains, testing Dutch adults with English as an L2. Firstly, we consider the integration of prosodic cues on the semantic level by investigating the processing of focus accents, using eye-tracking in the visual world paradigm. Secondly, we will investigate the comprehension of sarcastic prosody, reflecting the integration of prosody on a pragmatic level. In both subprojects, we expect L2 learners with higher musical skill to demonstrate a faster and better integration of prosody into meaning, thus showing enhanced L2 comprehension. The project also examines whether musical skills are connected to a more native-like production of L2 prosody, using elicitation techniques and perceptual and acoustic measures. We expect learners with high musical skills to show less influence from their native language in L2 speech.

Voice and speech perception in early-deafened late-implanted cochlear-implant users

Researcher: Dr. C.D. Fuller
Collaborator: Dr. R.H. Free
Funded by Mandema, VICI

Individuals who are deafened early in life but not implanted within a short period after deafness onset are a relatively new cochlear-implant (CI) population. New clinical protocols are being implemented and evaluated for this population. For research, on the other hand, this population presents an excellent opportunity to investigate voice and speech perception after a long duration of auditory deprivation, and perceptual re-learning following implantation.

The aim of the project is twofold: on the one hand, this implant user group will serve as a model of auditory deprivation to provide scientific knowledge on voice and speech perception; on the other hand, it will provide evidence for extending implantation to a new population, supporting clinical practice. Some research indirectly indicates that this early-deafened, late-implanted (EDLI) group may benefit from cochlear implantation, but research to date remains relatively limited. Based on previous research, we expect positive outcomes both in hearing abilities, such as voice and speech perception, and in psychological factors. It remains unknown, however, how this population performs compared to traditionally implanted CI users. Such knowledge could provide strong scientific evidence to support extending inclusion criteria for implantation to atypical patients, which can eventually help more deaf individuals.

Perception of voice and speech in cochlear implants and hearing impairment

PhD students: N. El-Boghdady (completed), F. Arts
Audiologist in training: M. Blom
Visiting PhD: B. Zobel (completed)
Researchers, co-supervisors: Dr. A. Wagner, Dr. T. Tamati, Dr. T. Koelewijn, Dr. L. Rachman
Collaborators: Dr. E. Gaudrain (co-supervisor; CNRS Lyon), Dr. W. Nogueira (Medical School Hannover)
Funded by VIDI and VICI grants, partial funding Advanced Bionics and GSMS

Like fingerprints, each person’s voice is characteristic and can be used for identification. However, unlike fingerprints, the voice is involved in speech communication, and listeners use this information to identify a speaker or to infer some characteristics of the person who is talking. This is particularly useful when many talkers speak at the same time, as in a crowded environment, or when no visual information is available, as on the phone. These two situations are particularly difficult for cochlear implant users. Recent work from our group has shown that cochlear implant users have a clear deficit in the perception of some vocal characteristics, which certainly contributes to the difficulties described above. Our task is now to understand the origins of this deficit and to explore new techniques to restore appropriate perception of voices.

There are a number of vocal characteristics that can be used to identify a speaker. However, two of them appear to be most important: one is the pitch of the voice, and the other is linked to the size of the speaker. A violin and a cello can play the same note while having bodies of different sizes, producing two sounds that can be easily distinguished despite having the same pitch. The difference in timbre between two such sounds originates from the difference in the size of the resonating bodies of the two instruments. In the human speech production system, the size of the resonating body is characterized by the length of the vocal tract, which similarly affects the timbre of the voice.

The way these two dimensions, pitch and vocal-tract length, are coded in the acoustic signal is relatively well known. Furthermore, the reduced ability of cochlear implants to deliver pitch information has been largely documented. However, very little is known about the perception of size (or vocal-tract length) information in cochlear implant users. The first stage of our research thus consists of evaluating the sensitivity of CI users to this voice characteristic. Our results indicate that while voice pitch is sufficiently preserved in the implant to allow gender categorization, vocal-tract length information is not available to cochlear implant patients. As shown in some of our previous research, this can lead to mistaking a male speaker for a female, and yields difficulties in separating concurrent voices. We are undertaking a thorough exploration of the pitch – vocal-tract length – signal-to-noise ratio (SNR) parameter space to determine what degree of separation in voice characteristics listeners require to pick out a single voice in a multispeaker environment, and how this is affected by hearing loss.

We have been exploring methods to improve the perception of these vocal characteristics in cochlear implant patients. We have focused in particular on vocal-tract length, as this dimension seems to be the most problematic. Because the perception of vocal-tract length is solely based on spectral information (whereas pitch information can also be conveyed through temporal information), the accuracy of the spectral information is of primary importance for this dimension. Consequently, our first approach explored how fine-tuning the frequency allocation map, i.e., the way spectral information is distributed along the electrode array in the cochlea, could improve vocal-tract length perception. Our results indicate that the frequency allocation map has an effect on vocal-tract length discrimination in implant simulations. In addition, we have tested experimental coding strategies, such as Spectral Contrast Enhancement (in collaboration with the Advanced Bionics European Research Center and Dr. Nogueira, Medical School Hannover), to assess whether they could improve vocal-tract length discrimination. Related techniques, such as current focusing combined with current steering, while showing little benefit for the perception of speech in silence, may also provide an advantage for vocal-tract length perception, and thus for voice identification.
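As background to what a frequency allocation map relates to, electrode position is often mapped to cochlear place frequency with Greenwood's frequency-position function. The sketch below computes hypothetical electrode characteristic frequencies; the contact count, array insertion depth, and cochlear length are assumed values for illustration, not those of any particular device or of the maps tested in this project.

```python
import numpy as np

def greenwood_map(n_electrodes=16, depth_mm=25.0, cochlea_mm=35.0):
    """Characteristic frequency (Hz) at each electrode contact using
    Greenwood's human frequency-position function
        f = 165.4 * (10 ** (2.1 * x) - 1),
    where x is the relative distance from apex (0) to base (1).
    Array geometry and insertion depth are illustrative values."""
    # Contacts spaced evenly from the deepest (most apical) position
    # up to the base of the cochlea
    pos_from_apex = np.linspace(cochlea_mm - depth_mm, cochlea_mm,
                                n_electrodes)
    x = pos_from_apex / cochlea_mm
    return 165.4 * (10.0 ** (2.1 * x) - 1.0)
```

Comparing such place frequencies against the analysis bands a processor assigns to each electrode shows how a frequency allocation map can shift, compress, or mismatch spectral information along the array.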

Completed projects

Emotion processing in audio-visual impairment

PhD student: Drs. M. de Boer (completed)
Collaborator, co-supervisor: Prof. F. Cornelissen (UMCG Ophthalmology)
Funded by GSMS BCN-Brain, VICI, Uitzicht.

ENRICH: Enriched communication across the lifespan

PhD students: E. Kaplan, J. Kirwan
Researcher, co-supervisor: Dr. A. Wagner
Funded by H2020 MSCA ITN, VICI grant.

The effects of aging on temporal integration in vision and hearing

PhD student: J. Saija
Collaborators: Dr. E. Akyürek (RUG Psychology), Dr. T. Andringa (RUG AI)
Funded by RUG and UMCG Faculties, NWO Aspasia grant.

Mental and auditory representations in cochlear-implant users

Researcher: Dr. A. Wagner
Collaborator: Prof. dr. N. Maurits (RUG Neurology), drs. B. Maat
Funded by Marie Curie Intra-European Fellowship (PI: Wagner, Host: Başkent), Med-El, VICI.

Perception of speech in complex listening environments in normal hearing, simulated hearing loss, and users of cochlear implants

PhD students: P. Bhargava, J. Clarke
Researchers: Dr. E. Gaudrain
Collaborators: Dr. M. Chatterjee (Boys Town, USA), Dr. R. Morse (Aston, UK), Dr. S. Holmes (Aston, UK)
Funded by VIDI and Aspasia grants from NWO, ZonMw, Rosalind Franklin Fellowship.

Geriatric cochlear-implant candidacy

Collaborators: Dr. R. Hofman, Dr. G. Izaks
JSMS student: N. Schubert

Listening effort with cochlear implants

PhD student: C. Pals, MSc; Co-supervisor: Dr. A. Sarampalis
Collaborators: Dr. A. Beynon (Radboud MC), Dr. H. van Rijn (RUG Psychology)
Partially funded by Cochlear Europe Ltd., Rosalind Franklin Fellowship, Dorhout Mees Foundation, Stichting Steun Gehoorgestoorde Kind, GSMS.

Musical experience, quality of life and speech intelligibility in normal hearing and cochlear-implant recipients

PhD student: C.D. Fuller, Co-supervisors: dr. R.H. Free, ir. A. Maat
Collaborators: Dr. E. Gaudrain, Dr. J. Galvin III (UCLA)
Partially funded by Advanced Bionics, Rosalind Franklin Fellowship, Heinsius Houbolt Foundation.

Single- and multi-channel pattern perception in electric hearing

Researcher: Drs. J. J. Galvin III (UCLA)
Partially funded by VIDI grant and Rosalind Franklin Fellowship (PI: Başkent), and NIH grants (PI: Prof. Qian-Jie Fu).

Interference from music, speech, and noise in middle age

Master student: S. van Engelshoven, Collaborator: Dr. J. J. Galvin III (UCLA)
Partially funded by VIDI grant and Rosalind Franklin Fellowship.

Audiovisual integration in young and elderly listeners

Audiologist in training: ir. M. Stawicki
Collaborator: Dr. Piotr Majdak (ARI, Austria)
Partially funded by VIDI grant and Rosalind Franklin Fellowship.

Perceptual learning of interrupted speech

PhD student: M.R. Benard
Funded by VIDI grant (NWO) and Rosalind Franklin Fellowship

Behavioral diagnosis of tinnitus

Researcher: Dr. K. Boyen
Project leader: Prof. P. van Dijk
Funded by Action in Hearing Loss.

Second language learning in cochlear-implanted children and adolescents 

PhD students: Drs. E. Jung, Drs. M. Everhardt
Collaborators: Dr. A. Sarampalis (RUG Psychology), Dr. W. Lowie (RUG Linguistics)
Funded by VIDI grant and Rosalind Franklin Fellowship.