Semantically Congruent Bimodal Presentation with Divided-Modality Attention Accelerates Unisensory Working Memory Retrieval
Although previous studies have shown that semantic multisensory integration can be differentially modulated by attentional focus, it remains unclear whether attentionally mediated multisensory perceptual facilitation can affect subsequent cognitive performance. Using a delayed matching-to-sample paradigm, the present study investigated the effect of semantically congruent bimodal presentation on subsequent unisensory working memory (WM) performance by manipulating attentional focus. The results showed that unisensory WM retrieval was faster after semantically congruent multisensory encoding than after semantically incongruent multisensory encoding; however, this effect emerged only under divided-modality attention. This finding indicates that a robust multisensory representation was constructed during semantically congruent multisensory encoding with divided-modality attention, and that this representation subsequently accelerated unisensory WM performance, particularly auditory WM retrieval. In addition, unisensory WM retrieval was faster overall under modality-specific selective attention than under divided-modality attention, suggesting that dividing attention across two modalities demanded more central executive resources to encode and integrate crossmodal information and to maintain the constructed multisensory representation, leaving fewer resources available for WM retrieval. Finally, the present findings may support the amodal view that WM includes an amodal central storage component used to maintain modality-based, attention-optimized multisensory representations.