Publications

  • Swingley, D. (2003). Phonetic detail in the developing lexicon. Language and Speech, 46(3), 265-294.

    Abstract

    Although infants show remarkable sensitivity to linguistically relevant phonetic variation in speech, young children sometimes appear not to make use of this sensitivity. Here, children's knowledge of the sound-forms of familiar words was assessed using a visual fixation task. Dutch 19-month-olds were shown pairs of pictures and heard correct pronunciations and mispronunciations of familiar words naming one of the pictures. Mispronunciations were word-initial in Experiment 1 and word-medial in Experiment 2, and in both experiments involved substituting one segment with [d] (a common sound in Dutch) or [g] (a rare sound). In both experiments, word recognition performance was better for correct pronunciations than for mispronunciations involving either substituted consonant. These effects did not depend upon children's knowledge of lexical or nonlexical phonological neighbors of the tested words. The results indicate the encoding of phonetic detail in words at 19 months.
  • Swingley, D., & Aslin, R. N. (2000). Spoken word recognition and lexical representation in very young children. Cognition, 76, 147-166. doi:10.1016/S0010-0277(00)00081-0.

    Abstract

    Although children's knowledge of the sound patterns of words has been a focus of debate for many years, little is known about the lexical representations very young children use in word recognition. In particular, researchers have questioned the degree of specificity encoded in early lexical representations. The current study addressed this issue by presenting 18–23-month-olds with object labels that were either correctly pronounced, or mispronounced. Mispronunciations involved replacement of one segment with a similar segment, as in ‘baby–vaby’. Children heard sentences containing these words while viewing two pictures, one of which was the referent of the sentence. Analyses of children's eye movements showed that children recognized the spoken words in both conditions, but that recognition was significantly poorer when words were mispronounced. The effects of mispronunciation on recognition were unrelated to age or to spoken vocabulary size. The results suggest that children's representations of familiar words are phonetically well-specified, and that this specification may not be a consequence of the need to differentiate similar words in production.
  • Tagliapietra, L., & McQueen, J. M. (2010). What and where in speech recognition: Geminates and singletons in spoken Italian. Journal of Memory and Language, 63, 306-323. doi:10.1016/j.jml.2010.05.001.

    Abstract

    Four cross-modal repetition priming experiments examined whether consonant duration in Italian provides listeners with information not only for segmental identification ("what" information: whether the consonant is a geminate or a singleton) but also for lexical segmentation ("where" information: whether the consonant is in word-initial or word-medial position). Italian participants made visual lexical decisions to words containing geminates or singletons, preceded by spoken primes (whole words or fragments) containing either geminates or singletons. There were effects of segmental identity (geminates primed geminate recognition; singletons primed singleton recognition), and effects of consonant position (regression analyses revealed graded effects of geminate duration only for geminates which can vary in position, and mixed-effect modeling revealed a positional effect for singletons only in low-frequency words). Durational information appeared to be more important for segmental identification than for lexical segmentation. These findings nevertheless indicate that the same kind of information can serve both "what" and "where" functions in speech comprehension, and that the perceptual processes underlying those functions are interdependent.
  • Takashima, A., Bakker-Marshall, I., Van Hell, J. G., McQueen, J. M., & Janzen, G. (2019). Neural correlates of word learning in children. Developmental Cognitive Neuroscience, 37: 100649. doi:10.1016/j.dcn.2019.100649.

    Abstract

    Memory representations of words are thought to undergo changes with consolidation: Episodic memories of novel words are transformed into lexical representations that interact with other words in the mental dictionary. Behavioral studies have shown that this lexical integration process is enhanced when there is more time for consolidation. Neuroimaging studies have further revealed that novel word representations are initially represented in a hippocampally-centered system, whereas left posterior middle temporal cortex activation increases with lexicalization. In this study, we measured behavioral and brain responses to newly-learned words in children. Two groups of Dutch children, aged 8–10 and 14–16 years, were trained on 30 novel Japanese words depicting novel concepts. Children were tested on word-forms, word-meanings, and the novel words’ influence on existing word processing immediately after training, and again after a week. In line with the adult findings, hippocampal involvement decreased with time. Lexical integration, however, was not observed immediately or after a week, neither behaviorally nor neurally. It appears that time alone is not always sufficient for lexical integration to occur. We suggest that other factors (e.g., the novelty of the concepts and familiarity with the language the words are derived from) might also influence the integration process.

  • Takashima, A., & Verhoeven, L. (2019). Radical repetition effects in beginning learners of Chinese as a foreign language reading. Journal of Neurolinguistics, 50, 71-81. doi:10.1016/j.jneuroling.2018.03.001.

    Abstract

    The aim of the present study was to examine whether repetition of radicals during training of Chinese characters leads to better word acquisition performance in beginning learners of Chinese as a foreign language. Thirty Dutch university students were trained on 36 Chinese one-character words for their pronunciations and meanings. They were also exposed to the specifics of the radicals; that is, for phonetic radicals the associated pronunciation was explained, and for semantic radicals the associated categorical meanings were explained. Results showed that repeated exposure to phonetic and semantic radicals through character pronunciation and meaning training indeed induced better understanding of those radicals that were shared among different characters. Furthermore, characters in the training set that shared phonetic radicals were pronounced better than those that did not. Repetition of semantic radicals across different characters, however, hindered the learning of exact meanings. Students generally confused the meanings of other characters that shared the semantic radical. The study shows that in the initial stage of learning, overlapping information in shared radicals is effectively learned. Acquisition of the specifics of individual characters, however, requires more training.

  • Takaso, H., Eisner, F., Wise, R. J. S., & Scott, S. K. (2010). The effect of delayed auditory feedback on activity in the temporal lobe while speaking: A Positron Emission Tomography study. Journal of Speech, Language, and Hearing Research, 53, 226-236. doi:10.1044/1092-4388(2009/09-0009).

    Abstract

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many non-stuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission tomography (PET) was used to image regional cerebral blood flow changes, an index of neural activity, and to assess the influence of increasing amounts of delay. Results: Delayed auditory feedback led to increased activation in the bilateral superior temporal lobes, extending into posterior-medial auditory areas. Similar peaks in the temporal lobe were sensitive to increases in the amount of delay. A single peak in the temporal parietal junction responded to the amount of delay but not to the presence of a delay (relative to no delay). Conclusions: This study permitted distinctions to be made between the neural response to hearing one's voice at a delay, and the neural activity that correlates with this delay. Notably, all the peaks showed some influence of the amount of delay. This result confirms a role for the posterior, sensori-motor ‘how’ system in the production of speech under conditions of delayed auditory feedback.
  • Tanenhaus, M. K., Magnuson, J. S., Dahan, D., & Chambers, C. G. (2000). Eye movements and lexical access in spoken-language comprehension: Evaluating a linking hypothesis between fixations and linguistic processing. Journal of Psycholinguistic Research, 29, 557-580. doi:10.1023/A:1026464108329.

    Abstract

    A growing number of researchers in the sentence processing community are using eye movements to address issues in spoken language comprehension. Experiments using this paradigm have shown that visually presented referential information, including properties of referents relevant to specific actions, influences even the earliest moments of syntactic processing. Methodological concerns about task-specific strategies and the linking hypothesis between eye movements and linguistic processing are identified and discussed. These concerns are addressed in a review of recent studies of spoken word recognition which introduce and evaluate a detailed linking hypothesis between eye movements and lexical access. The results provide evidence about the time course of lexical activation that resolves some important theoretical issues in spoken-word recognition. They also demonstrate that fixations are sensitive to properties of the normal language-processing system that cannot be attributed to task-specific strategies.
  • Taylor, L. J., Lev-Ari, S., & Zwaan, R. A. (2008). Inferences about action engage action systems. Brain and Language, 107(1), 62-67. doi:10.1016/j.bandl.2007.08.004.

    Abstract

    Verbal descriptions of actions activate compatible motor responses [Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9, 558–565]. Previous studies have found that the motor processes for manual rotation are engaged in a direction-specific manner when a verb disambiguates the direction of rotation [e.g. “unscrewed;” Zwaan, R. A., & Taylor, L. (2006). Seeing, acting, understanding: Motor resonance in language comprehension. Journal of Experimental Psychology: General, 135, 1–11]. The present experiment contributes to this body of work by showing that verbs that leave direction ambiguous (e.g. “turned”) do not necessarily yield such effects. Rather, motor resonance is associated with a word that disambiguates some element of an action, as meaning is being integrated across sentences. The findings are discussed within the context of discourse processes, inference generation, motor activation, and mental simulation.
  • Telling, A. L., Kumar, S., Meyer, A. S., & Humphreys, G. W. (2010). Electrophysiological evidence of semantic interference in visual search. Journal of Cognitive Neuroscience, 22(10), 2212-2225. doi:10.1162/jocn.2009.21348.

    Abstract

    Visual evoked responses were monitored while participants searched for a target (e.g., bird) in a four-object display that could include a semantically related distractor (e.g., fish). The occurrence of both the target and the semantically related distractor modulated the N2pc response to the search display: The N2pc amplitude was more pronounced when the target and the distractor appeared in the same visual field, and it was less pronounced when the target and the distractor were in opposite fields, relative to when the distractor was absent. Earlier components (P1, N1) did not show any differences in activity across the different distractor conditions. The data suggest that semantic distractors influence early stages of selecting stimuli in multielement displays.
  • Telling, A. L., Meyer, A. S., & Humphreys, G. W. (2010). Distracted by relatives: Effects of frontal lobe damage on semantic distraction. Brain and Cognition, 73, 203-214. doi:10.1016/j.bandc.2010.05.004.

    Abstract

    When young adults carry out visual search, distractors that are semantically related, rather than unrelated, to targets can disrupt target selection (see Belke et al., 2008, and Moores et al., 2003). This effect is apparent on the first eye movements in search, suggesting that attention is sometimes captured by related distractors. Here we assessed effects of semantically related distractors on search in patients with frontal-lobe lesions and compared them to the effects in age-matched controls. Compared with the controls, the patients were less likely to make a first saccade to the target and they were more likely to saccade to distractors (whether related or unrelated to the target). This suggests a deficit in a first stage of selecting a potential target for attention. In addition, the patients made more errors by responding to semantically related distractors on target-absent trials. This indicates a problem at a second stage of target verification, after items have been attended. The data suggest that frontal lobe damage disrupts both the ability to use peripheral information to guide attention, and the ability to keep separate the target of search from the related items, on occasions when related items achieve selection.
  • Ten Oever, S., & Sack, A. T. (2019). Interactions between rhythmic and feature predictions to create parallel time-content associations. Frontiers in Neuroscience, 13: 791. doi:10.3389/fnins.2019.00791.

    Abstract

    The brain is inherently proactive, constantly predicting the when (moment) and what (content) of future input in order to optimize information processing. Previous research on such predictions has mainly studied the “when” or “what” domain separately, without investigating the potential integration of both types of predictive information. In the absence of such integration, temporal cues are assumed to enhance any upcoming content at the predicted moment in time (general temporal predictor). However, if the when and what prediction domains were integrated, a much more flexible neural mechanism may be proposed in which temporal-feature interactions would allow for the creation of multiple concurrent time-content predictions (parallel time-content predictor). Here, we used a temporal association paradigm in two experiments in which sound identity was systematically paired with a specific time delay after the offset of a rhythmic visual input stream. In Experiment 1, we revealed that participants associated the time delay of presentation with the identity of the sound. In Experiment 2, we unexpectedly found that the strength of this temporal association was negatively related to the EEG steady-state evoked responses (SSVEP) in preceding trials, showing that after high neuronal responses participants responded inconsistently with the time-content associations, similar to adaptation mechanisms. In this experiment, time-content associations were only present for low SSVEP responses in previous trials. These results tentatively show that it is possible to represent multiple time-content paired predictions in parallel; however, future research is needed to investigate this interaction further.
  • Tendolkar, I., Arnold, J., Petersson, K. M., Weis, S., Brockhaus-Dumke, A., Van Eijndhoven, P., Buitelaar, J., & Fernandez, G. (2008). Contributions of the medial temporal lobe to declarative memory retrieval: Manipulating the amount of contextual retrieval. Learning and Memory, 15(9), 611-617. doi:10.1101/lm.916708.

    Abstract

    We investigated how the hippocampus and its adjacent mediotemporal structures contribute to contextual and noncontextual declarative memory retrieval by manipulating the amount of contextual information across two levels of the same contextual dimension in a source memory task. A first analysis identified medial temporal lobe (MTL) substructures mediating either contextual or noncontextual retrieval. A linearly weighted analysis elucidated which MTL substructures show a gradually increasing neural activity, depending on the amount of contextual information retrieved. A hippocampal engagement was found during both levels of source memory but not during item memory retrieval. The anterior MTL including the perirhinal cortex was only engaged during item memory retrieval by an activity decrease. Only the posterior parahippocampal cortex showed an activation increasing with the amount of contextual information retrieved. If one assumes a roughly linear relationship between the blood-oxygenation level-dependent (BOLD) signal and the associated cognitive process, our results suggest that the posterior parahippocampal cortex is involved in contextual retrieval on the basis of memory strength while the hippocampus processes representations of item-context binding. The anterior MTL including perirhinal cortex seems to be particularly engaged in familiarity-based item recognition. If one assumes departure from linearity, however, our results can also be explained by one-dimensional modulation of memory strength.
  • Terrill, A., & Burenhult, N. (2008). Orientation as a strategy of spatial reference. Studies in Language, 32(1), 93-136. doi:10.1075/sl.32.1.05ter.

    Abstract

    This paper explores a strategy of spatial expression which utilizes orientation, a way of describing the spatial relationship of entities by means of reference to their facets. We present detailed data and analysis from two languages, Jahai (Mon-Khmer, Malay Peninsula) and Lavukaleve (Papuan isolate, Solomon Islands), and supporting data from five more languages, to show that the orientation strategy is a major organizing principle in these languages. This strategy has not previously been recognized in the literature as a unitary phenomenon, and the languages which employ it present particular challenges to existing typologies of spatial frames of reference.
  • Terrill, A., & Dunn, M. (2003). Orthographic design in the Solomon Islands: The social, historical, and linguistic situation of Touo (Baniata). Written Language and Literacy, 6(2), 177-192. doi:10.1075/wll.6.2.03ter.

    Abstract

    This paper discusses the development of an orthography for the Touo language (Solomon Islands). Various orthographies have been proposed for this language in the past, and the paper discusses why they are perceived by the community to have failed. Current opinion about orthography development within the Touo-speaking community is divided along religious, political, and geographical grounds; and the development of a successful orthography must take into account a variety of opinions. The paper examines the social, historical, and linguistic obstacles that have hitherto prevented the development of an accepted Touo orthography, and presents a new proposal which has thus far gained acceptance with community leaders. The fundamental issue is that creating an orthography for a language takes place in a social, political, and historical context; and for an orthography to be acceptable for the speakers of a language, all these factors must be taken into account.
  • Terrill, A. (2010). [Review of Bowern, Claire. 2008. Linguistic fieldwork: a practical guide]. Language, 86(2), 435-438. doi:10.1353/lan.0.0214.
  • Terrill, A. (2010). [Review of R. A. Blust The Austronesian languages. 2009. Canberra: Pacific Linguistics]. Oceanic Linguistics, 49(1), 313-316. doi:10.1353/ol.0.0061.

    Abstract

    In lieu of an abstract, here is a preview of the article. This is a marvelous, dense, scholarly, detailed, exhaustive, and ambitious book. In 800-odd pages, it seeks to describe the whole huge majesty of the Austronesian language family, as well as the history of the family, the history of ideas relating to the family, and all the ramifications of such topics. Blust doesn't just describe, he goes into exhaustive detail, and not just over a few topics, but over every topic he covers. This is an incredible achievement, representing a lifetime of experience. This is not a book to be read from cover to cover—it is a book to be dipped into, pondered, and considered, slowly and carefully. The book is not organized by area or subfamily; readers interested in one area or family can consult the authoritative work on Western Austronesian (Adelaar and Himmelmann 2005), or, for the Oceanic languages, Lynch, Ross, and Crowley (2002). Rather, Blust's stated aim "is to provide a comprehensive overview of Austronesian languages which integrates areal interests into a broader perspective" (xxiii). Thus the aim is more ambitious than just discussion of areal features or historical connections, but seeks to describe the interconnections between these. The Austronesian language family is very large, second only in size to Niger-Congo (xxii). It encompasses over 1,000 members, and its protolanguage has been dated back to 6,000 years ago (xxii). The exact groupings of some Austronesian languages are still under discussion, but broadly, the family is divided into ten major subgroups, nine of which are spoken in Taiwan, the homeland of the Austronesian family. The tenth, Malayo-Polynesian, is itself divided into two major groups: Western Malayo-Polynesian, which is spread throughout the Philippines, Indonesia, and mainland Southeast Asia to Madagascar; and Central-Eastern Malayo-Polynesian, spoken from eastern Indonesia throughout the Pacific. 
  • Terrill, A. (2003). Linguistic stratigraphy in the central Solomon Islands: Lexical evidence of early Papuan/Austronesian interaction. Journal of the Polynesian Society, 112(4), 369-401.

    Abstract

    The extent to which linguistic borrowing can be used to shed light on the existence and nature of early contact between Papuan and Oceanic speakers is examined. The question is addressed by taking one Papuan language, Lavukaleve, spoken in the Russell Islands, central Solomon Islands and examining lexical borrowings between it and nearby Oceanic languages, and with reconstructed forms of Proto Oceanic. Evidence from ethnography, culture history and archaeology, when added to the linguistic evidence provided in this study, indicates long-standing cultural links between other (non-Russell) islands. The composite picture is one of a high degree of cultural contact with little linguistic mixing, i.e., little or no changes affecting the structure of the languages and actually very little borrowed vocabulary.
  • Thiebaut de Schotten, M., Friedrich, P., & Forkel, S. J. (2019). One size fits all does not apply to brain lateralisation. Physics of Life Reviews, 30, 30-33. doi:10.1016/j.plrev.2019.07.007.

    Abstract

    Our understanding of the functioning of the brain is primarily based on an average model of the brain's functional organisation, and any deviation from the standard is considered as random noise or a pathological appearance. Studying pathologies has, however, greatly contributed to our understanding of brain functions. For instance, the study of naturally-occurring or surgically-induced brain lesions revealed that language is predominantly lateralised to the left hemisphere while perception/action and emotion are commonly lateralised to the right hemisphere. The lateralisation of function was subsequently replicated by task-related functional neuroimaging in the healthy population. Despite its high significance and reproducibility, this pattern of lateralisation of function is true for most, but not all participants. Bilateral and flipped representations of classically lateralised functions have been reported during development and in the healthy adult population for language, perception/action and emotion. Understanding these different functional representations at an individual level is crucial to improve the sophistication of our models and account for the variance in developmental trajectories, cognitive performance differences and clinical recovery. With the availability of in vivo neuroimaging, it has become feasible to study large numbers of participants and reliably characterise individual differences, also referred to as phenotypes. Yet, we are at the beginning of inter-individual variability modelling, and new theories of brain function will have to account for these differences across participants.
  • Tilot, A. K., Vino, A., Kucera, K. S., Carmichael, D. A., Van den Heuvel, L., Den Hoed, J., Sidoroff-Dorso, A. V., Campbell, A., Porteous, D. J., St Pourcain, B., Van Leeuwen, T. M., Ward, J., Rouw, R., Simner, J., & Fisher, S. E. (2019). Investigating genetic links between grapheme-colour synaesthesia and neuropsychiatric traits. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190026. doi:10.1098/rstb.2019.0026.

    Abstract

    Synaesthesia is a neurological phenomenon affecting perception, where triggering stimuli (e.g. letters and numbers) elicit unusual secondary sensory experiences (e.g. colours). Family-based studies point to a role for genetic factors in the development of this trait. However, the contributions of common genomic variation to synaesthesia have not yet been investigated. Here, we present the SynGenes cohort, the largest genotyped collection of unrelated people with grapheme–colour synaesthesia (n = 723). Synaesthesia has been associated with a range of other neuropsychological traits, including enhanced memory and mental imagery, as well as greater sensory sensitivity. Motivated by the prior literature on putative trait overlaps, we investigated polygenic scores derived from published genome-wide scans of schizophrenia and autism spectrum disorder (ASD), comparing our SynGenes cohort to 2181 non-synaesthetic controls. We found a very slight association between schizophrenia polygenic scores and synaesthesia (Nagelkerke's R2 = 0.0047, empirical p = 0.0027) and no significant association for scores related to ASD (Nagelkerke's R2 = 0.00092, empirical p = 0.54) or body mass index (R2 = 0.00058, empirical p = 0.60), included as a negative control. As sample sizes for studying common genomic variation continue to increase, genetic investigations of the kind reported here may yield novel insights into the shared biology between synaesthesia and other traits, to complement findings from neuropsychology and brain imaging.

  • Toni, I., De Lange, F. P., Noordzij, M. L., & Hagoort, P. (2008). Language beyond action. Journal of Physiology - Paris, 102, 71-79. doi:10.1016/j.jphysparis.2008.03.005.

    Abstract

    The discovery of mirror neurons in macaques and of a similar system in humans has provided a new and fertile neurobiological ground for rooting a variety of cognitive faculties. Automatic sensorimotor resonance has been invoked as the key elementary process accounting for disparate (dys)functions, like imitation, ideomotor apraxia, autism, and schizophrenia. In this paper, we provide a critical appraisal of three of these claims that deal with the relationship between language and the motor system. Does language comprehension require the motor system? Was there an evolutionary switch from manual gestures to speech as the primary mode of language? Is human communication explained by automatic sensorimotor resonances? A positive answer to these questions would open the tantalizing possibility of bringing language and human communication within the fold of the motor system. We argue that the available empirical evidence does not appear to support these claims, and their theoretical scope fails to account for some crucial features of the phenomena they are supposed to explain. Without denying the enormous importance of the discovery of mirror neurons, we highlight the limits of their explanatory power for understanding language and communication.
  • Torreira, F., Adda-Decker, M., & Ernestus, M. (2010). The Nijmegen corpus of casual French. Speech Communication, 52, 201-212. doi:10.1016/j.specom.2009.10.004.

    Abstract

    This article describes the preparation, recording and orthographic transcription of a new speech corpus, the Nijmegen Corpus of Casual French (NCCFr). The corpus contains a total of over 36 h of recordings of 46 French speakers engaged in conversations with friends. Casual speech was elicited during three different parts, which together provided around 90 min of speech from every pair of speakers. While Parts 1 and 2 did not require participants to perform any specific task, in Part 3 participants negotiated a common answer to general questions about society. Comparisons with the ESTER corpus of journalistic speech show that the two corpora contain speech of considerably different registers. A number of indicators of casualness, including swear words, casual words, verlan, disfluencies and word repetitions, are more frequent in the NCCFr than in the ESTER corpus, while the use of double negation, an indicator of formal speech, is less frequent. In general, these estimates of casualness are constant through the three parts of the recording sessions and across speakers. Based on these facts, we conclude that our corpus is a rich resource of highly casual speech, and that it can be effectively exploited by researchers in language science and technology.

  • Tourtouri, E. N., Delogu, F., Sikos, L., & Crocker, M. W. (2019). Rational over-specification in visually-situated comprehension and production. Journal of Cultural Cognitive Science, 3(2), 175-202. doi:10.1007/s41809-019-00032-6.

    Abstract

    Contrary to the Gricean maxims of quantity (Grice, in: Cole, Morgan (eds) Syntax and semantics: speech acts, vol III, pp 41–58, Academic Press, New York, 1975), it has been repeatedly shown that speakers often include redundant information in their utterances (over-specifications). Previous research on referential communication has long debated whether this redundancy is the result of speaker-internal or addressee-oriented processes, while it is also unclear whether referential redundancy hinders or facilitates comprehension. We present an information-theoretic explanation for the use of over-specification in visually-situated communication, which quantifies the amount of uncertainty regarding the referent as entropy (Shannon in Bell Syst Tech J 5:10, https://doi.org/10.1002/j.1538-7305.1948.tb01338.x, 1948). Examining both the comprehension and production of over-specifications, we present evidence that (a) listeners’ processing is facilitated by the use of redundancy as well as by a greater reduction of uncertainty early on in the utterance, and (b) that at least for some speakers, listeners’ processing concerns influence their encoding of over-specifications: Speakers were more likely to use redundant adjectives when these adjectives reduced entropy to a higher degree than adjectives necessary for target identification.
  • Trujillo, J. P., Vaitonyte, J., Simanova, I., & Ozyurek, A. (2019). Toward the markerless and automatic analysis of kinematic features: A toolkit for gesture and movement research. Behavior Research Methods, 51(2), 769-777. doi:10.3758/s13428-018-1086-8.

    Abstract

    Action, gesture, and sign represent unique aspects of human communication that use form and movement to convey meaning. Researchers typically use manual coding of video data to characterize naturalistic, meaningful movements at various levels of description, but the availability of markerless motion-tracking technology allows for quantification of the kinematic features of gestures or any meaningful human movement. We present a novel protocol for extracting a set of kinematic features from movements recorded with Microsoft Kinect. Our protocol captures spatial and temporal features, such as height, velocity, submovements/strokes, and holds. This approach is based on studies of communicative actions and gestures and attempts to capture features that are consistently implicated as important kinematic aspects of communication. We provide open-source code for the protocol, a description of how the features are calculated, a validation of these features as quantified by our protocol versus manual coders, and a discussion of how the protocol can be applied. The protocol effectively quantifies kinematic features that are important in the production (e.g., characterizing different contexts) as well as the comprehension (e.g., used by addressees to understand intent and semantics) of manual acts. The protocol can also be integrated with qualitative analysis, allowing fast and objective demarcation of movement units, providing accurate coding even of complex movements. This can be useful to clinicians, as well as to researchers studying multimodal communication or human–robot interactions. By making this protocol available, we hope to provide a tool that can be applied to understanding meaningful movement characteristics in human communication.
  • Truong, D. T., Adams, A. K., Paniagua, S., Frijters, J. C., Boada, R., Hill, D. E., Lovett, M. W., Mahone, E. M., Willcutt, E. G., Wolf, M., Defries, J. C., Gialluisi, A., Francks, C., Fisher, S. E., Olson, R. K., Pennington, B. F., Smith, S. D., Bosson-Heenan, J., & Gruen, J. R. (2019). Multivariate genome-wide association study of rapid automatised naming and rapid alternating stimulus in Hispanic American and African–American youth. Journal of Medical Genetics, 56(8), 557-566. doi:10.1136/jmedgenet-2018-105874.

    Abstract

    Background Rapid automatised naming (RAN) and rapid alternating stimulus (RAS) are reliable predictors of reading disability. The underlying biology of reading disability is poorly understood. However, the high correlation among RAN, RAS and reading could be attributable to shared genetic factors that contribute to common biological mechanisms.

    Objective To identify shared genetic factors that contribute to RAN and RAS performance using a multivariate approach.

    Methods We conducted a multivariate genome-wide association analysis of RAN Objects, RAN Letters and RAS Letters/Numbers in a sample of 1331 Hispanic American and African–American youth. Follow-up neuroimaging genetic analysis of cortical regions associated with reading ability in an independent sample and epigenetic examination of extant data predicting tissue-specific functionality in the brain were also conducted.

    Results Genome-wide significant effects were observed at rs1555839 (p = 4.03 × 10⁻⁸) and replicated in an independent sample of 318 children of European ancestry. Epigenetic analysis and chromatin state models of the implicated 70 kb region of 10q23.31 support active transcription of the gene RNLS in the brain, which encodes a catecholamine metabolising protein. Chromatin contact maps of adult hippocampal tissue indicate a potential enhancer–promoter interaction regulating RNLS expression. Neuroimaging genetic analysis in an independent, multiethnic sample (n=690) showed that rs1555839 is associated with structural variation in the right inferior parietal lobule.

    Conclusion This study provides support for a novel trait locus at chromosome 10q23.31 and proposes a potential gene–brain–behaviour relationship for targeted future functional analysis to understand underlying biological mechanisms for reading disability.

  • Tsoi, E. Y. L., Yang, W., Chan, A. W. S., & Kidd, E. (2019). Mandarin-English speaking bilingual and Mandarin speaking monolingual children’s comprehension of relative clauses. Applied Psycholinguistics, 40(4), 933-964. doi:10.1017/S0142716419000079.

    Abstract

    The current study investigated the comprehension of subject and object relative clauses (RCs) in bilingual Mandarin-English children (N = 55, Mage = 7;5, SD = 1;8) and language-matched monolingual Mandarin-speaking children (N = 59, Mage = 5;4, SD = 0;7). The children completed a referent selection task that tested their comprehension of subject and object RCs, and standardised assessments of vocabulary knowledge. Results showed a very similar pattern of responding in both groups. In comparison to past studies of Cantonese, the bilingual and monolingual children both showed a significant subject-over-object RC advantage. An error analysis suggested that the children’s difficulty with object RCs reflected the tendency to interpret the sentential subject as the head noun. A subsequent corpus analysis suggested that children’s difficulty with object RCs may be in part due to distributional information favouring subject RC analyses. Individual differences analyses suggested cross-linguistic transfer from English to Mandarin in the bilingual children at the individual but not the group level, with the results indicating that comparative English-dominance makes children vulnerable to error.
  • Tucker, B. V., & Warner, N. (2010). What it means to be phonetic or phonological: The case of Romanian devoiced nasals. Phonology, 27, 289-324. doi:10.1017/S0952675710000138.

    Abstract

    Phonological patterns and detailed phonetic patterns can combine to produce unusual acoustic results, but criteria for what aspects of a pattern are phonetic and what aspects are phonological are often disputed. Early literature on Romanian makes mention of nasal devoicing in word-final clusters (e.g. in /basm/ 'fairy-tale'). Using acoustic, aerodynamic and ultrasound data, the current work investigates how syllable structure, prosodic boundaries, phonetic paradigm uniformity and assimilation influence Romanian nasal devoicing. It provides instrumental phonetic documentation of devoiced nasals, a phenomenon that has not been widely studied experimentally, in a phonetically underdocumented language. We argue that sound patterns should not be separated into phonetics and phonology as two distinct systems, but neither should they all be grouped together as a single, undifferentiated system. Instead, we argue for viewing the distinction between phonetics and phonology as a largely continuous multidimensional space, within which sound patterns, including Romanian nasal devoicing, fall.
  • Uddén, J., Folia, V., Forkstam, C., Ingvar, M., Fernández, G., Overeem, S., Van Elswijk, G., Hagoort, P., & Petersson, K. M. (2008). The inferior frontal cortex in artificial syntax processing: An rTMS study. Brain Research, 1224, 69-78. doi:10.1016/j.brainres.2008.05.070.

    Abstract

    The human capacity to implicitly acquire knowledge of structured sequences has recently been investigated in artificial grammar learning using functional magnetic resonance imaging. It was found that the left inferior frontal cortex (IFC; Brodmann's area (BA) 44/45) was related to classification performance. The objective of this study was to investigate whether the IFC (BA 44/45) is causally related to classification of artificial syntactic structures by means of an off-line repetitive transcranial magnetic stimulation (rTMS) paradigm. We manipulated the stimulus material in a 2 × 2 factorial design with grammaticality status and local substring familiarity as factors. The participants showed a reliable effect of grammaticality on classification of novel items after 5 days of exposure to grammatical exemplars without performance feedback in an implicit acquisition task. The results show that rTMS of BA 44/45 improves syntactic classification performance by increasing the rejection rate of non-grammatical items and by shortening reaction times of correct rejections specifically after left-sided stimulation. A similar pattern of results is observed in FMRI experiments on artificial syntactic classification. These results suggest that activity in the inferior frontal region is causally related to artificial syntax processing.
  • Uddén, J., Folia, V., & Petersson, K. M. (2010). The neuropharmacology of implicit learning. Current Neuropharmacology, 8, 367-381. doi:10.2174/157015910793358178.

    Abstract

    Two decades of pharmacologic research on the human capacity to implicitly acquire knowledge as well as cognitive skills and procedures have yielded surprisingly few conclusive insights. We review the empirical literature of the neuropharmacology of implicit learning. We evaluate the findings in the context of relevant computational models related to neurotransmitters such as dopamine, serotonin, acetylcholine and noradrenalin. These include models for reinforcement learning, sequence production, and categorization. We conclude, based on the reviewed literature, that one can predict improved implicit acquisition by moderately elevated dopamine levels and impaired implicit acquisition by moderately decreased dopamine levels. These effects are most prominent in the dorsal striatum. This is supported by a range of behavioral tasks in the empirical literature. Similar predictions can be made for serotonin, although there is yet a lack of support in the literature for serotonin involvement in classical implicit learning tasks. There is currently a lack of evidence for a role of the noradrenergic and cholinergic systems in implicit and related forms of learning. GABA modulators, including benzodiazepines, seem to affect implicit learning in a complex manner and further research is needed. Finally, we identify allosteric AMPA receptor modulators as a potentially interesting target for future investigation of the neuropharmacology of procedural and implicit learning.
  • Uddén, J., Hultén, A., Bendt, K., Mineroff, Z., Kucera, K. S., Vino, A., Fedorenko, E., Hagoort, P., & Fisher, S. E. (2019). Towards robust functional neuroimaging genetics of cognition. Journal of Neuroscience, 39(44), 8778-8787. doi:10.1523/JNEUROSCI.0888-19.2019.

    Abstract

    A commonly held assumption in cognitive neuroscience is that, because measures of human brain function are closer to underlying biology than distal indices of behavior/cognition, they hold more promise for uncovering genetic pathways. Supporting this view is an influential fMRI-based study of sentence reading/listening by Pinel et al. (2012), who reported that common DNA variants in specific candidate genes were associated with altered neural activation in language-related regions of healthy individuals that carried them. In particular, different single-nucleotide polymorphisms (SNPs) of FOXP2 correlated with variation in task-based activation in left inferior frontal and precentral gyri, whereas a SNP at the KIAA0319/TTRAP/THEM2 locus was associated with variable functional asymmetry of the superior temporal sulcus. Here, we directly test each claim using a closely matched neuroimaging genetics approach in independent cohorts comprising 427 participants, four times larger than the original study of 94 participants. Despite demonstrating power to detect associations with substantially smaller effect sizes than those of the original report, we do not replicate any of the reported associations. Moreover, formal Bayesian analyses reveal substantial to strong evidence in support of the null hypothesis (no effect). We highlight key aspects of the original investigation, common to functional neuroimaging genetics studies, which could have yielded elevated false-positive rates. Genetic accounts of individual differences in cognitive functional neuroimaging are likely to be as complex as behavioral/cognitive tests, involving many common genetic variants, each of tiny effect. Reliable identification of true biological signals requires large sample sizes, power calculations, and validation in independent cohorts with equivalent paradigms.

    SIGNIFICANCE STATEMENT A pervasive idea in neuroscience is that neuroimaging-based measures of brain function, being closer to underlying neurobiology, are more amenable for uncovering links to genetics. This is a core assumption of prominent studies that associate common DNA variants with altered activations in task-based fMRI, despite using samples (10–100 people) that lack power for detecting the tiny effect sizes typical of genetically complex traits. Here, we test central findings from one of the most influential prior studies. Using matching paradigms and substantially larger samples, coupled to power calculations and formal Bayesian statistics, our data strongly refute the original findings. We demonstrate that neuroimaging genetics with task-based fMRI should be subject to the same rigorous standards as studies of other complex traits.
  • Vainio, M., Järvikivi, J., Aalto, D., & Suni, A. (2010). Phonetic tone signals phonological quantity and word structure. Journal of the Acoustical Society of America, 128, 1313-1321. doi:10.1121/1.3467767.

    Abstract

    Many languages exploit suprasegmental devices in signaling word meaning. Tone languages exploit fundamental frequency whereas quantity languages rely on segmental durations to distinguish otherwise similar words. Traditionally, duration and tone have been taken as mutually exclusive. However, some evidence suggests that, in addition to durational cues, phonological quantity is associated with and co-signaled by changes in fundamental frequency in quantity languages such as Finnish, Estonian, and Serbo-Croat. The results from the present experiment show that the structure of disyllabic word stems in Finnish is indeed signaled tonally and that the phonological length of the stressed syllable is further tonally distinguished within the disyllabic sequence. The results further indicate that the observed association of tone and duration in perception is systematically exploited in speech production in Finnish.
  • Van Turennout, M., Bielamowicz, L., & Martin, A. (2003). Modulation of neural activity during object naming: Effects of time and practice. Cerebral Cortex, 13(4), 381-391.

    Abstract

    Repeated exposure to objects improves our ability to identify and name them, even after a long delay. Previous brain imaging studies have demonstrated that this experience-related facilitation of object naming is associated with neural changes in distinct brain regions. We used event-related functional magnetic resonance imaging (fMRI) to examine the modulation of neural activity in the object naming system as a function of experience and time. Pictures of common objects were presented repeatedly for naming at different time intervals (1 h, 6 h and 3 days) before scanning, or at 30 s intervals during scanning. The results revealed that as objects became more familiar with experience, activity in occipitotemporal and left inferior frontal regions decreased while activity in the left insula and basal ganglia increased. In posterior regions, reductions in activity as a result of multiple repetitions did not interact with time, whereas in left inferior frontal cortex larger decreases were observed when repetitions were spaced out over time. This differential modulation of activity in distinct brain regions provides support for the idea that long-lasting object priming is mediated by two neural mechanisms. The first mechanism may involve changes in object-specific representations in occipitotemporal cortices, the second may be a form of procedural learning involving a reorganization in brain circuitry that leads to more efficient name retrieval.
  • Van Berkum, J. J. A., Van den Brink, D., Tesink, C. M. J. Y., Kos, M., & Hagoort, P. (2008). The neural integration of speaker and message. Journal of Cognitive Neuroscience, 20(4), 580-591. doi:10.1162/jocn.2008.20054.

    Abstract

    When do listeners take into account who the speaker is? We asked people to listen to utterances whose content sometimes did not match inferences based on the identity of the speaker (e.g., “If only I looked like Britney Spears” in a male voice, or “I have a large tattoo on my back” spoken with an upper-class accent). Event-related brain responses revealed that the speaker's identity is taken into account as early as 200–300 msec after the beginning of a spoken word, and is processed by the same early interpretation mechanism that constructs sentence meaning based on just the words. This finding is difficult to reconcile with standard “Gricean” models of sentence interpretation in which comprehenders initially compute a local, context-independent meaning for the sentence (“semantics”) before working out what it really means given the wider communicative context and the particular speaker (“pragmatics”). Because the observed brain response hinges on voice-based and usually stereotype-dependent inferences about the speaker, it also shows that listeners rapidly classify speakers on the basis of their voices and bring the associated social stereotypes to bear on what is being said. According to our event-related potential results, language comprehension takes very rapid account of the social context, and the construction of meaning based on language alone cannot be separated from the social aspects of language use. The linguistic brain relates the message to the speaker immediately.
  • Van Berkum, J. J. A. (2008). Understanding sentences in context: What brain waves can tell us. Current Directions in Psychological Science, 17(6), 376-380. doi:10.1111/j.1467-8721.2008.00609.x.

    Abstract

    Language comprehension looks pretty easy. You pick up a novel and simply enjoy the plot, or ponder the human condition. You strike a conversation and listen to whatever the other person has to say. Although what you're taking in is a bunch of letters and sounds, what you really perceive—if all goes well—is meaning. But how do you get from one to the other so easily? The experiments with brain waves (event-related brain potentials or ERPs) reviewed here show that the linguistic brain rapidly draws upon a wide variety of information sources, including prior text and inferences about the speaker. Furthermore, people anticipate what might be said about whom, they use heuristics to arrive at the earliest possible interpretation, and if it makes sense, they sometimes even ignore the grammar. Language comprehension is opportunistic, proactive, and, above all, immediately context-dependent.
  • Van Berkum, J. J. A., Zwitserlood, P., Hagoort, P., & Brown, C. M. (2003). When and how do listeners relate a sentence to the wider discourse? Evidence from the N400 effect. Cognitive Brain Research, 17(3), 701-718. doi:10.1016/S0926-6410(03)00196-4.

    Abstract

    In two ERP experiments, we assessed the impact of discourse-level information on the processing of an unfolding spoken sentence. Subjects listened to sentences like Jane told her brother that he was exceptionally quick/slow, designed such that the alternative critical words were equally acceptable within the local sentence context. In Experiment 1, these sentences were embedded in a discourse that rendered one of the critical words anomalous (e.g. because Jane’s brother had in fact done something very quickly). Relative to the coherent alternative, these discourse-anomalous words elicited a standard N400 effect that started at 150–200 ms after acoustic word onset. Furthermore, when the same sentences were heard in isolation in Experiment 2, the N400 effect disappeared. The results demonstrate that our listeners related the unfolding spoken words to the wider discourse extremely rapidly, after having heard the first two or three phonemes only, and in many cases well before the end of the word. In addition, the identical nature of discourse- and sentence-dependent N400 effects suggests that from the perspective of the word-elicited comprehension process indexed by the N400, the interpretive context delineated by a single unfolding sentence and a larger discourse is functionally identical.
  • van der Burght, C. L., Goucha, T., Friederici, A. D., Kreitewolf, J., & Hartwigsen, G. (2019). Intonation guides sentence processing in the left inferior frontal gyrus. Cortex, 117, 122-134. doi:10.1016/j.cortex.2019.02.011.

    Abstract

    Speech prosody, the variation in sentence melody and rhythm, plays a crucial role in sentence comprehension. Specifically, changes in intonational pitch along a sentence can affect our understanding of who did what to whom. To date, it remains unclear how the brain processes this particular use of intonation and which brain regions are involved. In particular, one central matter of debate concerns the lateralisation of intonation processing. To study the role of intonation in sentence comprehension, we designed a functional MRI experiment in which participants listened to spoken sentences. Critically, the interpretation of these sentences depended on either intonational or grammatical cues. Our results showed stronger functional activity in the left inferior frontal gyrus (IFG) when the intonational cue was crucial for sentence comprehension compared to when it was not. When instead a grammatical cue was crucial for sentence comprehension, we found involvement of an overlapping region in the left IFG, as well as in a posterior temporal region. A further analysis revealed that the lateralisation of intonation processing depends on its role in syntactic processing: activity in the IFG was lateralised to the left hemisphere when intonation was the only source of information to comprehend the sentence. In contrast, activity in the IFG was right-lateralised when intonation did not contribute to sentence comprehension. Together, these results emphasise the key role of the left IFG in sentence comprehension, showing the importance of this region when intonation establishes sentence structure. Furthermore, our results provide evidence for the theory that the lateralisation of prosodic processing is modulated by its linguistic role.
  • Van Leeuwen, T. M., Van Petersen, E., Burghoorn, F., Dingemanse, M., & Van Lier, R. (2019). Autistic traits in synaesthesia: Atypical sensory sensitivity and enhanced perception of details. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190024. doi:10.1098/rstb.2019.0024.

    Abstract

    In synaesthetes, specific sensory stimuli (e.g., black letters) elicit additional experiences (e.g., colour). Synaesthesia is highly prevalent among individuals with autism spectrum disorder, but the mechanisms of this co-occurrence are not clear. We hypothesized that autism and synaesthesia share atypical sensory sensitivity and perception. We assessed autistic traits, sensory sensitivity, and visual perception in two synaesthete populations. In Study 1, synaesthetes (N=79, of different types) scored higher than non-synaesthetes (N=76) on the Attention-to-detail and Social skills subscales of the Autism Spectrum Quotient indexing autistic traits, and on the Glasgow Sensory Questionnaire indexing sensory hypersensitivity and hyposensitivity, which frequently occur in autism. Synaesthetes performed two local/global visual tasks because individuals with autism typically show a bias toward detail processing. In synaesthetes, elevated motion coherence thresholds suggested reduced global motion perception, and higher accuracy on an embedded figures task suggested enhanced local perception. In Study 2, sequence-space synaesthetes (N=18) completed the same tasks. Questionnaire and embedded figures results qualitatively resembled Study 1 results, but no significant group differences with non-synaesthetes (N=20) were obtained. Unexpectedly, sequence-space synaesthetes had reduced motion coherence thresholds. Altogether, our studies suggest atypical sensory sensitivity and a bias towards detail processing are shared features of synaesthesia and autism spectrum disorder.
  • Van Wijk, C., & Kempen, G. (1987). A dual system for producing self-repairs in spontaneous speech: Evidence from experimentally elicited corrections. Cognitive Psychology, 19, 403-440. doi:10.1016/0010-0285(87)90014-4.

    Abstract

    This paper presents a cognitive theory on the production and shaping of self-repairs during speaking. In an extensive experimental study, a new technique is tried out: artificial elicitation of self-repairs. The data clearly indicate that two mechanisms for computing the shape of self-repairs should be distinguished. One is based on the repair strategy called reformulation, the second one on lemma substitution. W. Levelt’s (1983, Cognition, 14, 41–104) well-formedness rule, which connects self-repairs to coordinate structures, is shown to apply only to reformulations. In case of lemma substitution, a totally different set of rules is at work. The linguistic unit of central importance in reformulations is the major syntactic constituent; in lemma substitutions it is a prosodic unit, the phonological phrase. A parametrization of the model yielded a very satisfactory fit between observed and reconstructed scores.
  • Van Paridon, J., Roelofs, A., & Meyer, A. S. (2019). A lexical bottleneck in shadowing and translating of narratives. Language, Cognition and Neuroscience, 34(6), 803-812. doi:10.1080/23273798.2019.1591470.

    Abstract

    In simultaneous interpreting, speech comprehension and production processes have to be coordinated in close temporal proximity. To examine the coordination, Dutch-English bilingual participants were presented with narrative fragments recorded in English at speech rates varying from 100 to 200 words per minute and they were asked to translate the fragments into Dutch (interpreting) or repeat them in English (shadowing). Interpreting yielded more errors than shadowing at every speech rate, and increasing speech rate had a stronger negative effect on interpreting than on shadowing. To understand the differential effect of speech rate, a computational model was created of sub-lexical and lexical processes in comprehension and production. Computer simulations revealed that the empirical findings could be captured by assuming a bottleneck preventing simultaneous lexical selection in production and comprehension. To conclude, our empirical and modelling results suggest the existence of a lexical bottleneck that limits the translation of narratives at high speed.

  • Van den Bos, E., & Poletiek, F. H. (2019). Correction to: Effects of grammar complexity on artificial grammar learning (vol 36, pg 1122, 2008). Memory & Cognition, 47(8), 1619-1620. doi:10.3758/s13421-019-00946-0.
  • Van den Broek, G. S. E., Segers, E., Van Rijn, H., Takashima, A., & Verhoeven, L. (2019). Effects of elaborate feedback during practice tests: Costs and benefits of retrieval prompts. Journal of Experimental Psychology: Applied, 25(4), 588-601. doi:10.1037/xap0000212.

    Abstract

    This study explores the effect of feedback with hints on students’ recall of words. In three classroom experiments, high school students individually practiced vocabulary words through computerized retrieval practice with either standard show-answer feedback (display of answer) or hints feedback after incorrect responses. Hints feedback gave students a second chance to find the correct response using orthographic (Experiment 1), mnemonic (Experiment 2), or cross-language hints (Experiment 3). During practice, hints led to a shift of practice time from further repetitions to longer feedback processing but did not reduce (repeated) errors. There was no effect of feedback on later recall except when the hints from practice were also available on the test, indicating limited transfer of practice with hints to later recall without hints (in Experiments 1 and 2). Overall, hints feedback was not preferable over show-answer feedback. The common notion that hints are beneficial may not hold when the total practice time is limited.
  • Van den Bos, E., & Poletiek, F. H. (2008). Effects of grammar complexity on artificial grammar learning. Memory & Cognition, 36(6), 1122-1131. doi:10.3758/MC.36.6.1122.

    Abstract

    The present study identified two aspects of complexity that have been manipulated in the implicit learning literature and investigated how they affect implicit and explicit learning of artificial grammars. Ten finite state grammars were used to vary complexity. The results indicated that dependency length is more relevant to the complexity of a structure than is the number of associations that have to be learned. Although implicit learning led to better performance on a grammaticality judgment test than did explicit learning, it was negatively affected by increasing complexity: Performance decreased as there was an increase in the number of previous letters that had to be taken into account to determine whether or not the next letter was a grammatical continuation. In particular, the results suggested that implicit learning of higher order dependencies is hampered by the presence of longer dependencies. Knowledge of first-order dependencies was acquired regardless of complexity and learning mode.
  • Van Gijn, R. (2010). [Review of the book Complementation ed. by R. M. W. Dixon, A. Aikhenvald]. Studies in Language, 34(1), 187-194. doi:10.1075/sl.34.1.06van.
  • Van Putten, S. (2010). [Review of the book Focus structures in African languages: The interaction of focus and grammar", edited by Enoch Oladé Aboh, Katharina Hartmann & Malte Zimmermann]. Journal of African Languages and Linguistics, 31(1), 101-104. doi:10.1515/JALL.2010.006.
  • Van Gijn, R., & Hirtzel, V. (2010). [Review of the book The Anthropology of color, ed. by Robert E. MacLaura, Galina V. Paramei and Don Dedrick]. Journal of Linguistic Anthropology, 20(1), 241-245.
  • Van Berkum, J. J. A., Brown, C. M., Hagoort, P., & Zwitserlood, P. (2003). Event-related brain potentials reflect discourse-referential ambiguity in spoken language comprehension. Psychophysiology, 40(2), 235-248. doi:10.1111/1469-8986.00025.

    Abstract

    In two experiments, we explored the use of event-related brain potentials to selectively track the processes that establish reference during spoken language comprehension. Subjects listened to stories in which a particular noun phrase like "the girl" either uniquely referred to a single referent mentioned in the earlier discourse, or ambiguously referred to two equally suitable referents. Referentially ambiguous nouns ("the girl" with two girls introduced in the discourse context) elicited a frontally dominant and sustained negative shift in brain potentials, emerging within 300–400 ms after acoustic noun onset. The early onset of this effect reveals that reference to a discourse entity can be established very rapidly. Its morphology and distribution suggest that at least some of the processing consequences of referential ambiguity may involve an increased demand on memory resources. Furthermore, because this referentially induced ERP effect is very different from that of well-known ERP effects associated with the semantic (N400) and syntactic (e.g., P600/SPS) aspects of language comprehension, it suggests that ERPs can be used to selectively keep track of three major processes involved in the comprehension of an unfolding piece of discourse.
  • Van Gompel, R. P., & Majid, A. (2003). Antecedent frequency effects during the processing of pronouns. Cognition, 90(3), 255-264. doi:10.1016/S0010-0277(03)00161-6.

    Abstract

    An eye-movement reading experiment investigated whether the ease with which pronouns are processed is affected by the lexical frequency of their antecedent. Reading times following pronouns with infrequent antecedents were faster than following pronouns with frequent antecedents. We argue that this is consistent with a saliency account, according to which infrequent antecedents are more salient than frequent antecedents. The results are not predicted by accounts which claim that readers access all or part of the lexical properties of the antecedent during the processing of pronouns.
  • Van Heuven, W. J. B., Schriefers, H., Dijkstra, T., & Hagoort, P. (2008). Language conflict in the bilingual brain. Cerebral Cortex, 18(11), 2706-2716. doi:10.1093/cercor/bhn030.

    Abstract

    The large majority of humankind is more or less fluent in 2 or even more languages. This raises the fundamental question how the language network in the brain is organized such that the correct target language is selected at a particular occasion. Here we present behavioral and functional magnetic resonance imaging data showing that bilingual processing leads to language conflict in the bilingual brain even when the bilinguals’ task only required target language knowledge. This finding demonstrates that the bilingual brain cannot avoid language conflict, because words from the target and nontarget languages become automatically activated during reading. Importantly, stimulus-based language conflict was found in brain regions in the LIPC associated with phonological and semantic processing, whereas response-based language conflict was only found in the pre-supplementary motor area/anterior cingulate cortex when language conflict leads to response conflicts.
  • Van der Linden, M., Van Turennout, M., & Indefrey, P. (2010). Formation of category representations in superior temporal sulcus. Journal of Cognitive Neuroscience, 22, 1270-1282. doi:10.1162/jocn.2009.21270.

    Abstract

    The human brain contains cortical areas specialized in representing object categories. Visual experience is known to change the responses in these category-selective areas of the brain. However, little is known about how category training specifically affects cortical category selectivity. Here, we investigated the experience-dependent formation of object categories using an fMRI adaptation paradigm. Outside the scanner, subjects were trained to categorize artificial bird types into arbitrary categories (jungle birds and desert birds). After training, neuronal populations in the occipito-temporal cortex, such as the fusiform and the lateral occipital gyrus, were highly sensitive to perceptual stimulus differences. This sensitivity was not present for novel birds, indicating experience-related changes in neuronal representations. Neurons in STS showed category selectivity. A release from adaptation in STS was only observed when two birds in a pair crossed the category boundary. This dissociation could not be explained by perceptual similarities because the physical difference between birds from the same side of the category boundary and between birds from opposite sides of the category boundary was equal. Together, the occipito-temporal cortex and the STS have the properties suitable for a system that can both generalize across stimuli and discriminate between them.
  • Van den Bos, E., & Poletiek, F. H. (2008). Intentional artificial grammar learning: When does it work? European Journal of Cognitive Psychology, 20(4), 793-806. doi:10.1080/09541440701554474.

    Abstract

    Actively searching for the rules of an artificial grammar has often been shown to produce no more knowledge than memorising exemplars without knowing that they have been generated by a grammar. The present study investigated whether this ineffectiveness of intentional learning could be overcome by removing dual task demands and providing participants with more specific instructions. The results only showed a positive effect of learning intentionally for participants specifically instructed to find out which letters are allowed to follow each other. These participants were also unaffected by a salient feature. In contrast, for participants who did not know what kind of structure to expect, intentional learning was not more effective than incidental learning and knowledge acquisition was guided by salience.
  • Van Gijn, R. (2010). Middle voice and ideophones, a diachronic connection: The case of Yurakaré. Studies in Language, 34, 273-297. doi:10.1075/sl.34.2.02gij.

    Abstract

    Kemmer (1993) argues that middle voice markers almost always arise diachronically through the semantic extension of a reflexive marker to other semantic uses related to reflexive. In this paper I will argue for an alternative diachronic path that has led to the development of the middle marker in Yurakaré (unclassified, Bolivia): through ideophone-verb constructions. Taking this perspective helps explain a number of synchronic peculiarities of the middle marker in Yurakaré, and it introduces a previously unnoticed channel for middle voice markers to arise.

  • Van Alphen, P. M., & Van Berkum, J. J. A. (2010). Is there pain in champagne? Semantic involvement of words within words during sense-making. Journal of Cognitive Neuroscience, 22, 2618-2626. doi:10.1162/jocn.2009.21336.

    Abstract

    In an ERP experiment, we examined whether listeners, when making sense of spoken utterances, take into account the meaning of spurious words that are embedded in longer words, either at their onsets (e. g., pie in pirate) or at their offsets (e. g., pain in champagne). In the experiment, Dutch listeners heard Dutch words with initial or final embeddings presented in a sentence context that did or did not support the meaning of the embedded word, while equally supporting the longer carrier word. The N400 at the carrier words was modulated by the semantic fit of the embedded words, indicating that listeners briefly relate the meaning of initial-and final-embedded words to the sentential context, even though these words were not intended by the speaker. These findings help us understand the dynamics of initial sense-making and its link to lexical activation. In addition, they shed new light on the role of lexical competition and the debate concerning the lexical activation of final-embedded words.
  • Van Wingen, G. A., Van Broekhoven, F., Verkes, R. J., Petersson, K. M., Bäckström, T., Buitelaar, J. K., & Fernández, G. (2008). Progesterone selectively increases amygdala reactivity in women. Molecular Psychiatry, 13, 325-333. doi:10.1038/sj.mp.4002030.

    Abstract

    The acute neural effects of progesterone are mediated by its neuroactive metabolites allopregnanolone and pregnanolone. These neurosteroids potentiate the inhibitory actions of γ-aminobutyric acid (GABA). Progesterone is known to produce anxiolytic effects in animals, but recent animal studies suggest that pregnanolone increases anxiety after a period of low allopregnanolone concentration. This effect is potentially mediated by the amygdala and related to the negative mood symptoms in humans that are observed during increased allopregnanolone levels. Therefore, we investigated with functional magnetic resonance imaging (MRI) whether a single progesterone administration to healthy young women in their follicular phase modulates the amygdala response to salient, biologically relevant stimuli. The progesterone administration increased the plasma concentrations of progesterone and allopregnanolone to levels that are reached during the luteal phase and early pregnancy. The imaging results show that progesterone selectively increased amygdala reactivity. Furthermore, functional connectivity analyses indicate that progesterone modulated functional coupling of the amygdala with distant brain regions. These results reveal a neural mechanism by which progesterone may mediate adverse effects on anxiety and mood.
  • Van Bergen, G., Flecken, M., & Wu, R. (2019). Rapid target selection of object categories based on verbs: Implications for language-categorization interactions. Psychophysiology, 56(9): e13395. doi:10.1111/psyp.13395.

    Abstract

    Although much is known about how nouns facilitate object categorization, very little is known about how verbs (e.g., posture verbs such as stand or lie) facilitate object categorization. Native Dutch speakers are a unique population to investigate this issue with because the configurational categories distinguished by staan (to stand) and liggen (to lie) are inherent in everyday Dutch language. Using an ERP component (N2pc), four experiments demonstrate that selection of posture verb categories is rapid (between 220–320 ms). The effect was attenuated, though present, when removing the perceptual distinction between categories. A similar attenuated effect was obtained in native English speakers, where the category distinction is less familiar, and when category labels were implicit for native Dutch speakers. Our results are among the first to demonstrate that category search based on verbs can be rapid, although extensive linguistic experience and explicit labels may not be necessary to facilitate categorization in this case.

  • Van Leeuwen, E. J. C., Cronin, K. A., & Haun, D. B. M. (2019). Reply to Farine and Aplin: Chimpanzees choose their association and interaction partners. Proceedings of the National Academy of Sciences of the United States of America, 116(34), 16676-16677. doi:10.1073/pnas.1905745116.

    Abstract

    Farine and Aplin (1) question the validity of our study reporting group-specific social dynamics in chimpanzees (2). As an alternative to our approach, Farine and Aplin advance a “prenetwork permutation” methodology that tests against random assortment (3). We appreciate Farine and Aplin’s interest and applied their suggested approaches to our data. The new analyses revealed highly similar results to those of our initial approach. We further dispel Farine and Aplin’s critique by outlining its incompatibility with our study system, methodology, and analysis. First, when we apply the suggested prenetwork permutation to our proximity dataset, we again find significant population-level differences in association rates, while controlling for population size [as derived from Farine and Aplin’s script (4); original result, P < 0.0001; results including prenetwork permutation, P < 0.0001]. Furthermore, when we …
  • Van Berkum, J. J. A. (2010). The brain is a prediction machine that cares about good and bad - Any implications for neuropragmatics? Italian Journal of Linguistics, 22, 181-208.

    Abstract

    Experimental pragmatics asks how people construct contextualized meaning in communication. So what does it mean for this field to add neuro- as a prefix to its name? After analyzing the options for any subfield of cognitive science, I argue that neuropragmatics can and occasionally should go beyond the instrumental use of EEG or fMRI and beyond mapping classic theoretical distinctions onto Brodmann areas. In particular, if experimental pragmatics ‘goes neuro’, it should take into account that the brain evolved as a control system that helps its bearer negotiate a highly complex, rapidly changing and often not so friendly environment. In this context, the ability to predict current unknowns, and to rapidly tell good from bad, are essential ingredients of processing. Using insights from non-linguistic areas of cognitive neuroscience as well as from EEG research on utterance comprehension, I argue that for a balanced development of experimental pragmatics, these two characteristics of the brain cannot be ignored.
  • Van den Boomen, C., Fahrenfort, J. J., Snijders, T. M., & Kemner, C. (2019). Slow segmentation of faces in Autism Spectrum Disorder. Neuropsychologia, 127, 1-8. doi:10.1016/j.neuropsychologia.2019.02.005.

    Abstract

    Atypical visual segmentation, affecting object perception, might contribute to face processing problems in Autism Spectrum Disorder (ASD). The current study investigated impairments in visual segmentation of faces in ASD. Thirty participants (ASD: 16; Control: 14) viewed texture-defined faces, houses, and homogeneous images, while electroencephalographic and behavioral responses were recorded. The ASD group showed slower face-segmentation related brain activity and longer segmentation reaction times than the control group, but no difference in house-segmentation related activity or behavioral performance. Furthermore, individual differences in face-segmentation but not house-segmentation correlated with score on the Autism Quotient. Segmentation is thus selectively impaired for faces in ASD, and relates to the degree of ASD traits. Face segmentation relates to recurrent connectivity from the fusiform face area (FFA) to the visual cortex. These findings thus suggest that atypical connectivity from the FFA might contribute to delayed face processing in ASD.

  • Van Es, M. W. J., & Schoffelen, J.-M. (2019). Stimulus-induced gamma power predicts the amplitude of the subsequent visual evoked response. NeuroImage, 186, 703-712. doi:10.1016/j.neuroimage.2018.11.029.

    Abstract

    The efficiency of neuronal information transfer in activated brain networks may affect behavioral performance. Gamma-band synchronization has been proposed to be a mechanism that facilitates neuronal processing of behaviorally relevant stimuli. In line with this, it has been shown that strong gamma-band activity in visual cortical areas leads to faster responses to a visual go cue. We investigated whether there are directly observable consequences of trial-by-trial fluctuations in non-invasively observed gamma-band activity on the neuronal response. Specifically, we hypothesized that the amplitude of the visual evoked response to a go cue can be predicted by gamma power in the visual system, in the window preceding the evoked response. Thirty-three human subjects (22 female) performed a visual speeded response task while their magnetoencephalogram (MEG) was recorded. The participants had to respond to a pattern reversal of a concentric moving grating. We estimated single trial stimulus-induced visual cortical gamma power, and correlated this with the estimated single trial amplitude of the most prominent event-related field (ERF) peak within the first 100 ms after the pattern reversal. In parieto-occipital cortical areas, the amplitude of the ERF correlated positively with gamma power, and correlated negatively with reaction times. No effects were observed for the alpha and beta frequency bands, despite clear stimulus onset induced modulation at those frequencies. These results support a mechanistic model, in which gamma-band synchronization enhances the neuronal gain to relevant visual input, thus leading to more efficient downstream processing and to faster responses.
  • Van den Bos, E., & Poletiek, F. H. (2010). Structural selection in implicit learning of artificial grammars. Psychological Research-Psychologische Forschung, 74(2), 138-151. doi:10.1007/s00426-009-0227-1.

    Abstract

    In the contextual cueing paradigm, Endo and Takeda (in Percept Psychophys 66:293–302, 2004) provided evidence that implicit learning involves selection of the aspect of a structure that is most useful to one’s task. The present study attempted to replicate this finding in artificial grammar learning to investigate whether or not implicit learning commonly involves such a selection. Participants in Experiment 1 were presented with an induction task that could be facilitated by several characteristics of the exemplars. For some participants, those characteristics included a perfectly predictive feature. The results suggested that the aspect of the structure that was most useful to the induction task was selected and learned implicitly. Experiment 2 provided evidence that, although salience affected participants’ awareness of the perfectly predictive feature, selection for implicit learning was mainly based on usefulness.

  • Van Goch, M. M., Verhoeven, L., & McQueen, J. M. (2019). Success in learning similar-sounding words predicts vocabulary depth above and beyond vocabulary breadth. Journal of Child Language, 46(1), 184-197. doi:10.1017/S0305000918000338.

    Abstract

    In lexical development, the specificity of phonological representations is important. The ability to build phonologically specific lexical representations predicts the number of words a child knows (vocabulary breadth), but it is not clear if it also fosters how well words are known (vocabulary depth). Sixty-six children were studied in kindergarten (age 5;7) and first grade (age 6;8). The predictive value of the ability to learn phonologically similar new words, phoneme discrimination ability, and phonological awareness on vocabulary breadth and depth were assessed using hierarchical regression. Word learning explained unique variance in kindergarten and first-grade vocabulary depth, over the other phonological factors. It did not explain unique variance in vocabulary breadth. Furthermore, even after controlling for kindergarten vocabulary breadth, kindergarten word learning still explained unique variance in first-grade vocabulary depth. Skill in learning phonologically similar words appears to predict knowledge children have about what words mean.
  • Van Leeuwen, T. M., Petersson, K. M., & Hagoort, P. (2010). Synaesthetic colour in the brain: Beyond colour areas. A functional magnetic resonance imaging study of synaesthetes and matched controls. PLoS One, 5(8), E12074. doi:10.1371/journal.pone.0012074.

    Abstract

    Background: In synaesthesia, sensations in a particular modality cause additional experiences in a second, unstimulated modality (e.g., letters elicit colour). Understanding how synaesthesia is mediated in the brain can help to understand normal processes of perceptual awareness and multisensory integration. In several neuroimaging studies, enhanced brain activity for grapheme-colour synaesthesia has been found in ventral-occipital areas that are also involved in real colour processing. Our question was whether the neural correlates of synaesthetically induced colour and real colour experience are truly shared. Methodology/Principal Findings: First, in a free viewing functional magnetic resonance imaging (fMRI) experiment, we located main effects of synaesthesia in left superior parietal lobule and in colour related areas. In the left superior parietal lobe, individual differences between synaesthetes (projector-associator distinction) also influenced brain activity, confirming the importance of the left superior parietal lobe for synaesthesia. Next, we applied a repetition suppression paradigm in fMRI, in which a decrease in the BOLD (blood-oxygenated-level-dependent) response is generally observed for repeated stimuli. We hypothesized that synaesthetically induced colours would lead to a reduction in BOLD response for subsequently presented real colours, if the neural correlates were overlapping. We did find BOLD suppression effects induced by synaesthesia, but not within the colour areas. Conclusions/Significance: Because synaesthetically induced colours were not able to suppress BOLD effects for real colour, we conclude that the neural correlates of synaesthetic colour experience and real colour experience are not fully shared. We propose that synaesthetic colour experiences are mediated by higher-order visual pathways that lie beyond the scope of classical, ventral-occipital visual areas. Feedback from these areas, in which the left parietal cortex is likely to play an important role, may induce V4 activation and the percept of synaesthetic colour.
  • Van Berkum, J. J. A., Hagoort, P., & Brown, C. M. (2000). The use of referential context and grammatical gender in parsing: A reply to Brysbaert and Mitchell. Journal of Psycholinguistic Research, 29(5), 467-481. doi:10.1023/A:1005168025226.

    Abstract

    Based on the results of an event-related brain potentials (ERP) experiment (van Berkum, Brown, & Hagoort. 1999a, b), we have recently argued that discourse-level referential context can be taken into account extremely rapidly by the parser. Moreover, our ERP results indicated that local grammatical gender information, although available within a few hundred milliseconds from word onset, is not always used quickly enough to prevent the parser from considering a discourse-supported, but agreement-violating, syntactic analysis. In a comment on our work, Brysbaert and Mitchell (2000) have raised concerns about the methodology of our ERP experiment and have challenged our interpretation of the results. In this reply, we argue that these concerns are unwarranted and, that, in contrast to our own interpretation, the alternative explanations provided by Brysbaert and Mitchell do not account for the full pattern of ERP results.
  • Van Gijn, R., Hirtzel, V., & Gipper, S. (2010). Updating and loss of color terminology in Yurakaré: An interdisciplinary point of view. Language & Communication, 30(4), 240-264. doi:10.1016/j.langcom.2010.02.002.

    Abstract

    In spite of the well-established idea that language contact is fundamental for explaining language change, this aspect has been remarkably absent in most studies of color term evolution. This paper discusses the changes in the color system of Yurakaré (unclassified, Bolivia) that have occurred during the last 200 years, as a result of intensive contact with Spanish language and culture. Developing the new theoretical concept of ‘updating’, we will show that different contexts have resulted in qualitatively different changes to the color system of the language.
  • Van Herpt, C., Van der Meulen, M., & Redl, T. (2019). Voorbeeldzinnen kunnen het goede voorbeeld geven [Example sentences can set a good example]. Levende Talen Magazine, 106(4), 18-21.
  • Varma, S., Takashima, A., Fu, L., & Kessels, R. P. C. (2019). Mindwandering propensity modulates episodic memory consolidation. Aging Clinical and Experimental Research, 31(11), 1601-1607. doi:10.1007/s40520-019-01251-1.

    Abstract

    Research into strategies that can combat episodic memory decline in healthy older adults has gained widespread attention over the years. Evidence suggests that a short period of rest immediately after learning can enhance memory consolidation, as compared to engaging in cognitive tasks. However, a recent study in younger adults has shown that post-encoding engagement in a working memory task leads to the same degree of memory consolidation as from post-encoding rest. Here, we tested whether this finding can be extended to older adults. Using a delayed recognition test, we compared the memory consolidation of word–picture pairs learned prior to 9 min of rest or a 2-Back working memory task, and examined its relationship with executive functioning and mindwandering propensity. Our results show that (1) similar to younger adults, memory for the word–picture associations did not differ when encoding was followed by post-encoding rest or 2-Back task and (2) older adults with higher mindwandering propensity retained more word–picture associations encoded prior to rest relative to those encoded prior to the 2-Back task, whereas participants with lower mindwandering propensity had better memory performance for the pairs encoded prior to the 2-Back task. Overall, our results indicate that the degree of episodic memory consolidation during both active and passive post-encoding periods depends on individual mindwandering tendency.

  • Veenstra, A., Berends, S., & Van Hout, A. (2010). Acquisition of object and quantitative pronouns in Dutch: Kinderen wassen 'hem' voordat ze 'er' twee meenemen. Groninger Arbeiten zur Germanistischen Linguistik, 51, 9-25.

    Abstract

    1. Introduction Despite a large literature on Dutch children’s pronoun interpretation, relatively little is known about their production. In this study we elicited pronouns in two syntactic environments: object pronouns and quantitative er (Q-er). The goal was to see how different types of pronouns develop, in particular, whether acquisition depends on their different syntactic properties. Our Dutch data add another type of language to the acquisition literature on object clitics in the Romance languages. Moreover, we present another angle on this discussion by comparing object pronouns and Q-er.
  • Verdonschot, R. G., Tokimoto, S., & Miyaoka, Y. (2019). The fundamental phonological unit of Japanese word production: An EEG study using the picture-word interference paradigm. Journal of Neurolinguistics, 51, 184-193. doi:10.1016/j.jneuroling.2019.02.004.

    Abstract

    It has been shown that in Germanic languages (e.g. English, Dutch) phonemes are the primary (or proximate) planning units during the early stages of phonological encoding. Contrastingly, in Chinese and Japanese the phoneme does not seem to play an important role but rather the syllable (Chinese) and mora (Japanese) are essential. However, despite the lack of behavioral evidence, neurocorrelational studies in Chinese suggested that electrophysiological brain responses (i.e. preceding overt responses) may indicate some significance for the phoneme. We investigated this matter in Japanese and our data shows that unlike in Chinese (for which the literature shows mixed effects), in Japanese both the behavioral and neurocorrelational data indicate an important role only for the mora (and not the phoneme) during the early stages of phonological encoding.
  • Verdonschot, R. G., La Heij, W., & Schiller, N. O. (2010). Semantic context effects when naming Japanese kanji, but not Chinese hànzì. Cognition, 115(3), 512-518. doi:10.1016/j.cognition.2010.03.005.

    Abstract

    The process of reading aloud bare nouns in alphabetic languages is immune to semantic context effects from pictures. This is accounted for by assuming that words in alphabetic languages can be read aloud relatively fast through a sub-lexical grapheme-phoneme conversion (GPC) route or by a direct route from orthography to word form. We examined semantic context effects in a word-naming task in two languages with logographic scripts for which GPC cannot be applied: Japanese kanji and Chinese hanzi. We showed that reading aloud bare nouns is sensitive to semantically related context pictures in Japanese, but not in Chinese. The difference between these two languages is attributed to processing costs caused by multiple pronunciations for Japanese kanji.
  • Verga, L., & Kotz, S. A. (2019). Putting language back into ecological communication contexts. Language, Cognition and Neuroscience, 34(4), 536-544. doi:10.1080/23273798.2018.1506886.

    Abstract

    Language is a multi-faceted form of communication. It is not until recently though that language research moved on from simple stimuli and protocols toward a more ecologically valid approach, namely “shifting” from words and simple sentences to stories with varying degrees of contextual complexity. While much needed, the use of ecologically valid stimuli such as stories should also be explored in interactive rather than individualistic experimental settings leading the way to an interactive neuroscience of language. Indeed, mounting evidence suggests that cognitive processes and their underlying neural activity significantly differ between social and individual experiences. We aim at reviewing evidence, which indicates that the characteristics of linguistic and extra-linguistic contexts may significantly influence communication–including spoken language comprehension. In doing so, we provide evidence on the use of new paradigms and methodological advancements that may enable the study of complex language features in a truly interactive, ecological way.
  • Verga, L., & Kotz, S. A. (2019). Spatial attention underpins social word learning in the right fronto-parietal network. NeuroImage, 195, 165-173. doi:10.1016/j.neuroimage.2019.03.071.

    Abstract

    In a multi- and inter-cultural world, we daily encounter new words. Adult learners often rely on a situational context to learn and understand a new word's meaning. Here, we explored whether interactive learning facilitates word learning by directing the learner's attention to a correct new word referent when a situational context is non-informative. We predicted larger involvement of inferior parietal, frontal, and visual cortices involved in visuo-spatial attention during interactive learning. We scanned participants while they played a visual word learning game with and without a social partner. As hypothesized, interactive learning enhanced activity in the right Supramarginal Gyrus when the situational context provided little information. Activity in the right Inferior Frontal Gyrus during interactive learning correlated with post-scanning behavioral test scores, while these scores correlated with activity in the Fusiform Gyrus in the non-interactive group. These results indicate that attention is involved in interactive learning when the situational context is minimal and suggest that individual learning processes may be largely different from interactive ones. As such, they challenge the ecological validity of what we know about individual learning and advocate the exploration of interactive learning in naturalistic settings.
  • Verhoef, E., Demontis, D., Burgess, S., Shapland, C. Y., Dale, P. S., Okbay, A., Neale, B. M., Faraone, S. V., iPSYCH-Broad-PGC ADHD Consortium, Stergiakouli, E., Davey Smith, G., Fisher, S. E., Børglum, A., & St Pourcain, B. (2019). Disentangling polygenic associations between Attention-Deficit/Hyperactivity Disorder, educational attainment, literacy and language. Translational Psychiatry, 9: 35. doi:10.1038/s41398-018-0324-2.

    Abstract

    Interpreting polygenic overlap between ADHD and both literacy-related and language-related impairments is challenging as genetic associations might be influenced by indirectly shared genetic factors. Here, we investigate genetic overlap between polygenic ADHD risk and multiple literacy-related and/or language-related abilities (LRAs), as assessed in UK children (N ≤ 5919), accounting for genetically predictable educational attainment (EA). Genome-wide summary statistics on clinical ADHD and years of schooling were obtained from large consortia (N ≤ 326,041). Our findings show that ADHD-polygenic scores (ADHD-PGS) were inversely associated with LRAs in ALSPAC, most consistently with reading-related abilities, and explained ≤1.6% phenotypic variation. These polygenic links were then dissected into both ADHD effects shared with and independent of EA, using multivariable regressions (MVR). Conditional on EA, polygenic ADHD risk remained associated with multiple reading and/or spelling abilities, phonemic awareness and verbal intelligence, but not listening comprehension and non-word repetition. Using conservative ADHD-instruments (P-threshold < 5 × 10−8), this corresponded, for example, to a 0.35 SD decrease in pooled reading performance per log-odds in ADHD-liability (P = 9.2 × 10−5). Using subthreshold ADHD-instruments (P-threshold < 0.0015), these effects became smaller, with a 0.03 SD decrease per log-odds in ADHD risk (P = 1.4 × 10−6), although the predictive accuracy increased. However, polygenic ADHD-effects shared with EA were of equal strength and at least equal magnitude compared to those independent of EA, for all LRAs studied, and detectable using subthreshold instruments. Thus, ADHD-related polygenic links with LRAs are to a large extent due to shared genetic effects with EA, although there is evidence for an ADHD-specific association profile, independent of EA, that primarily involves literacy-related impairments.

  • Verhoeven, L., Schreuder, R., & Baayen, R. H. (2003). Units of analysis in reading Dutch bisyllabic pseudowords. Scientific Studies of Reading, 7(3), 255-271. doi:10.1207/S1532799XSSR0703_4.

    Abstract

    Two experiments were carried out to explore the units of analysis used by children to read Dutch bisyllabic pseudowords. Although Dutch orthography is highly regular, several deviations from a one-to-one correspondence occur. In polysyllabic words, the grapheme e may represent three different vowels: /∊/, /e/, or /λ/. In Experiment 1, Grade 6 elementary school children were presented lists of bisyllabic pseudowords containing the grapheme e in the initial syllable representing a content morpheme, a prefix, or a random string. On the basis of general word frequency data, we expected the interpretation of the initial syllable as a random string to elicit the pronunciation of a stressed /e/, the interpretation of the initial syllable as a content morpheme to elicit the pronunciation of a stressed /∊/, and the interpretation as a prefix to elicit the pronunciation of an unstressed /λ/. We found both the pronunciation and the stress assignment for pseudowords to depend on word type, which shows morpheme boundaries and prefixes to be identified. However, the identification of prefixes could also be explained by the correspondence of the prefix boundaries in the pseudowords to syllable boundaries. To exclude this alternative explanation, a follow-up experiment with the same group of children was conducted using bisyllabic pseudowords containing prefixes that did not coincide with syllable boundaries versus similar pseudowords with no prefix. The results of the first experiment were replicated. That is, the children identified prefixes and shifted their assignment of word stress accordingly. The results are discussed with reference to a parallel dual-route model of word decoding.
  • Vernes, S. C., Newbury, D. F., Abrahams, B. S., Winchester, L., Nicod, J., Groszer, M., Alarcón, M., Oliver, P. L., Davies, K. E., Geschwind, D. H., Monaco, A. P., & Fisher, S. E. (2008). A functional genetic link between distinct developmental language disorders. New England Journal of Medicine, 359(22), 2337 -2345. doi:10.1056/NEJMoa0802828.

    Abstract

    BACKGROUND: Rare mutations affecting the FOXP2 transcription factor cause a monogenic speech and language disorder. We hypothesized that neural pathways downstream of FOXP2 influence more common phenotypes, such as specific language impairment. METHODS: We performed genomic screening for regions bound by FOXP2 using chromatin immunoprecipitation, which led us to focus on one particular gene that was a strong candidate for involvement in language impairments. We then tested for associations between single-nucleotide polymorphisms (SNPs) in this gene and language deficits in a well-characterized set of 184 families affected with specific language impairment. RESULTS: We found that FOXP2 binds to and dramatically down-regulates CNTNAP2, a gene that encodes a neurexin and is expressed in the developing human cortex. On analyzing CNTNAP2 polymorphisms in children with typical specific language impairment, we detected significant quantitative associations with nonsense-word repetition, a heritable behavioral marker of this disorder (peak association, P=5.0x10(-5) at SNP rs17236239). Intriguingly, this region coincides with one associated with language delays in children with autism. CONCLUSIONS: The FOXP2-CNTNAP2 pathway provides a mechanistic link between clinically distinct syndromes involving disrupted language.

    Additional information

    nejm_vernes_2337sa1.pdf
  • Veroude, K., Norris, D. G., Shumskaya, E., Gullberg, M., & Indefrey, P. (2010). Functional connectivity between brain regions involved in learning words of a new language. Brain and Language, 113, 21-27. doi:10.1016/j.bandl.2009.12.005.

    Abstract

    Previous studies have identified several brain regions that appear to be involved in the acquisition of novel word forms. Standard word-by-word presentation is often used although exposure to a new language normally occurs in a natural, real world situation. In the current experiment we investigated naturalistic language exposure and applied a model-free analysis for hemodynamic-response data. Functional connectivity, temporal correlations between hemodynamic activity of different areas, was assessed during rest before and after presentation of a movie of a weather report in Mandarin Chinese to Dutch participants. We hypothesized that learning of novel words might be associated with stronger functional connectivity of regions that are involved in phonological processing. Participants were divided into two groups, learners and non-learners, based on the scores on a post hoc word recognition task. The learners were able to recognize Chinese target words from the weather report, while the non-learners were not. In the first resting state period, before presentation of the movie, stronger functional connectivity was observed for the learners compared to the non-learners between the left supplementary motor area and the left precentral gyrus as well as the left insula and the left rolandic operculum, regions that are important for phonological rehearsal. After exposure to the weather report, functional connectivity between the left and right supramarginal gyrus was stronger for learners than for non-learners. This is consistent with a role of the left supramarginal gyrus in the storage of phonological forms. These results suggest both pre-existing and learning-induced differences between the two groups.
  • Versace, E., Rogge, J. R., Shelton-May, N., & Ravignani, A. (2019). Positional encoding in cotton-top tamarins (Saguinus oedipus). Animal Cognition, 22, 825-838. doi:10.1007/s10071-019-01277-y.

    Abstract

    Strategies used in artificial grammar learning can shed light on the abilities of different species to extract regularities from the environment. In the A(X)nB rule, A and B items are linked, but assigned to different positional categories and separated by distractor items. Open questions are how widespread the ability to extract positional regularities from A(X)nB patterns is, which strategies are used to encode positional regularities, and whether individuals exhibit preferences for absolute or relative position encoding. We used visual arrays to investigate whether cotton-top tamarins (Saguinus oedipus) can learn this rule and which strategies they use. After training on a subset of exemplars, two of the tested monkeys successfully generalized to novel combinations. These tamarins discriminated between categories of tokens with different properties (A, B, X) and detected a positional relationship between non-adjacent items even in the presence of novel distractors. The pattern of errors revealed that successful subjects used visual similarity with training stimuli to solve the task and that successful tamarins extracted the relative position of As and Bs rather than their absolute position, similarly to what has been observed in other species. Relative position encoding appears to be favoured in different tasks and taxa. Generalization, though, was incomplete, since we observed a failure with items that during training had always been presented in reinforced arrays, showing the limitations in grasping the underlying positional rule. These results suggest the use of local strategies in the extraction of positional rules in cotton-top tamarins.

    Additional information

    Supplementary file
  • Verspeek, J., Staes, N., Van Leeuwen, E. J. C., Eens, M., & Stevens, J. M. G. (2019). Bonobo personality predicts friendship. Scientific Reports, 9: 19245. doi:10.1038/s41598-019-55884-3.

    Abstract

    In bonobos, strong bonds have been documented between unrelated females and between mothers and their adult sons, which can have important fitness benefits. Often age, sex or kinship similarity have been used to explain social bond strength variation. Recent studies in other species also stress the importance of personality, but this relationship remains to be investigated in bonobos. We used behavioral observations on 39 adult and adolescent bonobos housed in 5 European zoos to study the role of personality similarity in dyadic relationship quality. Dimension reduction analyses on individual and dyadic behavioral scores revealed multidimensional personality (Sociability, Openness, Boldness, Activity) and relationship quality components (value, compatibility). We show that, aside from relatedness and sex combination of the dyad, relationship quality is also associated with personality similarity of both partners. While similarity in Sociability resulted in higher relationship values, lower relationship compatibility was found between bonobos with similar Activity scores. The results of this study expand our understanding of the mechanisms underlying social bond formation in anthropoid apes. In addition, we suggest that future studies in closely related species like chimpanzees should implement identical methods for assessing bond strength to shed further light on the evolution of this phenomenon.

    Additional information

    Supplementary material
  • Viaro, M., Bercelli, F., & Rossano, F. (2008). Una relazione terapeutica: Il terapeuta allenatore. Connessioni: Rivista di consulenza e ricerca sui sistemi umani, 20, 95-105.
  • von Spiczak, S., Muhle, H., Helbig, I., De Kovel, C. G. F., Hampe, J., Gaus, V., Koeleman, B. P. C., Lindhout, D., Schreiber, S., Sander, T., & Stephani, U. (2010). Association Study of TRPC4 as a Candidate Gene for Generalized Epilepsy with Photosensitivity. Neuromolecular Medicine, 12(3), 292-299. doi:10.1007/s12017-010-8122-x.

    Abstract

    Photoparoxysmal response (PPR) is characterized by abnormal visual sensitivity of the brain to photic stimulation. Frequently associated with idiopathic generalized epilepsies (IGEs), it might be an endophenotype for cortical excitability. Transient receptor potential cation (TRPC) channels are involved in the generation of epileptiform discharges, and TRPC4 constitutes the main TRPC channel in the central nervous system. The present study investigated an association of PPR with sequence variations of the TRPC4 gene. Thirty-five single nucleotide polymorphisms (SNP) within TRPC4 were genotyped in 273 PPR probands and 599 population controls. Association analyses were performed for the broad PPR endophenotype (PPR types I-IV; n = 273), a narrow model of affectedness (PPR types III and IV; n = 214) and PPR associated with IGE (PPR/IGE; n = 106) for each SNP and for corresponding haplotypes. Association was found between the intron 5 SNP rs10507456 and PPR/IGE both for single markers (P = 0.005) and haplotype level (P = 0.01). Three additional SNPs (rs1535775, rs10161932 and rs7338118) within the same haplotype block were associated with PPR/IGE at P < 0.05 (uncorrected) as well as two more markers (rs10507457, rs7329459) located in intron 3. Again, the corresponding haplotype also showed association with PPR/IGE. Results were not significant following correction for multiple comparisons by permutation analysis for single markers and Bonferroni-Holm for haplotypes. No association was found between variants in TRPC4 and other phenotypes. Our results showed a trend toward association of TRPC4 variants and PPR/IGE. Further studies including larger samples of photosensitive probands are required to clarify the relevance of TRPC4 for PPR and IGE.
  • Vonk, W., Hustinx, L. G., & Simons, W. H. (1992). The use of referential expressions in structuring discourse. Language and Cognitive Processes, 301-333. doi:10.1080/01690969208409389.

    Abstract

    Referential expressions that refer to entities that occur in a text differ in lexical specificity. It is claimed that if these anaphoric expressions are more specific than necessary for their identificational function, they not only relate the current information to the intended referent, but also contribute to the expression of the thematic structure of the discourse and to the comprehension of the thematic structure. In two controlled production experiments, it is demonstrated that thematic shifts are produced when one has to make use of such an overspecified expression, and that overspecified referential expressions are produced when one has to formulate a thematic shift. In two comprehension experiments, using a probe recognition technique, it is shown that an overspecified referential expression decreases the availability of information contained in a sentence that precedes the overspecification. This finding is interpreted in terms of the thematic structuring function of referential expressions in the understanding of discourse.
  • De Vos, J., Schriefers, H., Bosch, L. t., & Lemhöfer, K. (2019). Interactive L2 vocabulary acquisition in a lab-based immersion setting. Language, Cognition and Neuroscience, 34(7), 916-935. doi:10.1080/23273798.2019.1599127.

    Abstract

    We investigated to what extent L2 word learning in spoken interaction takes place when learners are unaware of taking part in a language learning study. Using a novel paradigm for approximating naturalistic (but not necessarily non-intentional) L2 learning in the lab, German learners of Dutch were led to believe that the study concerned judging the price of objects. Dutch target words (object names) were selected individually such that these words were unknown to the respective participant. Then, in a dialogue-like task with the experimenter, the participants were first exposed to and then tested on the target words. In comparison to a no-input control group, we observed a clear learning effect especially from the first two exposures, and better learning for cognates than for non-cognates, but no modulating effect of the exposure-production lag. Moreover, some of the acquired knowledge persisted over a six-month period.
  • De Vos, C. (2008). Janger Kolok: de Balinese dovendans. Woord en Gebaar, 12-13.
  • Vosse, T., & Kempen, G. (2000). Syntactic structure assembly in human parsing: A computational model based on competitive inhibition and a lexicalist grammar. Cognition, 75, 105-143.

    Abstract

    We present the design, implementation and simulation results of a psycholinguistic model of human syntactic processing that meets major empirical criteria. The parser operates in conjunction with a lexicalist grammar and is driven by syntactic information associated with heads of phrases. The dynamics of the model are based on competition by lateral inhibition ('competitive inhibition'). Input words activate lexical frames (i.e. elementary trees anchored to input words) in the mental lexicon, and a network of candidate 'unification links' is set up between frame nodes. These links represent tentative attachments that are graded rather than all-or-none. Candidate links that, due to grammatical or 'treehood' constraints, are incompatible, compete for inclusion in the final syntactic tree by sending each other inhibitory signals that reduce the competitor's attachment strength. The outcome of these local and simultaneous competitions is controlled by dynamic parameters, in particular by the Entry Activation and the Activation Decay rate of syntactic nodes, and by the Strength and Strength Build-up rate of Unification links. In case of a successful parse, a single syntactic tree is returned that covers the whole input string and consists of lexical frames connected by winning Unification links.
    Simulations are reported of a significant range of psycholinguistic parsing phenomena in both normal and aphasic speakers of English: (i) various effects of linguistic complexity (single versus double, center versus right-hand self-embeddings of relative clauses; the difference between relative clauses with subject and object extraction; the contrast between a complement clause embedded within a relative clause versus a relative clause embedded within a complement clause); (ii) effects of local and global ambiguity, and of word-class and syntactic ambiguity (including recency and length effects); (iii) certain difficulty-of-reanalysis effects (contrasts between local ambiguities that are easy to resolve versus ones that lead to serious garden-path effects); (iv) effects of agrammatism on parsing performance, in particular the performance of various groups of aphasic patients on several sentence types.
  • De Vries, M., Barth, A. C. R., Maiworm, S., Knecht, S., Zwitserlood, P., & Flöel, A. (2010). Electrical stimulation of Broca’s area enhances implicit learning of an artificial grammar. Journal of Cognitive Neuroscience, 22, 2427-2436. doi:10.1162/jocn.2009.21385.

    Abstract

    Artificial grammar learning constitutes a well-established model for the acquisition of grammatical knowledge in a natural setting. Previous neuroimaging studies demonstrated that Broca's area (left BA 44/45) is similarly activated by natural syntactic processing and artificial grammar learning. The current study was conducted to investigate the causal relationship between Broca's area and learning of an artificial grammar by means of transcranial direct current stimulation (tDCS). Thirty-eight healthy subjects participated in a between-subject design, with either anodal tDCS (20 min, 1 mA) or sham stimulation, over Broca's area during the acquisition of an artificial grammar. Performance during the acquisition phase, presented as a working memory task, was comparable between groups. In the subsequent classification task, detecting syntactic violations, and specifically, those where no cues to superficial similarity were available, improved significantly after anodal tDCS, resulting in an overall better performance. A control experiment where 10 subjects received anodal tDCS over an area unrelated to artificial grammar learning further supported the specificity of these effects to Broca's area. We conclude that Broca's area is specifically involved in rule-based knowledge, and here, in an improved ability to detect syntactic violations. The results cannot be explained by better tDCS-induced working memory performance during the acquisition phase. This is the first study that demonstrates that tDCS may facilitate acquisition of grammatical knowledge, a finding of potential interest for rehabilitation of aphasia.
  • De Vries, M., Ulte, C., Zwitserlood, P., Szymanski, B., & Knecht, S. (2010). Increasing dopamine levels in the brain improves feedback-based procedural learning in healthy participants: An artificial-grammar-learning experiment. Neuropsychologia, 48, 3193-3197. doi:10.1016/j.neuropsychologia.2010.06.024.

    Abstract

    Recently, an increasing number of studies have suggested a role for the basal ganglia and related dopamine inputs in procedural learning, specifically when learning occurs through trial-by-trial feedback (Shohamy, Myers, Kalanithi, & Gluck. (2008). Basal ganglia and dopamine contributions to probabilistic category learning. Neuroscience and Biobehavioral Reviews, 32, 219–236). A necessary relationship has however only been demonstrated in patient studies. In the present study, we show for the first time that increasing dopamine levels in the brain improves the gradual acquisition of complex information in healthy participants. We implemented two artificial-grammar-learning tasks, one with and one without performance feedback. Learning was improved after levodopa intake for the feedback-based learning task only, suggesting that dopamine plays a specific role in trial-by-trial feedback-based learning. This provides promising directions for future studies on dopaminergic modulation of cognitive functioning.
  • Wagner, A., & Ernestus, M. (2008). Identification of phonemes: Differences between phoneme classes and the effect of class size. Phonetica, 65(1-2), 106-127. doi:10.1159/000132389.

    Abstract

    This study reports general and language-specific patterns in phoneme identification. In a series of phoneme monitoring experiments, Castilian Spanish, Catalan, Dutch, English, and Polish listeners identified vowel, fricative, and stop consonant targets that are phonemic in all these languages, embedded in nonsense words. Fricatives were generally identified more slowly than vowels, while the speed of identification for stop consonants was highly dependent on the onset of the measurements. Moreover, listeners' response latencies and accuracy in detecting a phoneme correlated with the number of categories within that phoneme's class in the listener's native phoneme repertoire: more native categories slowed listeners down and decreased their accuracy. We excluded the possibility that this effect stems from differences in the frequencies of occurrence of the phonemes in the different languages. Rather, the effect of the number of categories can be explained by general properties of the perception system, which cause language-specific patterns in speech processing.
  • Waller, D., & Haun, D. B. M. (2003). Scaling techniques for modeling directional knowledge. Behavior Research Methods, Instruments, & Computers, 35(2), 285-293.

    Abstract

    A common way for researchers to model or graphically portray spatial knowledge of a large environment is by applying multidimensional scaling (MDS) to a set of pairwise distance estimations. We introduce two MDS-like techniques that incorporate people’s knowledge of directions instead of (or in addition to) their knowledge of distances. Maps of a familiar environment derived from these procedures were more accurate and were rated by participants as being more accurate than those derived from nonmetric MDS. By incorporating people’s relatively accurate knowledge of directions, these methods offer spatial cognition researchers and behavioral geographers a sharper analytical tool than MDS for studying cognitive maps.
  • Warner, N., Otake, T., & Arai, A. (2010). Intonational structure as a word-boundary cue in Tokyo Japanese. Language and Speech, 53, 107-131. doi:10.1177/0023830909351235.

    Abstract

    While listeners are recognizing words from the connected speech stream, they are also parsing information from the intonational contour. This contour may contain cues to word boundaries, particularly if a language has boundary tones that occur at a large proportion of word onsets. We investigate how useful the pitch rise at the beginning of an accentual phrase (APR) would be as a potential word-boundary cue for Japanese listeners. A corpus study shows that it should allow listeners to locate approximately 40–60% of word onsets, while causing less than 1% false positives. We then present a word-spotting study which shows that Japanese listeners can, indeed, use accentual phrase boundary cues during segmentation. This work shows that the prosodic patterns that have been found in the production of Japanese also impact listeners’ processing.
  • Warren, C. M., Tona, K. D., Ouwekerk, L., Van Paridon, J., Poletiek, F. H., Bosch, J. A., & Nieuwenhuis, S. (2019). The neuromodulatory and hormonal effects of transcutaneous vagus nerve stimulation as evidenced by salivary alpha amylase, salivary cortisol, pupil diameter, and the P3 event-related potential. Brain Stimulation, 12(3), 635-642. doi:10.1016/j.brs.2018.12.224.

    Abstract

    Background

    Transcutaneous vagus nerve stimulation (tVNS) is a new, non-invasive technique being investigated as an intervention for a variety of clinical disorders, including epilepsy and depression. It is thought to exert its therapeutic effect by increasing central norepinephrine (NE) activity, but the evidence supporting this notion is limited.
    Objective

    In order to test for an impact of tVNS on psychophysiological and hormonal indices of noradrenergic function, we applied tVNS in concert with assessment of salivary alpha amylase (SAA) and cortisol, pupil size, and electroencephalograph (EEG) recordings.
    Methods

    Across three experiments, we applied real and sham tVNS to 61 healthy participants while they performed a set of simple stimulus-discrimination tasks. Before and after the task, as well as during one break, participants provided saliva samples and had their pupil size recorded. EEG was recorded throughout the task. The target for tVNS was the cymba conchae, which is heavily innervated by the auricular branch of the vagus nerve. Sham stimulation was applied to the ear lobe.
    Results

    P3 amplitude was not affected by tVNS (Experiment 1A: N=24; Experiment 1B: N=20; Bayes factor supporting null model=4.53), nor was pupil size (Experiment 2: N=16; interaction of treatment and time: p=0.79). However, tVNS increased SAA (Experiments 1A and 2: N=25) and attenuated the decline of salivary cortisol compared to sham (Experiment 2: N=17), as indicated by significant interactions involving treatment and time (p=.023 and p=.040, respectively).
    Conclusion

    These findings suggest that tVNS modulates hormonal indices but not psychophysiological indices of noradrenergic function.
  • Weber, A., & Cutler, A. (2003). Perceptual similarity co-existing with lexical dissimilarity [Abstract]. Abstracts of the 146th Meeting of the Acoustical Society of America. Journal of the Acoustical Society of America, 114(4 Pt. 2), 2422. doi:10.1121/1.1601094.

    Abstract

    The extreme case of perceptual similarity is indiscriminability, as when two second‐language phonemes map to a single native category. An example is the English had‐head vowel contrast for Dutch listeners; Dutch has just one such central vowel, transcribed [E]. We examine whether the failure to discriminate in phonetic categorization implies indiscriminability in other—e.g., lexical—processing. Eyetracking experiments show that Dutch‐native listeners instructed in English to ‘‘click on the panda’’ look (significantly more than native listeners) at a pictured pencil, suggesting that pan‐ activates their lexical representation of pencil. The reverse, however, is not the case: ‘‘click on the pencil’’ does not induce looks to a panda, suggesting that pen‐ does not activate panda in the lexicon. Thus prelexically undiscriminated second‐language distinctions can nevertheless be maintained in stored lexical representations. The problem of mapping a resulting unitary input to two distinct categories in lexical representations is solved by allowing input to activate only one second‐language category. For Dutch listeners to English, this is English [E], as a result of which no vowels in the signal ever map to words containing [ae]. We suggest that the choice of category is here motivated by a more abstract, phonemic, metric of similarity.
  • Weber, K., Christiansen, M., Indefrey, P., & Hagoort, P. (2019). Primed from the start: Syntactic priming during the first days of language learning. Language Learning, 69(1), 198-221. doi:10.1111/lang.12327.

    Abstract

    New linguistic information must be integrated into our existing language system. Using a novel experimental task that incorporates a syntactic priming paradigm into artificial language learning, we investigated how new grammatical regularities and words are learned. This innovation allowed us to control the language input the learner received, while the syntactic priming paradigm provided insight into the nature of the underlying syntactic processing machinery. The results of the present study pointed to facilitatory syntactic processing effects within the first days of learning: Syntactic and lexical priming effects revealed participants’ sensitivity to both novel words and word orders. This suggested that novel syntactic structures and their meaning (form–function mapping) can be acquired rapidly through incidental learning. More generally, our study indicated similar mechanisms for learning and processing in both artificial and natural languages, with implications for the relationship between first and second language learning.
  • Weber, K., Micheli, C., Ruigendijk, E., & Rieger, J. (2019). Sentence processing is modulated by the current linguistic environment and a priori information: An fMRI study. Brain and Behavior, 9(7): e01308. doi:10.1002/brb3.1308.

    Abstract

    Introduction
    Words are not processed in isolation but in rich contexts that are used to modulate and facilitate language comprehension. Here, we investigate distinct neural networks underlying two types of contexts, the current linguistic environment and verb‐based syntactic preferences.

    Methods
    We had two main manipulations. The first was the current linguistic environment, where the relative frequencies of two syntactic structures (prepositional object [PO] and double‐object [DO]) would either follow everyday linguistic experience or not. The second concerned the preference toward one or the other structure depending on the verb; learned in everyday language use and stored in memory. German participants were reading PO and DO sentences in German while brain activity was measured with functional magnetic resonance imaging.

    Results
    First, the anterior cingulate cortex (ACC) showed a pattern of activation that integrated the current linguistic environment with everyday linguistic experience. When the input did not match everyday experience, the unexpected frequent structure showed higher activation in the ACC than the other conditions and more connectivity from the ACC to posterior parts of the language network. Second, verb‐based surprisal of seeing a structure given a verb (PO verb preference but DO structure presentation) resulted, within the language network (left inferior frontal and left middle/superior temporal gyrus) and the precuneus, in increased activation compared to a predictable verb‐structure pairing.

    Conclusion
    In conclusion, (1) beyond the canonical language network, brain areas engaged in prediction and error signaling, such as the ACC, might use the statistics of syntactic structures to modulate language processing, (2) the language network is directly engaged in processing verb preferences. These two networks show distinct influences on sentence processing.

    Additional information

    Supporting information
  • Weber, K., & Lavric, A. (2008). Syntactic anomaly elicits a lexico-semantic (N400) ERP effect in the second but not in the first language. Psychophysiology, 45(6), 920-925. doi:10.1111/j.1469-8986.2008.00691.x.

    Abstract

    Recent brain potential research into first versus second language (L1 vs. L2) processing revealed striking responses to morphosyntactic features absent in the mother tongue. The aim of the present study was to establish whether the presence of comparable morphosyntactic features in L1 leads to more similar electrophysiological L1 and L2 profiles. ERPs were acquired while German-English bilinguals and native speakers of English read sentences. Some sentences were meaningful and well formed, whereas others contained morphosyntactic or semantic violations in the final word. In addition to the expected P600 component, morphosyntactic violations in L2 but not L1 led to an enhanced N400. This effect may suggest either that resolution of morphosyntactic anomalies in L2 relies on the lexico-semantic system or that the weaker/slower morphological mechanisms in L2 lead to greater sentence wrap-up difficulties known to result in N400 enhancement.
  • Wheeldon, L. (2003). Inhibitory form priming of spoken word production. Language and Cognitive Processes, 18(1), 81-109. doi:10.1080/01690960143000470.

    Abstract

    Three experiments were designed to examine the effect on picture naming of the prior production of a word related in phonological form. In Experiment 1, the latency to produce Dutch words in response to pictures (e.g., hoed, hat) was longer following the production of a form-related word (e.g., hond, dog) in response to a definition on a preceding trial, than when the preceding definition elicited an unrelated word (e.g., kerk, church). Experiment 2 demonstrated that the inhibitory effect disappears when one unrelated word is produced intervening between prime and target productions (e.g., hond-kerk-hoed). The size of the inhibitory effect was not significantly affected by the frequency of the prime words or the target picture names. In Experiment 3, facilitation was observed for word pairs that shared offset segments (e.g., kurk-jurk, cork-dress), whereas inhibition was observed for shared onset segments (e.g., bloed-bloem, blood-flower). However, no priming was observed for prime and target words with shared phonemes but no mismatching segments (e.g., oom-boom, uncle-tree; hek-heks, fence-witch). These findings are consistent with a process of phoneme competition during phonological encoding.
  • Widlok, T. (2008). Landscape unbounded: Space, place, and orientation in ≠Akhoe Hai//om and beyond. Language Sciences, 30(2/3), 362-380. doi:10.1016/j.langsci.2006.12.002.

    Abstract

    Even before it became commonplace to assume that “the Eskimo have a hundred words for snow”, the languages of hunting and gathering people have played an important role in debates about linguistic relativity concerning geographical ontologies. Evidence from languages of hunter-gatherers has been used in radical relativist challenges to the overall notion of a comparative typology of generic natural forms and landscapes as terms of reference. It has been invoked to emphasize a personalized relationship between humans and the non-human world. It is against this background that this contribution discusses the landscape terminology of ≠Akhoe Hai//om, a Khoisan language spoken by “Bushmen” in Namibia. Landscape vocabulary is ubiquitous in ≠Akhoe Hai//om due to the fact that the landscape plays a critical role in directionals and other forms of “topographical gossip” and due to merges between landscape and group terminology. This system of landscape-cum-group terminology is outlined and related to the use of place names in the area.
  • Willems, R. M., Ozyurek, A., & Hagoort, P. (2008). Seeing and hearing meaning: ERP and fMRI evidence of word versus picture integration into a sentence context. Journal of Cognitive Neuroscience, 20, 1235-1249. doi:10.1162/jocn.2008.20085.

    Abstract

    Understanding language always occurs within a situational context and, therefore, often implies combining streams of information from different domains and modalities. One such combination is that of spoken language and visual information, which are perceived together in a variety of ways during everyday communication. Here we investigate whether and how words and pictures differ in terms of their neural correlates when they are integrated into a previously built-up sentence context. This is assessed in two experiments looking at the time course (measuring event-related potentials, ERPs) and the locus (using functional magnetic resonance imaging, fMRI) of this integration process. We manipulated the ease of semantic integration of word and/or picture to a previous sentence context to increase the semantic load of processing. In the ERP study, an increased semantic load led to an N400 effect which was similar for pictures and words in terms of latency and amplitude. In the fMRI study, we found overlapping activations to both picture and word integration in the left inferior frontal cortex. Specific activations for the integration of a word were observed in the left superior temporal cortex. We conclude that despite obvious differences in representational format, semantic information coming from pictures and words is integrated into a sentence context in similar ways in the brain. This study adds to the growing insight that the language system incorporates (semantic) information coming from linguistic and extralinguistic domains with the same neural time course and by recruitment of overlapping brain areas.
  • Willems, R. M., Hagoort, P., & Casasanto, D. (2010). Body-specific representations of action verbs: Neural evidence from right- and left-handers. Psychological Science, 21, 67-74. doi:10.1177/0956797609354072.

    Abstract

    According to theories of embodied cognition, understanding a verb like throw involves unconsciously simulating the action of throwing, using areas of the brain that support motor planning. If understanding action words involves mentally simulating one’s own actions, then the neurocognitive representation of word meanings should differ for people with different kinds of bodies, who perform actions in systematically different ways. In a test of the body-specificity hypothesis, we used functional magnetic resonance imaging to compare premotor activity correlated with action verb understanding in right- and left-handers. Right-handers preferentially activated the left premotor cortex during lexical decisions on manual-action verbs (compared with nonmanual-action verbs), whereas left-handers preferentially activated right premotor areas. This finding helps refine theories of embodied semantics, suggesting that implicit mental simulation during language processing is body specific: Right- and left-handers, who perform actions differently, use correspondingly different areas of the brain for representing action verb meanings.
  • Willems, R. M., Peelen, M. V., & Hagoort, P. (2010). Cerebral lateralization of face-selective and body-selective visual areas depends on handedness. Cerebral Cortex, 20, 1719-1725. doi:10.1093/cercor/bhp234.

    Abstract

    The left-hemisphere dominance for language is a core example of the functional specialization of the cerebral hemispheres. The degree of left-hemisphere dominance for language depends on hand preference: Whereas the majority of right-handers show left-hemispheric language lateralization, this number is reduced in left-handers. Here, we assessed whether handedness analogously has an influence upon lateralization in the visual system. Using functional magnetic resonance imaging, we localized 4 more or less specialized extrastriate areas in left- and right-handers, namely fusiform face area (FFA), extrastriate body area (EBA), fusiform body area (FBA), and human motion area (human middle temporal [hMT]). We found that lateralization of FFA and EBA depends on handedness: These areas were right lateralized in right-handers but not in left-handers. A similar tendency was observed in FBA but not in hMT. We conclude that the relationship between handedness and hemispheric lateralization extends to functionally lateralized parts of visual cortex, indicating a general coupling between cerebral lateralization and handedness. Our findings indicate that hemispheric specialization is not fixed but can vary considerably across individuals even in areas engaged relatively early in the visual system.
  • Willems, R. M., De Boer, M., De Ruiter, J. P., Noordzij, M. L., Hagoort, P., & Toni, I. (2010). A dissociation between linguistic and communicative abilities in the human brain. Psychological Science, 21, 8-14. doi:10.1177/0956797609355563.

    Abstract

    Although language is an effective vehicle for communication, it is unclear how linguistic and communicative abilities relate to each other. Some researchers have argued that communicative message generation involves perspective taking (mentalizing), and—crucially—that mentalizing depends on language. We employed a verbal communication paradigm to directly test whether the generation of a communicative action relies on mentalizing and whether the cerebral bases of communicative message generation are distinct from parts of cortex sensitive to linguistic variables. We found that dorsomedial prefrontal cortex, a brain area consistently associated with mentalizing, was sensitive to the communicative intent of utterances, irrespective of linguistic difficulty. In contrast, left inferior frontal cortex, an area known to be involved in language, was sensitive to the linguistic demands of utterances, but not to communicative intent. These findings show that communicative and linguistic abilities rely on cerebrally (and computationally) distinct mechanisms.
  • Willems, R. M., Oostenveld, R., & Hagoort, P. (2008). Early decreases in alpha and gamma band power distinguish linguistic from visual information during spoken sentence comprehension. Brain Research, 1219, 78-90. doi:10.1016/j.brainres.2008.04.065.

    Abstract

    Language is often perceived together with visual information. This raises the question of how the brain integrates information conveyed in visual and/or linguistic format during spoken language comprehension. In this study we investigated the dynamics of semantic integration of visual and linguistic information by means of time-frequency analysis of the EEG signal. A modified version of the N400 paradigm with either a word or a picture of an object being semantically incongruous with respect to the preceding sentence context was employed. Event-Related Potential (ERP) analysis showed qualitatively similar N400 effects for integration of either word or picture. Time-frequency analysis revealed early specific decreases in alpha and gamma band power for linguistic and visual information respectively. We argue that these reflect a rapid context-based analysis of acoustic (word) or visual (picture) form information. We conclude that although full semantic integration of linguistic and visual information occurs through a common mechanism, early differences in oscillations in specific frequency bands reflect the format of the incoming information and, importantly, an early context-based detection of its congruity with respect to the preceding language context.
  • Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2010). Neural dissociations between action verb understanding and motor imagery. Journal of Cognitive Neuroscience, 22(10), 2387-2400. doi:10.1162/jocn.2009.21386.

    Abstract

    According to embodied theories of language, people understand a verb like throw, at least in part, by mentally simulating throwing. This implicit simulation is often assumed to be similar or identical to motor imagery. Here we used fMRI to test whether implicit simulations of actions during language understanding involve the same cortical motor regions as explicit motor imagery. Healthy participants were presented with verbs related to hand actions (e.g., to throw) and nonmanual actions (e.g., to kneel). They either read these verbs (lexical decision task) or actively imagined performing the actions named by the verbs (imagery task). Primary motor cortex showed effector-specific activation during imagery, but not during lexical decision. Parts of premotor cortex distinguished manual from nonmanual actions during both lexical decision and imagery, but there was no overlap or correlation between regions activated during the two tasks. These dissociations suggest that implicit simulation and explicit imagery cued by action verbs may involve different types of motor representations and that the construct of “mental simulation” should be distinguished from “mental imagery” in embodied theories of language.
  • Willems, R. M., & Varley, R. (2010). Neural insights into the relation between language and communication. Frontiers in Human Neuroscience, 4, 203. doi:10.3389/fnhum.2010.00203.

    Abstract

    The human capacity to communicate has been hypothesized to be causally dependent upon language. Intuitively this seems plausible since most communication relies on language. Moreover, intention recognition abilities (as a necessary prerequisite for communication) and language development seem to co-develop. Here we review evidence from neuroimaging as well as from neuropsychology to evaluate the relationship between communicative and linguistic abilities. Our review indicates that communicative abilities are best considered as neurally distinct from language abilities. This conclusion is based upon evidence showing that humans rely on different cortical systems when designing a communicative message for someone else as compared to when performing core linguistic tasks, as well as upon observations of individuals with severe language loss after extensive lesions to the language system, who are still able to perform tasks involving intention understanding.