Publications

  • Frances, C., Pueyo, S., Anaya, V., & Dunabeitia Landaburu, J. A. (2020). Interpreting foreign smiles: language context and type of scale in the assessment of perceived happiness and sadness. Psicológica, 41, 21-38. doi:10.2478/psicolj-2020-0002.

    Abstract

    The current study focuses on how different scales with varying demands can affect our subjective assessments. We carried out 2 experiments in which we asked participants to rate how happy or sad morphed images of faces looked. The two extremes were the original happy and original sad faces with 4 morphs in between. We manipulated language of the task—namely, half of the participants carried it out in their native language, Spanish, and the other half in their foreign language, English—and type of scale. Within type of scale, we compared verbal and brightness scales. We found that, while language did not have an effect on the assessment, type of scale did. The brightness scale led to overall higher ratings, i.e., assessing all faces as somewhat happier. This provides a limitation on the foreign language effect, as well as evidence for the influence of the cognitive demands of a scale on emotionality assessments.
  • Frances, C., De Bruin, A., & Duñabeitia, J. A. (2020). The effects of language and emotionality of stimuli on vocabulary learning. PLoS One, 15(10): e0240252. doi:10.1371/journal.pone.0240252.

    Abstract

    Learning new content and vocabulary in a foreign language can be particularly difficult. Yet, there are educational programs that require people to study in a language they are not native speakers of. For this reason, it is important to understand how these learning processes work and possibly differ from native language learning, as well as to develop strategies to ease this process. The current study takes advantage of emotionality—operationally defined as positive valence and high arousal—to improve memory. In two experiments, the present paper addresses whether participants have more difficulty learning the names of objects they have never seen before in their foreign language and whether embedding them in a positive semantic context can help make learning easier. With this in mind, we had participants (with a minimum of a B2 level of English) in two experiments (43 participants in Experiment 1 and 54 in Experiment 2) read descriptions of made-up objects—either positive or neutral and either in their native or a foreign language. The effects of language varied with the difficulty of the task and measure used. In both cases, learning the words in a positive context improved learning. Importantly, the effect of emotionality was not modulated by language, suggesting that the effects of emotionality are independent of language and could potentially be a useful tool for improving foreign language vocabulary learning.

    Additional information

    Supporting information
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2014). Audiovisual temporal sensitivity in typical and dyslexic adult readers. In Proceedings of the 15th Annual Conference of the International Speech Communication Association (INTERSPEECH 2014) (pp. 2575-2579).

    Abstract

    Reading is an audiovisual process that requires the learning of systematic links between graphemes and phonemes. It is thus possible that reading impairments reflect an audiovisual processing deficit. In this study, we compared audiovisual processing in adults with developmental dyslexia and adults without reading difficulties. We focused on differences in cross-modal temporal sensitivity both for speech and for non-speech events. When compared to adults without reading difficulties, adults with developmental dyslexia presented a wider temporal window in which unsynchronized speech events were perceived as synchronized. No differences were found between groups for the non-speech events. These results suggest a deficit in dyslexia in the perception of cross-modal temporal synchrony for speech events.
  • Francks, C., Fisher, S. E., Marlow, A. J., Richardson, A. J., Stein, J. F., & Monaco, A. (2000). A sibling-pair based approach for mapping genetic loci that influence quantitative measures of reading disability. Prostaglandins, Leukotrienes and Essential Fatty Acids, 63(1-2), 27-31. doi:10.1054/plef.2000.0187.

    Abstract

    Family and twin studies consistently demonstrate a significant role for genetic factors in the aetiology of the reading disorder dyslexia. However, dyslexia is complex at both the genetic and phenotypic levels, and currently the nature of the core deficit or deficits remains uncertain. Traditional approaches for mapping disease genes, originally developed for single-gene disorders, have limited success when there is not a simple relationship between genotype and phenotype. Recent advances in high-throughput genotyping technology and quantitative statistical methods have made a new approach to identifying genes involved in complex disorders possible. The method involves assessing the genetic similarity of many sibling pairs along the lengths of all their chromosomes and attempting to correlate this similarity with that of their phenotypic scores. We are adopting this approach in an ongoing genome-wide search for genes involved in dyslexia susceptibility, and have already successfully applied the method by replicating results from previous studies suggesting that a quantitative trait locus at 6p21.3 influences reading disability.
  • Francks, C. (2009). 13 - LRRTM1: A maternally suppressed genetic effect on handedness and schizophrenia. In I. E. C. Sommer, & R. S. Kahn (Eds.), Cerebral lateralization and psychosis (pp. 181-196). Cambridge: Cambridge University Press.

    Abstract

    The molecular, developmental, and evolutionary bases of human brain asymmetry are almost completely unknown. Genetic linkage and association mapping have pin-pointed a gene called LRRTM1 (leucine-rich repeat transmembrane neuronal 1) that may contribute to variability in human handedness. Here I describe how LRRTM1's involvement in handedness was discovered, and also the latest knowledge of its functions in brain development and disease. The association of LRRTM1 with handedness was derived entirely from the paternally inherited gene, and follow-up analysis of gene expression confirmed that LRRTM1 is one of a small number of genes that are imprinted in the human genome, for which the maternally inherited copy is suppressed. The same variation at LRRTM1 that was associated paternally with mixed-/left-handedness was also over-transmitted paternally to schizophrenic patients in a large family study.
    LRRTM1 is expressed in specific regions of the developing and adult forebrain by post-mitotic neurons, and the protein may be involved in axonal trafficking. Thus LRRTM1 has a probable role in neurodevelopment, and its association with handedness suggests that one of its functions may be in establishing or consolidating human brain asymmetry.
    LRRTM1 is the first gene for which allelic variation has been associated with human handedness. The genetic data also suggest indirectly that the epigenetic regulation of this gene may yet prove more important than DNA sequence variation for influencing brain development and disease.
    Intriguingly, the parent-of-origin activity of LRRTM1 suggests that men and women have had conflicting interests in relation to the outcome of lateralized brain development in their offspring.
  • Francks, C. (2009). Understanding the genetics of behavioural and psychiatric traits will only be achieved through a realistic assessment of their complexity. Laterality: Asymmetries of Body, Brain and Cognition, 14(1), 11-16. doi:10.1080/13576500802536439.

    Abstract

    Francks et al. (2007) performed a recent study in which the first putative genetic effect on human handedness was identified (the imprinted locus LRRTM1 on human chromosome 2). In this issue of Laterality, Tim Crow and colleagues present a critique of that study. The present paper presents a personal response to that critique which argues that Francks et al. (2007) published a substantial body of evidence implicating LRRTM1 in handedness and schizophrenia. Progress will now be achieved by others trying to validate, refute, or extend those findings, rather than by further armchair discussion.
  • French, C. A., & Fisher, S. E. (2014). What can mice tell us about Foxp2 function? Current Opinion in Neurobiology, 28, 72-79. doi:10.1016/j.conb.2014.07.003.

    Abstract

    Disruptions of the FOXP2 gene cause a rare speech and language disorder, a discovery that has opened up novel avenues for investigating the relevant neural pathways. FOXP2 shows remarkably high conservation of sequence and neural expression in diverse vertebrates, suggesting that studies in other species are useful in elucidating its functions. Here we describe how investigations of mice that carry disruptions of Foxp2 provide insights at multiple levels: molecules, cells, circuits and behaviour. Work thus far has implicated the gene in key processes including neurite outgrowth, synaptic plasticity, sensorimotor integration and motor-skill learning.
  • Friedlaender, J., Hunley, K., Dunn, M., Terrill, A., Lindström, E., Reesink, G., & Friedlaender, F. (2009). Linguistics more robust than genetics [Letter to the editor]. Science, 324, 464-465. doi:10.1126/science.324_464c.
  • Friedrich, P., Thiebaut de Schotten, M., Forkel, S. J., Stacho, M., & Howells, H. (2020). An ancestral anatomical and spatial bias for visually guided behavior. PNAS, 117(5), 2251-2252. doi:10.1073/pnas.1918402117.

    Abstract

    Human behavioral asymmetries are commonly studied in the context of structural cortical and connectional asymmetries. Within this framework, Sreenivasan and Sridharan (1) provide intriguing evidence of a relationship between visual asymmetries and the lateralization of superior colliculi connections—a phylogenetically older mesencephalic structure. Specifically, response facilitation for cued locations (i.e., choice bias) in the contralateral hemifield was associated with differences in the connectivity of the superior colliculus. Given that the superior colliculus has a structural homolog—the optic tectum—which can be traced across all Vertebrata, these results may have meaningful evolutionary ramifications.
  • Friedrich, P., Forkel, S. J., & Thiebaut de Schotten, M. (2020). Mapping the principal gradient onto the corpus callosum. NeuroImage, 223: 117317. doi:10.1016/j.neuroimage.2020.117317.

    Abstract

    Gradients capture some of the variance of the resting-state functional magnetic resonance imaging (rsfMRI) signal. Amongst these, the principal gradient depicts a functional processing hierarchy that spans from sensory-motor cortices to regions of the default-mode network. While the cortex has been well characterised in terms of gradients, little is known about its underlying white matter. For instance, comprehensive mapping of the principal gradient on the largest white matter tract, the corpus callosum, is still missing. Here, we mapped the principal gradient onto the midsection of the corpus callosum using the 7T human connectome project dataset. We further explored how quantitative measures and variability in callosal midsection connectivity relate to the principal gradient values. In so doing, we demonstrated that the extreme values of the principal gradient are located within the callosal genu and the posterior body, have lower connectivity variability but a larger spatial extent along the midsection of the corpus callosum than mid-range values. Our results shed light on the relationship between the brain's functional hierarchy and the corpus callosum. We further speculate about how these results may bridge the gap between functional hierarchy, brain asymmetries, and evolution.

    Additional information

    supplementary file
  • Frost, R. (2014). Learning grammatical structures with and without sleep. PhD Thesis, Lancaster University, Lancaster.
  • Frost, R. L. A., Dunn, K., Christiansen, M. H., Gómez, R. L., & Monaghan, P. (2020). Exploring the "anchor word" effect in infants: Segmentation and categorisation of speech with and without high frequency words. PLoS One, 15(12): e0243436. doi:10.1371/journal.pone.0243436.

    Abstract

    High frequency words play a key role in language acquisition, with recent work suggesting they may serve both speech segmentation and lexical categorisation. However, it is not yet known whether infants can detect novel high frequency words in continuous speech, nor whether they can use them to help learning for segmentation and categorisation at the same time. For instance, when hearing “you eat the biscuit”, can children use the high-frequency words “you” and “the” to segment out “eat” and “biscuit”, and determine their respective lexical categories? We tested this in two experiments. In Experiment 1, we familiarised 12-month-old infants with continuous artificial speech comprising repetitions of target words, which were preceded by high-frequency marker words that distinguished the targets into two distributional categories. In Experiment 2, we repeated the task using the same language but with additional phonological cues to word and category structure. In both studies, we measured learning with head-turn preference tests of segmentation and categorisation, and compared performance against a control group that heard the artificial speech without the marker words (i.e., just the targets). There was no evidence that high frequency words helped either speech segmentation or grammatical categorisation. However, segmentation was seen to improve when the distributional information was supplemented with phonological cues (Experiment 2). In both experiments, exploratory analysis indicated that infants’ looking behaviour was related to their linguistic maturity (indexed by infants’ vocabulary scores) with infants with high versus low vocabulary scores displaying novelty and familiarity preferences, respectively. We propose that high-frequency words must reach a critical threshold of familiarity before they can be of significant benefit to learning.

    Additional information

    data
  • Frost, R. L. A., Jessop, A., Durrant, S., Peter, M. S., Bidgood, A., Pine, J. M., Rowland, C. F., & Monaghan, P. (2020). Non-adjacent dependency learning in infancy, and its link to language development. Cognitive Psychology, 120: 101291. doi:10.1016/j.cogpsych.2020.101291.

    Abstract

    To acquire language, infants must learn how to identify words and linguistic structure in speech. Statistical learning has been suggested to assist both of these tasks. However, infants’ capacity to use statistics to discover words and structure together remains unclear. Further, it is not yet known how infants’ statistical learning ability relates to their language development. We trained 17-month-old infants on an artificial language comprising non-adjacent dependencies, and examined their looking times on tasks assessing sensitivity to words and structure using an eye-tracked head-turn-preference paradigm. We measured infants’ vocabulary size using a Communicative Development Inventory (CDI) concurrently and at 19, 21, 24, 25, 27, and 30 months to relate performance to language development. Infants could segment the words from speech, demonstrated by a significant difference in looking times to words versus part-words. Infants’ segmentation performance was significantly related to their vocabulary size (receptive and expressive) both currently, and over time (receptive until 24 months, expressive until 30 months), but was not related to the rate of vocabulary growth. The data also suggest infants may have developed sensitivity to generalised structure, indicating similar statistical learning mechanisms may contribute to the discovery of words and structure in speech, but this was not related to vocabulary size.

    Additional information

    Supplementary data
  • Frost, R. L. A., & Monaghan, P. (2020). Insights from studying statistical learning. In C. F. Rowland, A. L. Theakston, B. Ambridge, & K. E. Twomey (Eds.), Current Perspectives on Child Language Acquisition: How children use their environment to learn (pp. 65-89). Amsterdam: John Benjamins. doi:10.1075/tilar.27.03fro.

    Abstract

    Acquiring language is notoriously complex, yet for the majority of children this feat is accomplished with remarkable ease. Usage-based accounts of language acquisition suggest that this success can be largely attributed to the wealth of experience with language that children accumulate over the course of language acquisition. One field of research that is heavily underpinned by this principle of experience is statistical learning, which posits that learners can perform powerful computations over the distribution of information in a given input, which can help them to discern precisely how that input is structured, and how it operates. A growing body of work brings this notion to bear in the field of language acquisition, due to a developing understanding of the richness of the statistical information contained in speech. In this chapter we discuss the role that statistical learning plays in language acquisition, emphasising the importance of both the distribution of information within language, and the situation in which language is being learnt. First, we address the types of statistical learning that apply to a range of language learning tasks, asking whether the statistical processes purported to support language learning are the same or distinct across different tasks in language acquisition. Second, we expand the perspective on what counts as environmental input, by determining how statistical learning operates over the situated learning environment, and not just sequences of sounds in utterances. Finally, we address the role of variability in children’s input, and examine how statistical learning can accommodate (and perhaps even exploit) this during language acquisition.
  • Fuhrmann, D., Ravignani, A., Marshall-Pescini, S., & Whiten, A. (2014). Synchrony and motor mimicking in chimpanzee observational learning. Scientific Reports, 4: 5283. doi:10.1038/srep05283.

    Abstract

    Cumulative tool-based culture underwrote our species' evolutionary success and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However, the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.

    Additional information

    Supplementary Information
  • Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.

    Abstract

    Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life.
  • Galbiati, A., Sforza, M., Poletti, M., Verga, L., Zucconi, M., Ferini-Strambi, L., & Castronovo, V. (2020). Insomnia patients with subjective short total sleep time have a boosted response to cognitive behavioral therapy for insomnia despite residual symptoms. Behavioral Sleep Medicine, 18(1), 58-67. doi:10.1080/15402002.2018.1545650.

    Abstract

    Background: Two distinct insomnia disorder (ID) phenotypes have been proposed, distinguished on the basis of an objective total sleep time less or more than 6 hr. In particular, it has been recently reported that patients with objective short sleep duration have a blunted response to cognitive behavioral therapy for insomnia (CBT-I). The aim of this study was to investigate the differences of CBT-I response in two groups of ID patients subdivided according to total sleep time. Methods: Two hundred forty-six ID patients were subdivided into two groups, depending on their reported total sleep time (TST) assessed by sleep diaries. Patients with a TST greater than 6 hr were classified as “normal sleepers” (NS), while those with a total sleep time less than 6 hr were classified as “short sleepers” (SS). Results: The delta between Insomnia Severity Index scores and sleep efficiency at the beginning as compared to the end of the treatment was significantly higher for SS in comparison to NS, even if they still exhibit more insomnia symptoms. No difference was found between groups in terms of remitters; however, more responders were observed in the SS group in comparison to the NS group. Conclusions: Our results demonstrate that ID patients with reported short total sleep time had a beneficial response to CBT-I of greater magnitude in comparison to NS. However, these patients may still experience the presence of residual insomnia symptoms after treatment.
  • Gallotto, S., Duecker, F., Ten Oever, S., Schuhmann, T., De Graaf, T. A., & Sack, A. T. (2020). Relating alpha power modulations to competing visuospatial attention theories. NeuroImage, 207: 116429. doi:10.1016/j.neuroimage.2019.116429.

    Abstract

    Visuospatial attention theories often propose hemispheric asymmetries underlying the control of attention. In general support of these theories, previous EEG/MEG studies have shown that spatial attention is associated with hemispheric modulation of posterior alpha power (gating by inhibition). However, since measures of alpha power are typically expressed as lateralization scores, or collapsed across left and right attention shifts, the individual hemispheric contribution to the attentional control mechanism remains unclear. This is, however, the most crucial and decisive aspect in which the currently competing attention theories continue to disagree. To resolve this long-standing conflict, we derived predictions regarding alpha power modulations from Heilman's hemispatial theory and Kinsbourne's interhemispheric competition theory and tested them empirically in an EEG experiment. We used an attention paradigm capable of isolating alpha power modulation in two attentional states, namely attentional bias in a neutral cue condition and spatial orienting following directional cues. Differential alpha modulations were found for both hemispheres across conditions. When anticipating peripheral visual targets without preceding directional cues (neutral condition), posterior alpha power in the left hemisphere was generally lower and more strongly modulated than in the right hemisphere, in line with the interhemispheric competition theory. Intriguingly, however, while alpha power in the right hemisphere was modulated by both cue-directed leftward and rightward attention shifts, the left hemisphere only showed modulations by rightward shifts of spatial attention, in line with the hemispatial theory. This suggests that the two theories may not be mutually exclusive, but rather apply to different attentional states.
  • Ganushchak, L. Y., & Schiller, N. O. (2009). Speaking in one’s second language under time pressure: An ERP study on verbal self-monitoring in German-Dutch bilinguals. Psychophysiology, 46, 410-419. doi:10.1111/j.1469-8986.2008.00774.x.

    Abstract

    This study addresses how verbal self-monitoring and the Error-Related Negativity (ERN) are affected by time pressure when a task is performed in a second language as opposed to performance in the native language. German–Dutch bilinguals were required to perform a phoneme-monitoring task in Dutch with and without a time pressure manipulation. We obtained an ERN following verbal errors that showed an atypical increase in amplitude under time pressure. This finding is taken to suggest that under time pressure participants had more interference from their native language, which in turn led to a greater response conflict and thus enhancement of the amplitude of the ERN. This result demonstrates once more that the ERN is sensitive to psycholinguistic manipulations and suggests that the functioning of the verbal self-monitoring system during speaking is comparable to other performance monitoring, such as action monitoring.
  • Ganushchak, L., Konopka, A. E., & Chen, Y. (2014). What the eyes say about planning of focused referents during sentence formulation: a cross-linguistic investigation. Frontiers in Psychology, 5: 1124. doi:10.3389/fpsyg.2014.01124.

    Abstract

    This study investigated how sentence formulation is influenced by a preceding discourse context. In two eye-tracking experiments, participants described pictures of two-character transitive events in Dutch (Experiment 1) and Chinese (Experiment 2). Focus was manipulated by presenting questions before each picture. In the Neutral condition, participants first heard ‘What is happening here?’ In the Object or Subject Focus conditions, the questions asked about the Object or Subject character (What is the policeman stopping? Who is stopping the truck?). The target response was the same in all conditions (The policeman is stopping the truck). In both experiments, sentence formulation in the Neutral condition showed the expected pattern of speakers fixating the subject character (policeman) before the object character (truck). In contrast, in the focus conditions speakers rapidly directed their gaze preferentially only to the character they needed to encode to answer the question (the new, or focused, character). The timing of gaze shifts to the new character varied by language group (Dutch vs. Chinese): shifts to the new character occurred earlier when information in the question can be repeated in the response with the same syntactic structure (in Chinese but not in Dutch). The results show that discourse affects the timecourse of linguistic formulation in simple sentences and that these effects can be modulated by language-specific linguistic structures such as parallels in the syntax of questions and declarative sentences.
  • Ganushchak, L. Y., & Acheson, D. J. (Eds.). (2014). What's to be learned from speaking aloud? - Advances in the neurophysiological measurement of overt language production. [Research topic] [Special Issue]. Frontiers in Language Sciences. Retrieved from http://www.frontiersin.org/Language_Sciences/researchtopics/What_s_to_be_Learned_from_Spea/1671.

    Abstract

    Researchers have long avoided neurophysiological experiments of overt speech production due to the suspicion that artifacts caused by muscle activity may lead to a bad signal-to-noise ratio in the measurements. However, the need to actually produce speech may influence earlier processing and qualitatively change speech production processes and what we can infer from neurophysiological measures thereof. Recently, however, overt speech has been successfully investigated using EEG, MEG, and fMRI. The aim of this Research Topic is to draw together recent research on the neurophysiological basis of language production, with the aim of developing and extending theoretical accounts of the language production process. In this Research Topic of Frontiers in Language Sciences, we invite both experimental and review papers, as well as those about the latest methods in acquisition and analysis of overt language production data. All aspects of language production are welcome: i.e., from conceptualization to articulation during native as well as multilingual language production. Focus should be placed on using the neurophysiological data to inform questions about the processing stages of language production. In addition, emphasis should be placed on the extent to which the identified components of the electrophysiological signal (e.g., ERP/ERF, neuronal oscillations, etc.), brain areas or networks are related to language comprehension and other cognitive domains. By bringing together electrophysiological and neuroimaging evidence on language production mechanisms, a more complete picture of the locus of language production processes and their temporal and neurophysiological signatures will emerge.
  • Garcia, R., Roeser, J., & Höhle, B. (2020). Children’s online use of word order and morphosyntactic markers in Tagalog thematic role assignment: An eye-tracking study. Journal of Child Language, 47(3), 533-555. doi:10.1017/S0305000919000618.

    Abstract

    We investigated whether Tagalog-speaking children incrementally interpret the first noun as the agent, even if verbal and nominal markers for assigning thematic roles are given early in Tagalog sentences. We asked five- and seven-year-old children and adult controls to select which of two pictures of reversible actions matched the sentence they heard, while their looks to the pictures were tracked. Accuracy and eye-tracking data showed that agent-initial sentences were easier to comprehend than patient-initial sentences, but the effect of word order was modulated by voice. Moreover, our eye-tracking data provided evidence that, by the first noun phrase, seven-year-old children looked more to the target in the agent-initial compared to the patient-initial conditions, but this word order advantage was no longer observed by the second noun phrase. The findings support language processing and acquisition models which emphasize the role of frequency in developing heuristic strategies (e.g., Chang, Dell, & Bock, 2006).
  • Garcia, R., & Kidd, E. (2020). The acquisition of the Tagalog symmetrical voice system: Evidence from structural priming. Language Learning and Development, 16(4), 399-425. doi:10.1080/15475441.2020.1814780.

    Abstract

    We report on two experiments that investigated the acquisition of the Tagalog symmetrical voice system, a typologically rare feature of Western Austronesian languages in which there is more than one basic transitive construction and no preference for agents to be syntactic subjects. In the experiments, 3-, 5-, and 7-year-old Tagalog-speaking children and adults completed a structural priming task that manipulated voice and word order, with the uniqueness of Tagalog allowing us to tease apart priming of thematic role order from that of syntactic roles. Participants heard a description of a picture showing a transitive action, and were then asked to complete a sentence of an unrelated picture using a voice-marked verb provided by the experimenter. Our results show that children gradually acquire an agent-before-patient preference, instead of having a default mapping of the agent to the first noun position. We also found an earlier mastery of the patient voice verbal and nominal marker configuration (patient is the subject), suggesting that children do not initially map the agent to the subject. Children were primed by thematic role but not syntactic role order, suggesting that they prioritize mapping of the thematic roles to sentence positions.
  • Garcia, N., Lenkiewicz, P., Freire, M., & Monteiro, P. (2009). A new architecture for optical burst switching networks based on cooperative control. In Proceedings of the 8th IEEE International Symposium on Network Computing and Applications (IEEE NCA09) (pp. 310-313).

    Abstract

    This paper presents a new architecture for optical burst switched networks where the control plane of the network functions in a cooperative manner. Each node interprets the data conveyed by the control packet and forwards it to the next nodes, making the control plane of the network distribute the relevant information to all the nodes in the network. A cooperation transmission tree is used, thus allowing all the nodes to store the information related to the traffic management in the network, and enabling better network resource planning at each node. A model of this network architecture is proposed, and its performance is evaluated.
  • Garcia, M., & Ravignani, A. (2020). Acoustic allometry and vocal learning in mammals. Biology Letters, 16: 20200081. doi:10.1098/rsbl.2020.0081.

    Abstract

    Acoustic allometry is the study of how animal vocalisations reflect their body size. A key aim of this research is to identify outliers to acoustic allometry principles and pinpoint the evolutionary origins of such outliers. A parallel strand of research investigates species capable of vocal learning, the experience-driven ability to produce novel vocal signals through imitation or modification of existing vocalisations. Modification of vocalizations is a common feature found when studying both acoustic allometry and vocal learning. Yet, these two fields have only been investigated separately to date. Here, we review and connect acoustic allometry and vocal learning across mammalian clades, combining perspectives from bioacoustics, anatomy and evolutionary biology. Based on this, we hypothesize that, as a precursor to vocal learning, some species might have evolved the capacity for volitional vocal modulation via sexual selection for ‘dishonest’ signalling. We provide preliminary support for our hypothesis by showing significant associations between allometric deviation and vocal learning in a dataset of 164 mammals. Our work offers a testable framework for future empirical research linking allometric principles with the evolution of vocal learning.
  • Garcia, M., Theunissen, F., Sèbe, F., Clavel, J., Ravignani, A., Marin-Cudraz, T., Fuchs, J., & Mathevon, N. (2020). Evolution of communication signals and information during species radiation. Nature Communications, 11: 4970. doi:10.1038/s41467-020-18772-3.

    Abstract

    Communicating species identity is a key component of many animal signals. However, whether selection for species recognition systematically increases signal diversity during clade radiation remains debated. Here we show that in woodpecker drumming, a rhythmic signal used during mating and territorial defense, the amount of species identity information encoded remained stable during woodpeckers’ radiation. Acoustic analyses and evolutionary reconstructions show interchange among six main drumming types despite strong phylogenetic contingencies, suggesting evolutionary tinkering of drumming structure within a constrained acoustic space. Playback experiments and quantification of species discriminability demonstrate sufficient signal differentiation to support species recognition in local communities. Finally, we only find character displacement in the rare cases where sympatric species are also closely related. Overall, our results illustrate how historical contingencies and ecological interactions can promote conservatism in signals during a clade radiation without impairing the effectiveness of information transfer relevant to inter-specific discrimination.
  • Garrido, L., Eisner, F., McGettigan, C., Stewart, L., Sauter, D., Hanley, J. R., Schweinberger, S. R., Warren, J. D., & Duchaine, B. (2009). Developmental phonagnosia: A selective deficit of vocal identity recognition. Neuropsychologia, 47(1), 123-131. doi:10.1016/j.neuropsychologia.2008.08.003.

    Abstract

    Phonagnosia, the inability to recognize familiar voices, has been studied in brain-damaged patients but no cases due to developmental problems have been reported. Here we describe the case of KH, a 60-year-old active professional woman who reports that she has always experienced severe voice recognition difficulties. Her hearing abilities are normal, and an MRI scan showed no evidence of brain damage in regions associated with voice or auditory perception. To better understand her condition and to assess models of voice and high-level auditory processing, we tested KH on behavioural tasks measuring voice recognition, recognition of vocal emotions, face recognition, speech perception, and processing of environmental sounds and music. KH was impaired on tasks requiring the recognition of famous voices and the learning and recognition of new voices. In contrast, she performed well on nearly all other tasks. Her case is the first report of developmental phonagnosia, and the results suggest that the recognition of a speaker’s vocal identity depends on separable mechanisms from those used to recognize other information from the voice or non-vocal auditory stimuli.
  • Gaskell, M. G., Warker, J., Lindsay, S., Frost, R. L. A., Guest, J., Snowdon, R., & Stackhouse, A. (2014). Sleep underpins the plasticity of language production. Psychological Science, 25(7), 1457-1465. doi:10.1177/0956797614535937.

    Abstract

    The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep.

    Additional information

    https://osf.io/zqg9y/
  • Gast, V., & Levshina, N. (2014). Motivating w(h)-Clefts in English and German: A hypothesis-driven parallel corpus study. In A.-M. De Cesare (Ed.), Frequency, Forms and Functions of Cleft Constructions in Romance and Germanic: Contrastive, Corpus-Based Studies (pp. 377-414). Berlin: De Gruyter.
  • Gazendam, L., Wartena, C., Malaise, V., Schreiber, G., De Jong, A., & Brugman, H. (2009). Automatic annotation suggestions for audiovisual archives: Evaluation aspects. Interdisciplinary Science Reviews, 34(2/3), 172-188. doi:10.1179/174327909X441090.

    Abstract

    In the context of large and ever growing archives, generating annotation suggestions automatically from textual resources related to the documents to be archived is an interesting option in theory. It could save a lot of work in the time consuming and expensive task of manual annotation and it could help cataloguers attain a higher inter-annotator agreement. However, some questions arise in practice: what is the quality of the automatically produced annotations? How do they compare with manual annotations and with the requirements for annotation that were defined in the archive? If different from the manual annotations, are the automatic annotations wrong? In the CHOICE project, partially hosted at the Netherlands Institute for Sound and Vision, the Dutch public archive for audiovisual broadcasts, we automatically generate annotation suggestions for cataloguers. In this paper, we define three types of evaluation of these annotation suggestions: (1) a classic and strict evaluation measure expressing the overlap between automatically generated keywords and the manual annotations, (2) a loosened evaluation measure for which semantically very similar annotations are also considered as relevant matches, and (3) an in-use evaluation of the usefulness of manual versus automatic annotations in the context of serendipitous browsing. During serendipitous browsing, the annotations (manual or automatic) are used to retrieve and visualize semantically related documents.
  • Geambasu, A., Toron, L., Ravignani, A., & Levelt, C. C. (2020). Rhythmic recursion? Human sensitivity to a Lindenmayer grammar with self-similar structure in a musical task. Music & Science. doi:10.1177/2059204320946615.

    Abstract

    Processing of recursion has been proposed as the foundation of human linguistic ability. Yet this ability may be shared with other domains, such as the musical or rhythmic domain. Lindenmayer grammars (L-systems) have been proposed as a recursive grammar for use in artificial grammar experiments to test recursive processing abilities, and previous work had shown that participants are able to learn such a grammar using linguistic stimuli (syllables). In the present work, we used two experimental paradigms (a yes/no task and a two-alternative forced choice) to test whether adult participants are able to learn a recursive Lindenmayer grammar composed of drum sounds. After a brief exposure phase, we found that participants at the group level were sensitive to the exposure grammar and capable of distinguishing the grammatical and ungrammatical test strings above chance level in both tasks. While we found evidence of participants’ sensitivity to a very complex L-system grammar in a non-linguistic, potentially musical domain, the results were not robust. We discuss the discrepancy within our results and with the previous literature using L-systems in the linguistic domain. Furthermore, we propose directions for future music cognition research using L-system grammars.
  • Gebre, B. G., Wittenburg, P., Heskes, T., & Drude, S. (2014). Motion history images for online speaker/signer diarization. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (pp. 1537-1541). Piscataway, NJ: IEEE.

    Abstract

    We present a solution to the problem of online speaker/signer diarization - the task of determining "who spoke/signed when?". Our solution is based on the idea that gestural activity (hands and body movement) is highly correlated with uttering activity. This correlation is necessarily true for sign languages and mostly true for spoken languages. The novel part of our solution is the use of motion history images (MHI) as a likelihood measure for probabilistically detecting uttering activities. MHI is an efficient representation of where and how motion occurred for a fixed period of time. We conducted experiments on 4.9 hours of a publicly available dataset (the AMI meeting data) and 1.4 hours of sign language dataset (Kata Kolok data). The best performance obtained is 15.70% for sign language and 31.90% for spoken language (measurements are in DER). These results show that our solution is applicable in real-world applications like video conferences.

  • Gebre, B. G., Wittenburg, P., Drude, S., Huijbregts, M., & Heskes, T. (2014). Speaker diarization using gesture and speech. In H. Li, & P. Ching (Eds.), Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 582-586).

    Abstract

    We demonstrate how the problem of speaker diarization can be solved using both gesture and speaker parametric models. The novelty of our solution is that we approach the speaker diarization problem as a speaker recognition problem after learning speaker models from speech samples corresponding to gestures (the occurrence of gestures indicates the presence of speech and the location of gestures indicates the identity of the speaker). This new approach offers many advantages: comparable state-of-the-art performance, faster computation and more adaptability. In our implementation, parametric models are used to model speakers' voice and their gestures: more specifically, Gaussian mixture models are used to model the voice characteristics of each person and all persons, and gamma distributions are used to model gestural activity based on features extracted from Motion History Images. Tests on 4.24 hours of the AMI meeting data show that our solution makes DER score improvements of 19% on speech-only segments and 4% on all segments including silence (the comparison is with the AMI system).
  • Gebre, B. G., Crasborn, O., Wittenburg, P., Drude, S., & Heskes, T. (2014). Unsupervised feature learning for visual sign language identification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: Vol 2 (pp. 370-376). Redhook, NY: Curran Proceedings.

    Abstract

    Prior research on language identification focused primarily on text and speech. In this paper, we focus on the visual modality and present a method for identifying sign languages solely from short video samples. The method is trained on unlabelled video data (unsupervised feature learning) and using these features, it is trained to discriminate between six sign languages (supervised learning). We ran experiments on video samples involving 30 signers (running for a total of 6 hours). Using leave-one-signer-out cross-validation, our evaluation on short video samples shows an average best accuracy of 84%. Given that sign languages are under-resourced, unsupervised feature learning techniques are the right tools and our results indicate that this is realistic for sign language identification.
  • Gentner, D., & Bowerman, M. (2009). Why some spatial semantic categories are harder to learn than others: The typological prevalence hypothesis. In J. Guo, E. Lieven, N. Budwig, S. Ervin-Tripp, K. Nakamura, & S. Ozcaliskan (Eds.), Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin (pp. 465-480). New York: Psychology Press.
  • Gentzsch, W., Lecarpentier, D., & Wittenburg, P. (2014). Big data in science and the EUDAT project. In Proceedings of the 2014 Annual SRII Global Conference.
  • Gerakaki, S. (2020). The moment in between: Planning speech while listening. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies, using positron emission tomography, have during different experimental tasks revealed decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Gialluisi, A., Newbury, D. F., Wilcutt, E. G., Olson, R. K., DeFries, J. C., Brandler, W. M., Pennington, B. F., Smith, S. D., Scerri, T. S., Simpson, N. H., The SLI Consortium, Luciano, M., Evans, D. M., Bates, T. C., Stein, J. F., Talcott, J. B., Monaco, A. P., Paracchini, S., Francks, C., & Fisher, S. E. (2014). Genome-wide screening for DNA variants associated with reading and language traits. Genes, Brain and Behavior, 13, 686-701. doi:10.1111/gbb.12158.

    Abstract

    Reading and language abilities are heritable traits that are likely to share some genetic influences with each other. To identify pleiotropic genetic variants affecting these traits, we first performed a Genome-wide Association Scan (GWAS) meta-analysis using three richly characterised datasets comprising individuals with histories of reading or language problems, and their siblings. GWAS was performed in a total of 1862 participants using the first principal component computed from several quantitative measures of reading- and language-related abilities, both before and after adjustment for performance IQ. We identified novel suggestive associations at the SNPs rs59197085 and rs5995177 (uncorrected p ≈ 10⁻⁷ for each SNP), located respectively at the CCDC136/FLNC and RBFOX2 genes. Each of these SNPs then showed evidence for effects across multiple reading and language traits in univariate association testing against the individual traits. FLNC encodes a structural protein involved in cytoskeleton remodelling, while RBFOX2 is an important regulator of alternative splicing in neurons. The CCDC136/FLNC locus showed association with a comparable reading/language measure in an independent sample of 6434 participants from the general population, although involving distinct alleles of the associated SNP. Our datasets will form an important part of on-going international efforts to identify genes contributing to reading and language skills.
  • Gialluisi, A., Pippucci, T., & Romeo, G. (2014). Reply to ten Kate et al. European Journal of Human Genetics, 22(2), 157-158. doi:10.1038/ejhg.2013.153.
  • Giering, E., Tinbergen, M., & Verbunt, A. (2009). Research Report 2007 | 2008. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Gilbers, S., Hoeksema, N., De Bot, K., & Lowie, W. (2020). Regional variation in West and East Coast African-American English prosody and rap flows. Language and Speech, 63(4), 713-745. doi:10.1177/0023830919881479.

    Abstract

    Regional variation in African-American English (AAE) is especially salient to its speakers involved with hip-hop culture, as hip-hop assigns great importance to regional identity and regional accents are a key means of expressing regional identity. However, little is known about AAE regional variation regarding prosodic rhythm and melody. In hip-hop music, regional variation can also be observed, with different regions’ rap performances being characterized by distinct “flows” (i.e., rhythmic and melodic delivery), an observation which has not been quantitatively investigated yet. This study concerns regional variation in AAE speech and rap, specifically regarding the United States’ East and West Coasts. It investigates how East Coast and West Coast AAE prosody are distinct, how East Coast and West Coast rap flows differ, and whether the two domains follow a similar pattern: more rhythmic and melodic variation on the West Coast compared to the East Coast for both speech and rap. To this end, free speech and rap recordings of 16 prominent African-American members of the East Coast and West Coast hip-hop communities were phonetically analyzed regarding rhythm (e.g., syllable isochrony and musical timing) and melody (i.e., pitch fluctuation) using a combination of existing and novel methodological approaches. The results mostly confirm the hypotheses that East Coast AAE speech and rap are less rhythmically diverse and more monotone than West Coast AAE speech and rap, respectively. They also show that regional variation in AAE prosody and rap flows pattern in similar ways, suggesting a connection between rhythm and melody in language and music.
  • Glaser, B., & Holmans, P. (2009). Comparison of methods for combining case-control and family-based association studies. Human Heredity, 68(2), 106-116. doi:10.1159/000212503.

    Abstract

    OBJECTIVES: Combining the analysis of family-based samples with unrelated individuals can enhance the power of genetic association studies. Various combined analysis techniques have been recently developed; as yet, there have been no comparisons of their power, or robustness to confounding factors. We investigated empirically the power of up to six combined methods using simulated samples of trios and unrelated cases/controls (TDTCC), trios and unrelated controls (TDTC), and affected sibpairs with parents and unrelated cases/controls (ASPFCC). METHODS: We simulated multiplicative, dominant and recessive models with varying risk parameters in single samples. Additionally, we studied false-positive rates and investigated, if possible, the coverage of the true genetic effect (TDTCC). RESULTS/CONCLUSIONS: Under the TDTCC design, we identified four approaches with equivalent power and false-positive rates. Combined statistics were more powerful than single-sample statistics or a pooled χ²-statistic when risk parameters were similar in single samples. Adding parental information to the CC part of the joint likelihood increased the power of generalised logistic regression under the TDTC but not the TDTCC scenario. Formal testing of differences between risk parameters in subsamples was the most sensitive approach to avoid confounding in combined analysis. Non-parametric analysis based on Monte-Carlo testing showed the highest power for ASPFCC samples.
  • De Goede, D., Shapiro, L. P., Wester, F., Swinney, D. A., & Bastiaanse, Y. R. M. (2009). The time course of verb processing in Dutch sentences. Journal of Psycholinguistic Research, 38(3), 181-199. doi:10.1007/s10936-009-9117-3.

    Abstract

    The verb has traditionally been characterized as the central element in a sentence. Nevertheless, the exact role of the verb during the actual ongoing comprehension of a sentence as it unfolds in time remains largely unknown. This paper reports the results of two Cross-Modal Lexical Priming (CMLP) experiments detailing the pattern of verb priming during on-line processing of Dutch sentences. Results are contrasted with data from a third CMLP experiment on priming of nouns in similar sentences. It is demonstrated that the meaning of a matrix verb remains active throughout the entire matrix clause, while this is not the case for the meaning of a subject head noun. Activation of the meaning of the verb only dissipates upon encountering a clear signal as to the start of a new clause.
  • Goldin-Meadow, S., Ozyurek, A., Sancar, B., & Mylander, C. (2009). Making language around the globe: A cross-linguistic study of homesign in the United States, China, and Turkey. In J. Guo, E. Lieven, N. Budwig, S. Ervin-Tripp, K. Nakamura, & S. Ozcaliskan (Eds.), Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin (pp. 27-39). New York: Psychology Press.
  • Goldin-Meadow, S., Gentner, D., Ozyurek, A., & Gurcanli, O. (2009). Spatial language supports spatial cognition: Evidence from deaf homesigners [abstract]. Cognitive Processing, 10(Suppl. 2), S133-S134.
  • Goldsborough, Z., Van Leeuwen, E. J. C., Kolff, K. W. T., De Waal, F. B. M., & Webb, C. E. (2020). Do chimpanzees (Pan troglodytes) console a bereaved mother? Primates, 61, 93-102. doi:10.1007/s10329-019-00752-x.

    Abstract

    Comparative thanatology encompasses the study of death-related responses in non-human animals and aspires to elucidate the evolutionary origins of human behavior in the context of death. Many reports have revealed that humans are not the only species affected by the death of group members. Non-human primates in particular show behaviors such as congregating around the deceased, carrying the corpse for prolonged periods of time (predominantly mothers carrying dead infants), and inspecting the corpse for signs of life. Here, we extend the focus on death-related responses in non-human animals by exploring whether chimpanzees are inclined to console the bereaved: the individual(s) most closely associated with the deceased. We report a case in which a chimpanzee (Pan troglodytes) mother experienced the loss of her fully developed infant (presumed stillborn). Using observational data to compare the group members’ behavior before and after the death, we found that a substantial number of group members selectively increased their affiliative expressions toward the bereaved mother. Moreover, on the day of the death, we observed heightened expressions of species-typical reassurance behaviors toward the bereaved mother. After ruling out several alternative explanations, we propose that many of the chimpanzees consoled the bereaved mother by means of affiliative and selective empathetic expressions.
  • González Alonso, J., Alemán Bañón, J., DeLuca, V., Miller, D., Pereira Soares, S. M., Puig-Mayenco, E., Slaats, S., & Rothman, J. (2020). Event related potentials at initial exposure in third language acquisition: Implications from an artificial mini-grammar study. Journal of Neurolinguistics, 56: 100939. doi:10.1016/j.jneuroling.2020.100939.

    Abstract

    The present article examines the proposal that typology is a major factor guiding transfer selectivity in L3/Ln acquisition. We tested first exposure in L3/Ln using two artificial languages (ALs) lexically based in English and Spanish, focusing on gender agreement between determiners and nouns, and between nouns and adjectives. 50 L1 Spanish-L2 English speakers took part in the experiment. After receiving implicit training in one of the ALs (Mini-Spanish, N = 26; Mini-English, N = 24), gender violations elicited a fronto-lateral negativity in Mini-English in the earliest time window (200–500 ms), although this was not followed by any other differences in subsequent periods. This effect was highly localized, surfacing only in electrodes of the right-anterior region. In contrast, gender violations in Mini-Spanish elicited a broadly distributed positivity in the 300–600 ms time window. While we do not find typical indices of grammatical processing such as the P600 component, we believe that the between-groups differential appearance of the positivity for gender violations in the 300–600 ms time window reflects differential allocation of attentional resources as a function of the ALs’ lexical similarity to English or Spanish. We take these differences in attention to be precursors of the processes involved in transfer source selection in L3/Ln.
  • Gonzalez Gomez, N., Hayashi, A., Tsuji, S., Mazuka, R., & Nazzi, T. (2014). The role of the input on the development of the LC bias: A crosslinguistic comparison. Cognition, 132(3), 301-311. doi:10.1016/j.cognition.2014.04.004.

    Abstract

    Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (i.e. “bat”) than the opposite CL pattern (i.e. “tap”). This bias has initially been interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese IDS and ADS revealed the existence of an overall LC bias, except for plosive sequences in ADS, which show a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to a linguistic input.
  • Goodhew, S. C., & Kidd, E. (2020). Bliss is blue and bleak is grey: Abstract word-colour associations influence objective performance even when not task relevant. Acta Psychologica, 206: 103067. doi:10.1016/j.actpsy.2020.103067.

    Abstract

    Humans associate abstract words with physical stimulus dimensions, such as linking upward locations with positive concepts (e.g., happy = up). These associations manifest both via subjective reports of associations and on objective performance metrics. Humans also report subjective associations between colours and abstract words (e.g., joy is linked to yellow). Here we tested whether such associations manifest on objective task performance, even when not task-relevant. Across three experiments, participants were presented with abstract words in physical colours that were either congruent with previously-reported subjective word-colour associations (e.g., victory in red and unhappy in blue), or were incongruent (e.g., victory in blue and unhappy in red). In Experiment 1, participants' task was to identify the valence of words. This congruency manipulation systematically affected objective task performance. In Experiment 2, participants completed two blocks, a valence-identification and a colour-identification task block. Both tasks produced congruency effects on performance, however, the results of the colour identification block could have reflected learning effects (i.e., associating the more common congruent colour with the word). This issue was rectified in Experiment 3, whereby participants completed the same two tasks as Experiment 2, but now matched congruent and incongruent pairs were used for both tasks. Again, both tasks produced reliable congruency effects. Item analyses in each experiment revealed that these effects demonstrated a degree of item specificity. Overall, there was clear evidence that at least some abstract word-colour pairings can systematically affect behaviour.
  • Goodhew, S. C., McGaw, B., & Kidd, E. (2014). Why is the sunny side always up? Explaining the spatial mapping of concepts by language use. Psychonomic Bulletin & Review, 21(5), 1287-1293. doi:10.3758/s13423-014-0593-6.

    Abstract

    Humans appear to rely on spatial mappings to represent and describe concepts. The conceptual cuing effect describes the tendency for participants to orient attention to a spatial location following the presentation of an unrelated cue word (e.g., orienting attention upward after reading the word sky). To date, such effects have predominately been explained within the embodied cognition framework, according to which people’s attention is oriented on the basis of prior experience (e.g., sky → up via perceptual simulation). However, this does not provide a compelling explanation for how abstract words have the same ability to orient attention. Why, for example, does dream also orient attention upward? We report on an experiment that investigated the role of language use (specifically, collocation between concept words and spatial words for up and down dimensions) and found that it predicted the cuing effect. The results suggest that language usage patterns may be instrumental in explaining conceptual cuing.
  • Gordon, J. K., & Clough, S. (2020). How fluent? Part B. Underlying contributors to continuous measures of fluency in aphasia. Aphasiology, 34(5), 643-663. doi:10.1080/02687038.2020.1712586.

    Abstract

    Background: While persons with aphasia (PwA) are often dichotomised as fluent or nonfluent, agreement that fluency is not an all-or-nothing construct has led to the use of continuous variables as a way to quantify fluency, such as multi-dimensional rating scales, speech rate, and utterance length. Though these measures are often used in research, they provide little information about the underlying fluency deficit.
    Aim: The aim of the study was to identify how well commonly used continuous measures of fluency capture variability in spontaneous speech variables at lexical, grammatical, and speech production levels.
    Methods & Procedures: Speech samples of 254 English-speaking PwA from the AphasiaBank database were analyzed to examine the distributions of four continuous measures of fluency: the WAB-R fluency scale, utterance length, retracing, and speech rate. Linear regression was used to identify spontaneous speech predictors contributing to each fluency outcome measure.
    Outcomes & Results: All the outcome measures reflected the influence of multiple underlying dimensions, although the predictors varied. The WAB-R fluency scale, speech rate, and retracing were influenced by measures of grammatical competence, lexical retrieval, and speech production, whereas utterance length was influenced only by measures of grammatical competence and lexical retrieval. The strongest predictor of WAB-R fluency was aphasia severity, whereas the strongest predictor for all other fluency proxy measures was grammatical complexity.
    Conclusions: Continuous measures allow a variety of ways to objectively quantify speech fluency; however, they reflect superficial manifestations of fluency that may be affected by multiple underlying deficits. Furthermore, the deficits underlying different measures vary, which may reduce the reliability of fluency diagnoses. Capturing these differences at the individual level is critical to accurate diagnosis and appropriately targeted therapy.
  • Goregliad Fjaellingsdal, T., Schwenke, D., Scherbaum, S., Kuhlen, A. K., Bögels, S., Meekes, J., & Bleichner, M. G. (2020). Expectancy effects in the EEG during joint and spontaneous word-by-word sentence production in German. Scientific Reports, 10: 5460. doi:10.1038/s41598-020-62155-z.

    Abstract

    Our aim in the present study is to measure neural correlates during spontaneous interactive sentence production. We present a novel approach using the word-by-word technique from improvisational theatre, in which two speakers jointly produce one sentence. This paradigm allows the assessment of behavioural aspects, such as turn-times, and electrophysiological responses, such as event-related-potentials (ERPs). Twenty-five participants constructed a cued but spontaneous four-word German sentence together with a confederate, taking turns for each word of the sentence. In 30% of the trials, the confederate uttered an unexpected gender-marked article. To complete the sentence in a meaningful way, the participant had to detect the violation and retrieve and utter a new fitting response. We found significant increases in response times after unexpected words and – despite allowing unscripted language production and naturally varying speech material – successfully detected significant N400 and P600 ERP effects for the unexpected word. The N400 EEG activity further significantly predicted the response time of the subsequent turn. Our results show that combining behavioural and neuroscientific measures of verbal interactions while retaining sufficient experimental control is possible, and that this combination provides promising insights into the mechanisms of spontaneous spoken dialogue.
  • Gori, M., Vercillo, T., Sandini, G., & Burr, D. (2014). Tactile feedback improves auditory spatial localization. Frontiers in Psychology, 5: 1121. doi:10.3389/fpsyg.2014.01121.

    Abstract

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
  • Goriot, C., McQueen, J. M., Unsworth, S., & Van Hout, R. (2020). Perception of English phonetic contrasts by Dutch children: How bilingual are early-English learners? PLoS One, 15(3): e0229902. doi:10.1371/journal.pone.0229902.

    Abstract

    The aim of this study was to investigate whether early-English education benefits the perception of English phonetic contrasts that are known to be perceptually confusable for Dutch native speakers, comparing Dutch pupils who were enrolled in an early-English programme at school from the age of four with pupils in a mainstream programme with English instruction from the age of 11, and English-Dutch early bilingual children. Children were 4-5-year-olds (start of primary school), 8-9-year-olds, or 11-12-year-olds (end of primary school). Children were tested on four contrasts that varied in difficulty: /b/-/s/ (easy), /k/-/ɡ/ (intermediate), /f/-/θ/ (difficult), /ε/-/æ/ (very difficult). Bilingual children outperformed the two other groups on all contrasts except /b/-/s/. Early-English pupils did not outperform mainstream pupils on any of the contrasts. This shows that early-English education as it is currently implemented is not beneficial for pupils’ perception of non-native contrasts.

    Additional information

    Supporting information
  • Goudbeek, M., Swingley, D., & Smits, R. (2009). Supervised and unsupervised learning of multidimensional acoustic categories. Journal of Experimental Psychology: Human Perception and Performance, 35, 1913-1933. doi:10.1037/a0015781.

    Abstract

    Learning to recognize the contrasts of a language-specific phonemic repertoire can be viewed as forming categories in a multidimensional psychophysical space. Research on the learning of distributionally defined visual categories has shown that categories defined over 1 dimension are easy to learn and that learning multidimensional categories is more difficult but tractable under specific task conditions. In 2 experiments, adult participants learned either a unidimensional or a multidimensional category distinction with or without supervision (feedback) during learning. The unidimensional distinctions were readily learned and supervision proved beneficial, especially in maintaining category learning beyond the learning phase. Learning the multidimensional category distinction proved to be much more difficult and supervision was not nearly as beneficial as with unidimensionally defined categories. Maintaining a learned multidimensional category distinction was only possible when the distributional information that identified the categories remained present throughout the testing phase. We conclude that listeners are sensitive to both trial-by-trial feedback and the distributional information in the stimuli. Even given limited exposure, listeners learned to use 2 relevant dimensions, albeit with considerable difficulty.
  • De Graaf, T. A., Thomson, A., Janssens, S. E. W., Van Bree, S., Ten Oever, S., & Sack, A. T. (2020). Does alpha phase modulate visual target detection? Three experiments with tACS-phase-based stimulus presentation. European Journal of Neuroscience, 51(11), 2299-2313. doi:10.1111/ejn.14677.

    Abstract

    In recent years, the influence of alpha (7–13 Hz) phase on visual processing has received a lot of attention. Magneto‐/encephalography (M/EEG) studies showed that alpha phase indexes visual excitability and task performance. Studies with transcranial alternating current stimulation (tACS) aim to modulate oscillations and causally impact task performance. Here, we applied right occipital tACS (O2 location) to assess the functional role of alpha phase in a series of experiments. We presented visual stimuli at different pre‐determined, experimentally controlled, phases of the entraining tACS signal, hypothesizing that this should result in an oscillatory pattern of visual performance in specifically left hemifield detection tasks. In experiment 1, we applied 10 Hz tACS and used separate psychophysical staircases for six equidistant tACS‐phase conditions, obtaining contrast thresholds for detection of visual gratings in left or right hemifield. In experiments 2 and 3, tACS was at EEG‐based individual peak alpha frequency. In experiment 2, we measured detection rates for gratings with (pseudo‐)fixed contrast. In experiment 3, participants detected brief luminance changes in a custom‐built LED device, at eight equidistant alpha phases. In none of the experiments did the primary outcome measure over phase conditions consistently reflect a one‐cycle sinusoid. However, post hoc analyses of reaction times (RT) suggested that tACS alpha phase did modulate RT for specifically left hemifield targets in both experiments 1 and 2 (not measured in experiment 3). This observation requires future confirmation, but is in line with the idea that alpha phase causally gates visual inputs through cortical excitability modulation.

    Additional information

    Supporting Information
  • Grabe, E. (1998). Comparative intonational phonology: English and German. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057683.
  • Graham, S. A., Jégouzo, S. A. F., Yan, S., Powlesland, A. S., Brady, J. P., Taylor, M. E., & Drickamer, K. (2009). Prolectin, a glycan-binding receptor on dividing B cells in germinal centers. The Journal of Biological Chemistry, 284, 18537-18544. doi:10.1074/jbc.M109.012807.

    Abstract

    Prolectin, a previously undescribed glycan-binding receptor, has been identified by re-screening of the human genome for genes encoding proteins containing potential C-type carbohydrate-recognition domains. Glycan array analysis revealed that the carbohydrate-recognition domain in the extracellular domain of the receptor binds glycans with terminal α-linked mannose or fucose residues. Prolectin expressed in fibroblasts is found at the cell surface, but unlike many glycan-binding receptors it does not mediate endocytosis of a neoglycoprotein ligand. However, compared with other known glycan-binding receptors, the receptor contains an unusually large intracellular domain that consists of multiple sequence motifs, including phosphorylated tyrosine residues, that allow it to interact with signaling molecules such as Grb2. Immunohistochemistry has been used to demonstrate that prolectin is expressed on a specialized population of proliferating B cells in germinal centers. Thus, this novel receptor has the potential to function in carbohydrate-mediated communication between cells in the germinal center.
  • Grasby, K. L., Jahanshad, N., Painter, J. N., Colodro-Conde, L., Bralten, J., Hibar, D. P., Lind, P. A., Pizzagalli, F., Ching, C. R. K., McMahon, M. A. B., Shatokhina, N., Zsembik, L. C. P., Thomopoulos, S. I., Zhu, A. H., Strike, L. T., Agartz, I., Alhusaini, S., Almeida, M. A. A., Alnæs, D., Amlien, I. K., Andersson, M., Ard, T., Armstrong, N. J., Ashley-Koch, A., Atkins, J. R., Bernard, M., Brouwer, R. M., Buimer, E. E. L., Bülow, R., Bürger, C., Cannon, D. M., Chakravarty, M., Chen, Q., Cheung, J. W., Couvy-Duchesne, B., Dale, A. M., Dalvie, S., De Araujo, T. K., De Zubicaray, G. I., De Zwarte, S. M. C., Den Braber, A., Doan, N. T., Dohm, K., Ehrlich, S., Engelbrecht, H.-R., Erk, S., Fan, C. C., Fedko, I. O., Foley, S. F., Ford, J. M., Fukunaga, M., Garrett, M. E., Ge, T., Giddaluru, S., Goldman, A. L., Green, M. J., Groenewold, N. A., Grotegerd, D., Gurholt, T. P., Gutman, B. A., Hansell, N. K., Harris, M. A., Harrison, M. B., Haswell, C. C., Hauser, M., Herms, S., Heslenfeld, D. J., Ho, N. F., Hoehn, D., Hoffmann, P., Holleran, L., Hoogman, M., Hottenga, J.-J., Ikeda, M., Janowitz, D., Jansen, I. E., Jia, T., Jockwitz, C., Kanai, R., Karama, S., Kasperaviciute, D., Kaufmann, T., Kelly, S., Kikuchi, M., Klein, M., Knapp, M., Knodt, A. R., Krämer, B., Lam, M., Lancaster, T. M., Lee, P. H., Lett, T. A., Lewis, L. B., Lopes-Cendes, I., Luciano, M., Macciardi, F., Marquand, A. F., Mathias, S. R., Melzer, T. R., Milaneschi, Y., Mirza-Schreiber, N., Moreira, J. C. V., Mühleisen, T. W., Müller-Myhsok, B., Najt, P., Nakahara, S., Nho, K., Olde Loohuis, L. M., Orfanos, D. P., Pearson, J. F., Pitcher, T. L., Pütz, B., Quidé, Y., Ragothaman, A., Rashid, F. M., Reay, W. R., Redlich, R., Reinbold, C. S., Repple, J., Richard, G., Riedel, B. C., Risacher, S. L., Rocha, C. S., Mota, N. R., Salminen, L., Saremi, A., Saykin, A. J., Schlag, F., Schmaal, L., Schofield, P. R., Secolin, R., Shapland, C. Y., Shen, L., Shin, J., Shumskaya, E., Sønderby, I. E., Sprooten, E., Tansey, K. E., Teumer, A., Thalamuthu, A., Tordesillas-Gutiérrez, D., Turner, J. A., Uhlmann, A., Vallerga, C. L., Van der Meer, D., Van Donkelaar, M. M. J., Van Eijk, L., Van Erp, T. G. M., Van Haren, N. E. M., Van Rooij, D., Van Tol, M.-J., Veldink, J. H., Verhoef, E., Walton, E., Wang, M., Wang, Y., Wardlaw, J. M., Wen, W., Westlye, L. T., Whelan, C. D., Witt, S. H., Wittfeld, K., Wolf, C., Wolfers, T., Wu, J. Q., Yasuda, C. L., Zaremba, D., Zhang, Z., Zwiers, M. P., Artiges, E., Assareh, A. A., Ayesa-Arriola, R., Belger, A., Brandt, C. L., Brown, G. G., Cichon, S., Curran, J. E., Davies, G. E., Degenhardt, F., Dennis, M. F., Dietsche, B., Djurovic, S., Doherty, C. P., Espiritu, R., Garijo, D., Gil, Y., Gowland, P. A., Green, R. C., Häusler, A. N., Heindel, W., Ho, B.-C., Hoffmann, W. U., Holsboer, F., Homuth, G., Hosten, N., Jack Jr., C. R., Jang, M., Jansen, A., Kimbrel, N. A., Kolskår, K., Koops, S., Krug, A., Lim, K. O., Luykx, J. J., Mathalon, D. H., Mather, K. A., Mattay, V. S., Matthews, S., Mayoral Van Son, J., McEwen, S. C., Melle, I., Morris, D. W., Mueller, B. A., Nauck, M., Nordvik, J. E., Nöthen, M. M., O’Leary, D. S., Opel, N., Paillère Martinot, M.-L., Pike, G. B., Preda, A., Quinlan, E. B., Rasser, P. E., Ratnakar, V., Reppermund, S., Steen, V. M., Tooney, P. A., Torres, F. R., Veltman, D. J., Voyvodic, J. T., Whelan, R., White, T., Yamamori, H., Adams, H. H. H., Bis, J. C., Debette, S., Decarli, C., Fornage, M., Gudnason, V., Hofer, E., Ikram, M. A., Launer, L., Longstreth, W. T., Lopez, O. L., Mazoyer, B., Mosley, T. H., Roshchupkin, G. V., Satizabal, C. L., Schmidt, R., Seshadri, S., Yang, Q., Alzheimer’s Disease Neuroimaging Initiative, CHARGE Consortium, EPIGEN Consortium, IMAGEN Consortium, SYS Consortium, Parkinson’s Progression Markers Initiative, Alvim, M. K. M., Ames, D., Anderson, T. J., Andreassen, O. A., Arias-Vasquez, A., Bastin, M. E., Baune, B. T., Beckham, J. C., Blangero, J., Boomsma, D. I., Brodaty, H., Brunner, H. G., Buckner, R. L., Buitelaar, J. K., Bustillo, J. R., Cahn, W., Cairns, M. J., Calhoun, V., Carr, V. J., Caseras, X., Caspers, S., Cavalleri, G. L., Cendes, F., Corvin, A., Crespo-Facorro, B., Dalrymple-Alford, J. C., Dannlowski, U., De Geus, E. J. C., Deary, I. J., Delanty, N., Depondt, C., Desrivières, S., Donohoe, G., Espeseth, T., Fernández, G., Fisher, S. E., Flor, H., Forstner, A. J., Francks, C., Franke, B., Glahn, D. C., Gollub, R. L., Grabe, H. J., Gruber, O., Håberg, A. K., Hariri, A. R., Hartman, C. A., Hashimoto, R., Heinz, A., Henskens, F. A., Hillegers, M. H. J., Hoekstra, P. J., Holmes, A. J., Hong, L. E., Hopkins, W. D., Hulshoff Pol, H. E., Jernigan, T. L., Jönsson, E. G., Kahn, R. S., Kennedy, M. A., Kircher, T. T. J., Kochunov, P., Kwok, J. B. J., Le Hellard, S., Loughland, C. M., Martin, N. G., Martinot, J.-L., McDonald, C., McMahon, K. L., Meyer-Lindenberg, A., Michie, P. T., Morey, R. A., Mowry, B., Nyberg, L., Oosterlaan, J., Ophoff, R. A., Pantelis, C., Paus, T., Pausova, Z., Penninx, B. W. J. H., Polderman, T. J. C., Posthuma, D., Rietschel, M., Roffman, J. L., Rowland, L. M., Sachdev, P. S., Sämann, P. G., Schall, U., Schumann, G., Scott, R. J., Sim, K., Sisodiya, S. M., Smoller, J. W., Sommer, I. E., St Pourcain, B., Stein, D. J., Toga, A. W., Trollor, J. N., Van der Wee, N. J. A., van 't Ent, D., Völzke, H., Walter, H., Weber, B., Weinberger, D. R., Wright, M. J., Zhou, J., Stein, J. L., Thompson, P. M., & Medland, S. E. (2020). The genetic architecture of the human cerebral cortex. Science, 367(6484): eaay6690. doi:10.1126/science.aay6690.

    Abstract

    The cerebral cortex underlies our complex cognitive capabilities, yet little is known about the specific genetic loci that influence human cortical structure. To identify genetic variants that affect cortical structure, we conducted a genome-wide association meta-analysis of brain magnetic resonance imaging data from 51,665 individuals. We analyzed the surface area and average thickness of the whole cortex and 34 regions with known functional specializations. We identified 199 significant loci and found significant enrichment for loci influencing total surface area within regulatory elements that are active during prenatal cortical development, supporting the radial unit hypothesis. Loci that affect regional surface area cluster near genes in Wnt signaling pathways, which influence progenitor expansion and areal identity. Variation in cortical structure is genetically correlated with cognitive function, Parkinson’s disease, insomnia, depression, neuroticism, and attention deficit hyperactivity disorder.
  • De Grauwe, S., Willems, R. M., Rüschemeyer, S.-A., Lemhöfer, K., & Schriefers, H. (2014). Embodied language in first- and second-language speakers: Neural correlates of processing motor verbs. Neuropsychologia, 56, 334-349. doi:10.1016/j.neuropsychologia.2014.02.003.

    Abstract

    The involvement of neural motor and sensory systems in the processing of language has so far mainly been studied in native (L1) speakers. In an fMRI experiment, we investigated whether non-native (L2) semantic representations are rich enough to allow for activation in motor and somatosensory brain areas. German learners of Dutch and a control group of Dutch native speakers made lexical decisions about visually presented Dutch motor and non-motor verbs. Region-of-interest (ROI) and whole-brain analyses indicated that L2 speakers, like L1 speakers, showed significantly increased activation for simple motor compared to non-motor verbs in motor and somatosensory regions. This effect was not restricted to Dutch-German cognate verbs, but was also present for non-cognate verbs. These results indicate that L2 semantic representations are rich enough for motor-related activations to develop in motor and somatosensory areas.
  • De Grauwe, S., Lemhöfer, K., Willems, R. M., & Schriefers, H. (2014). L2 speakers decompose morphologically complex verbs: fMRI evidence from priming of transparent derived verbs. Frontiers in Human Neuroscience, 8: 802. doi:10.3389/fnhum.2014.00802.

    Abstract

    In this functional magnetic resonance imaging (fMRI) long-lag priming study, we investigated the processing of Dutch semantically transparent, derived prefix verbs. In such words, the meaning of the word as a whole can be deduced from the meanings of its parts, e.g., wegleggen “put aside.” Many behavioral and some fMRI studies suggest that native (L1) speakers decompose transparent derived words. The brain region usually implicated in morphological decomposition is the left inferior frontal gyrus (LIFG). In non-native (L2) speakers, the processing of transparent derived words has hardly been investigated, especially in fMRI studies, and results are contradictory: some studies find more reliance on holistic (i.e., non-decompositional) processing by L2 speakers; some find no difference between L1 and L2 speakers. In this study, we wanted to find out whether Dutch transparent derived prefix verbs are decomposed or processed holistically by German L2 speakers of Dutch. Half of the derived verbs (e.g., omvallen “fall down”) were preceded by their stem (e.g., vallen “fall”) with a lag of 4–6 words (“primed”); the other half (e.g., inslapen “fall asleep”) were not (“unprimed”). L1 and L2 speakers of Dutch made lexical decisions on these visually presented verbs. Both region of interest analyses and whole-brain analyses showed that there was a significant repetition suppression effect for primed compared to unprimed derived verbs in the LIFG. This was true both for the analyses over L2 speakers only and for the analyses over the two language groups together. The latter did not reveal any interaction with language group (L1 vs. L2) in the LIFG. Thus, L2 speakers show a clear priming effect in the LIFG, an area that has been associated with morphological decomposition. Our findings are consistent with the idea that L2 speakers engage in decomposition of transparent derived verbs rather than processing them holistically

    Additional information

    Data Sheet 1.docx
  • Gray, R., & Jordan, F. (2000). Language trees support the express-train sequence of Austronesian expansion. Nature, 405, 1052-1055. doi:10.1038/35016575.

    Abstract

    Languages, like molecules, document evolutionary history. Darwin(1) observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions(2-4), few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses - the “express-train”(5) and the “entangled-bank”(6,7) models - for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
  • Griffin, Z. M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274-279. doi:10.1111/1467-9280.00255.

    Abstract

    To study the time course of sentence formulation, we monitored the eye movements of speakers as they described simple events. The similarity between speakers' initial eye movements and those of observers performing a nonverbal event-comprehension task suggested that response-relevant information was rapidly extracted from scenes, allowing speakers to select grammatical subjects based on comprehended events rather than salience. When speaking extemporaneously, speakers began fixating pictured elements less than a second before naming them within their descriptions, a finding consistent with incremental lexical encoding. Eye movements anticipated the order of mention despite changes in picture orientation, in who-did-what-to-whom, and in sentence structure. The results support Wundt's theory of sentence production.

  • Guadalupe, T., Willems, R. M., Zwiers, M., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Franke, B., Fisher, S. E., & Francks, C. (2014). Differences in cerebral cortical anatomy of left- and right-handers. Frontiers in Psychology, 5: 261. doi:10.3389/fpsyg.2014.00261.

    Abstract

    The left and right sides of the human brain are specialized for different kinds of information processing, and much of our cognition is lateralized to an extent towards one side or the other. Handedness is a reflection of nervous system lateralization. Roughly ten percent of people are mixed- or left-handed, and they show an elevated rate of reductions or reversals of some cerebral functional asymmetries compared to right-handers. Brain anatomical correlates of left-handedness have also been suggested. However, the relationships of left-handedness to brain structure and function remain far from clear. We carried out a comprehensive analysis of cortical surface area differences between 106 left-handed subjects and 1960 right-handed subjects, measured using an automated method of regional parcellation (FreeSurfer, Destrieux atlas). This is the largest study sample that has so far been used in relation to this issue. No individual cortical region showed an association with left-handedness that survived statistical correction for multiple testing, although there was a nominally significant association with the surface area of a previously implicated region: the left precentral sulcus. Identifying brain structural correlates of handedness may prove useful for genetic studies of cerebral asymmetries, as well as providing new avenues for the study of relations between handedness, cerebral lateralization and cognition.
  • Guadalupe, T., Zwiers, M. P., Teumer, A., Wittfeld, K., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2014). Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets. Human Brain Mapping, 35(7), 3277-3289. doi:10.1002/hbm.22401.

    Abstract

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10⁻⁸). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries.
  • Gubian, M., Torreira, F., Strik, H., & Boves, L. (2009). Functional data analysis as a tool for analyzing speech dynamics a case study on the French word c'était. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 2199-2202).

    Abstract

    In this paper we introduce Functional Data Analysis (FDA) as a tool for analyzing dynamic transitions in speech signals. FDA makes it possible to perform statistical analyses of sets of mathematical functions in the same way as classical multivariate analysis treats scalar measurement data. We illustrate the use of FDA with a reduction phenomenon affecting the French word c'était /setε/ 'it was', which can be reduced to [stε] in conversational speech. FDA reveals that the dynamics of the transition from [s] to [t] in fully reduced cases may still be different from the dynamics of [s] - [t] transitions in underlying /st/ clusters such as in the word stage.
  • Le Guen, O. (2009). Geocentric gestural deixis among Yucatecan Maya (Quintana Roo, México). In 18th IACCP Book of Selected Congress Papers (pp. 123-136). Athens, Greece: Pedio Books Publishing.
  • Le Guen, O. (2009). The ethnography of emotions: A field worker's guide. In A. Majid (Ed.), Field manual volume 12 (pp. 31-34). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446076.

    Abstract

    The goal of this task is to investigate cross-cultural emotion categories in language and thought. This entry is designed to provide researchers with some guidelines to describe the emotional repertoire of a community from an emic perspective. The first objective is to offer ethnographic tools and a questionnaire in order to understand the semantics of emotional terms and the local conception of emotions. The second objective is to identify the local display rules of emotions in communicative interactions.
  • Guerra, E., & Knoeferle, P. (2014). Spatial distance effects on incremental semantic interpretation of abstract sentences: Evidence from eye tracking. Cognition, 133(3), 535-552. doi:10.1016/j.cognition.2014.07.007.

    Abstract

    A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiment 2 and 3) on participants’ reading times for sentences that convey similarity or difference between two abstract nouns (e.g., ‘Peace and war are certainly different...’). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., ‘peace’, ‘war’). In Experiments 2 and 3, they turned but remained blank. Participants’ reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences.

    Additional information

    mmc1.doc
  • Guerra, E., Huettig, F., & Knoeferle, P. (2014). Assessing the time course of the influence of featural, distributional and spatial representations during reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2309-2314). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2014/papers/402/.

    Abstract

    What does semantic similarity between two concepts mean? How could we measure it? The way in which semantic similarity is calculated might differ depending on the theoretical notion of semantic representation. In an eye-tracking reading experiment, we investigated whether two widely used semantic similarity measures (based on featural or distributional representations) have distinctive effects on sentence reading times. In other words, we explored whether these measures of semantic similarity differ qualitatively. In addition, we examined whether visually perceived spatial distance interacts with either or both of these measures. Our results showed that the effect of featural and distributional representations on reading times can differ both in direction and in its time course. Moreover, both featural and distributional information interacted with spatial distance, yet in different sentence regions and reading measures. We conclude that featural and distributional representations are distinct components of semantic representation.
  • Guerra, E., & Knoeferle, P. (2014). Spatial distance modulates reading times for sentences about social relations: evidence from eye tracking. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2315-2320). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2014/papers/403/.

    Abstract

    Recent evidence from eye tracking during reading showed that non-referential spatial distance presented in a visual context can modulate semantic interpretation of similarity relations rapidly and incrementally. In two eye-tracking reading experiments we extended these findings in two important ways; first, we examined whether other semantic domains (social relations) could also be rapidly influenced by spatial distance during sentence comprehension. Second, we aimed to further specify how abstract language is co-indexed with spatial information by varying the syntactic structure of sentences between experiments. Spatial distance rapidly modulated reading times as a function of the social relation expressed by a sentence. Moreover, our findings suggest that abstract language can be co-indexed as soon as critical information becomes available for the reader.
  • Guest, O., Caso, A., & Cooper, R. P. (2020). On simulating neural damage in connectionist networks. Computational Brain & Behavior, 3, 289-321. doi:10.1007/s42113-020-00081-z.

    Abstract

    A key strength of connectionist modelling is its ability to simulate both intact cognition and the behavioural effects of neural damage. We survey the literature, showing that models have been damaged in a variety of ways, e.g. by removing connections, by adding noise to connection weights, by scaling weights, by removing units and by adding noise to unit activations. While these different implementations of damage have often been assumed to be behaviourally equivalent, some theorists have made aetiological claims that rest on nonequivalence. They suggest that related deficits with different aetiologies might be accounted for by different forms of damage within a single model. We present two case studies that explore the effects of different forms of damage in two influential connectionist models, each of which has been applied to explain neuropsychological deficits. Our results indicate that the effect of simulated damage can indeed be sensitive to the way in which damage is implemented, particularly when the environment comprises subsets of items that differ in their statistical properties, but such effects are sensitive to relatively subtle aspects of the model’s training environment. We argue that, as a consequence, substantial methodological care is required if aetiological claims about simulated neural damage are to be justified, and conclude more generally that implementation assumptions, including those concerning simulated damage, must be fully explored when evaluating models of neurological deficits, both to avoid over-extending the explanatory power of specific implementations and to ensure that reported results are replicable.
  • Guggenheim, J. A., Williams, C., Northstone, K., Howe, L. D., Tilling, K., St Pourcain, B., McMahon, G., & Lawlor, D. A. (2014). Does vitamin D mediate the protective effects of time outdoors on myopia? Findings from a prospective birth cohort. Investigative Ophthalmology & Visual Science, 55(12), 8550-8558. doi:10.1167/iovs.14-15839.
  • Güldemann, T., & Hammarström, H. (2020). Geographical axis effects in large-scale linguistic distributions. In M. Crevels, & P. Muysken (Eds.), Language Dispersal, Diversification, and Contact. Oxford: Oxford University Press.
  • Gullberg, M., & Kita, S. (2009). Attention to speech-accompanying gestures: Eye movements and information uptake. Journal of Nonverbal Behavior, 33(4), 251-277. doi:10.1007/s10919-009-0073-2.

    Abstract

    There is growing evidence that addressees in interaction integrate the semantic information conveyed by speakers’ gestures. Little is known, however, about whether and how addressees’ attention to gestures and the integration of gestural information can be modulated. This study examines the influence of a social factor (speakers’ gaze to their own gestures), and two physical factors (the gesture’s location in gesture space and gestural holds) on addressees’ overt visual attention to gestures (direct fixations of gestures) and their uptake of gestural information. It also examines the relationship between gaze and uptake. The results indicate that addressees’ overt visual attention to gestures is affected both by speakers’ gaze and holds but for different reasons, whereas location in space plays no role. Addressees’ uptake of gesture information is only influenced by speakers’ gaze. There is little evidence of a direct relationship between addressees’ direct fixations of gestures and their uptake.
  • Gullberg, M. (1998). Gesture as a communication strategy in second language discourse: A study of learners of French and Swedish. Lund: Lund University Press.

    Abstract

    Gestures are often regarded as the most typical compensatory device used by language learners in communicative trouble. Yet gestural solutions to communicative problems have rarely been studied within any theory of second language use. The work presented in this volume aims to account for second language learners’ strategic use of speech-associated gestures by combining a process-oriented framework for communication strategies with a cognitive theory of gesture. Two empirical studies are presented. The production study investigates Swedish learners of French and French learners of Swedish and their use of strategic gestures. The results, which are based on analyses of both individual and group behaviour, contradict popular opinion as well as theoretical assumptions from both fields. Gestures are not primarily used to replace speech, nor are they chiefly mimetic. Instead, learners use gestures with speech, and although they do exploit mimetic gestures to solve lexical problems, they also use more abstract gestures to handle discourse-related difficulties and metalinguistic commentary. The influence of factors such as proficiency, task, culture, and strategic competence on gesture use is discussed, and the oral and gestural strategic modes are compared. In the evaluation study, native speakers’ assessments of learners’ gestures, and the potential effect of gestures on evaluations of proficiency are analysed and discussed in terms of individual communicative style. Compensatory gestures function at multiple communicative levels. This has implications for theories of communication strategies, and an expansion of the existing frameworks is discussed taking both cognitive and interactive aspects into account.
  • Gullberg, M. (2009). Gestures and the development of semantic representations in first and second language acquisition. Acquisition et Interaction en Langue Etrangère… Languages, Interaction, and Acquisition (former AILE), 1, 117-139.

    Abstract

    This paper argues that speech-associated gestures can usefully inform studies exploring development of meaning in first and second language acquisition. The example domain is caused motion or placement meaning (putting a cup on a table) where acquisition problems have been observed and where adult native gesture use reflects crosslinguistically different placement verb semantics. Against this background, the paper summarises three studies examining the development of semantic representations in Dutch children acquiring Dutch, and adult learners’ acquiring Dutch and French placement verbs. Overall, gestures change systematically with semantic development both in children and adults and (1) reveal what semantic elements are included in current semantic representations, whether target-like or not, and (2) highlight developmental shifts in those representations. There is little evidence that gestures chiefly act as a support channel. Instead, the data support the theoretical notion that speech and gesture form an integrated system, opening new possibilities for studying the processes of acquisition.
  • Gullberg, M. (2009). Reconstructing verb meaning in a second language: How English speakers of L2 Dutch talk and gesture about placement. Annual Review of Cognitive Linguistics, 7, 221-245. doi:10.1075/arcl.7.09gul.

    Abstract

    This study examines to what extent English speakers of L2 Dutch reconstruct the meanings of placement verbs when moving from a general L1 verb of caused motion (put) to two specific caused posture verbs (zetten/leggen ‘set/lay’) in the L2 and whether the existence of low-frequency cognate forms in the L1 (set/lay) alleviates the reconstruction problem. Evidence from speech and gesture indicates that English speakers have difficulties with the specific verbs in L2 Dutch, initially looking for means to express general caused motion in L1-like fashion through over-generalisation. The gesture data further show that targetlike forms are often used to convey L1-like meaning. However, the differentiated use of zetten for vertical placement and dummy verbs (gaan ‘go’ and doen ‘do’) and intransitive posture verbs (zitten/staan/liggen ‘sit, stand, lie’) for horizontal placement, and a positive correlation between appropriate verb use and target-like gesturing suggest a beginning sensitivity to the semantic parameters of the L2 verbs and possible reconstruction.
  • Gullberg, M., Indefrey, P., & Muysken, P. (2009). Research techniques for the study of code-switching. In B. E. Bullock, & J. A. Toribio (Eds.), The Cambridge handbook on linguistic code-switching (pp. 21-39). Cambridge: Cambridge University Press.

    Abstract

    The aim of this chapter is to provide researchers with a tool kit of semi-experimental and experimental techniques for studying code-switching. It presents an overview of the current off-line and on-line research techniques, ranging from analyses of published bilingual texts of spontaneous conversations, to tightly controlled experiments. A multi-task approach used for studying code-switched sentence production in Papiamento-Dutch bilinguals is also exemplified.
  • Gullberg, M. (2009). Why gestures are relevant to the bilingual mental lexicon. In A. Pavlenko (Ed.), The bilingual mental lexicon: Interdisciplinary approaches (pp. 161-184). Clevedon: Multilingual Matters.

    Abstract

    Gestures, the symbolic movements speakers perform while they speak, are systematically related to speech and language in non-trivial ways. This chapter presents an overview of what gestures can and cannot tell us about the monolingual and the bilingual mental lexicon. Gesture analysis opens for a broader view of the mental lexicon, targeting the interface between conceptual, semantic and syntactic aspects of event construal, and offers new possibilities for examining how languages co-exist and interact in bilinguals beyond the level of surface forms. The first section of this chapter gives a brief introduction to gesture studies and outlines the current views on the relationship between gesture, speech, and language. The second section targets the key questions for the study of the monolingual and bilingual lexicon, and illustrates the methods employed for addressing these questions. It further exemplifies systematic cross-linguistic patterns in gestural behaviour in monolingual and bilingual contexts. The final section discusses some implications of an expanded view of the multilingual lexicon that includes gesture, and outlines directions for future inquiry.

  • Gussenhoven, C., Chen, Y., & Dediu, D. (Eds.). (2014). 4th International Symposium on Tonal Aspects of Language, Nijmegen, The Netherlands, May 13-16, 2014. ISCA Archive.
  • Gussenhoven, C., & Chen, A. (2000). Universal and language-specific effects in the perception of question intonation. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP) (pp. 91-94). Beijing: China Military Friendship Publish.

    Abstract

    Three groups of monolingual listeners, with Standard Chinese, Dutch and Hungarian as their native language, judged pairs of trisyllabic stimuli which differed only in their pitch pattern. The segmental structure of the stimuli was made up by the experimenters and presented to subjects as being taken from a little-known language spoken on a South Pacific island. Pitch patterns consisted of a single rise-fall located on or near the second syllable. By and large, listeners selected the stimulus with the higher peak, the later peak, and the higher end rise as the one that signalled a question, regardless of language group. The result is argued to reflect innate, non-linguistic knowledge of the meaning of pitch variation, notably Ohala’s Frequency Code. A significant difference between groups is explained as due to the influence of the mother tongue.
  • Haan, E. H. F., Seijdel, N., Kentridge, R. W., & Heywood, C. A. (2020). Plasticity versus chronicity: Stable performance on category fluency 40 years post‐onset. Journal of Neuropsychology, 14(1), 20-27. doi:10.1111/jnp.12180.

    Abstract

    What is the long‐term trajectory of semantic memory deficits in patients who have suffered structural brain damage? Memory is, by definition, a changing faculty. The traditional view is that after an initial recovery period, the mature human brain has little capacity to repair or reorganize. More recently, it has been suggested that the central nervous system may be more plastic with the ability to change in neural structure, connectivity, and function. The latter observations are, however, largely based on normal learning in healthy subjects. Here, we report a patient who suffered bilateral ventro‐medial damage after presumed herpes encephalitis in 1971. He was seen regularly in the eighties, and we recently had the opportunity to re‐assess his semantic memory deficits. On semantic category fluency, he showed a very clear category‐specific deficit, performing better than controls on non‐living categories and significantly worse on living items. Recent testing showed that his impairments have remained unchanged for more than 40 years. We suggest caution when extrapolating the concept of brain plasticity, as observed during normal learning, to plasticity in the context of structural brain damage.
  • Hagoort, P. (2000). De toekomstige eeuw der cognitieve neurowetenschap [inaugural lecture]. Katholieke Universiteit Nijmegen.

    Abstract

    Inaugural lecture delivered on 12 May 2000 upon acceptance of the professorship in neuropsychology at the Faculty of Social Sciences, Katholieke Universiteit Nijmegen (KUN).
  • Hagoort, P. (2009). The fractionation of spoken language understanding by measuring electrical and magnetic brain signals. In B. C. J. Moore, L. K. Tyler, & W. Marslen-Wilson (Eds.), The perception of speech: From sound to meaning (pp. 223-248). New York: Oxford University Press.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter. Psychologie, 17, 48-49.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk. Neuropraxis, 6, 204-205.
  • Hagoort, P. (2014). Introduction to section on language and abstract thought. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 615-618). Cambridge, Mass: MIT Press.
  • Hagoort, P., & Levinson, S. C. (2014). Neuropragmatics. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 667-674). Cambridge, Mass: MIT Press.
  • Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136-141. doi:10.1016/j.conb.2014.07.013.

    Abstract

    Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke–Lichtheim–Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.
  • Hagoort, P. (2009). Reflections on the neurobiology of syntax. In D. Bickerton, & E. Szathmáry (Eds.), Biological foundations and origin of syntax (pp. 279-296). Cambridge, MA: MIT Press.

    Abstract

    This contribution focuses on the neural infrastructure for parsing and syntactic encoding. From an anatomical point of view, it is argued that Broca's area is an ill-conceived notion. Functionally, Broca's area and adjacent cortex (together Broca's complex) are relevant for language, but not exclusively for this domain of cognition. Its role can be characterized as providing the necessary infrastructure for unification (syntactic and semantic). A general proposal, but with the required level of computational detail, is discussed to account for the distribution of labor between different components of the language network in the brain. Arguments are provided for the immediacy principle, which denies a privileged status for syntax in sentence processing. The temporal profile of event-related brain potential (ERP) effects is suggested to require predictive processing. Finally, since, next to speed, diversity is a hallmark of human languages, the language readiness of the brain might not depend on a universal, dedicated neural machinery for syntax, but rather on a shaping of the neural infrastructure of more general cognitive systems (e.g., memory, unification) in a direction that made it optimally suited for the purpose of communication through language.
  • Hagoort, P., Baggio, G., & Willems, R. M. (2009). Semantic unification. In M. S. Gazzaniga (Ed.), The cognitive neurosciences, 4th ed. (pp. 819-836). Cambridge, MA: MIT Press.

    Abstract

    Language and communication are about the exchange of meaning. A key feature of understanding and producing language is the construction of complex meaning from more elementary semantic building blocks. The functional characteristics of this semantic unification process are revealed by studies using event-related brain potentials. These studies have found that word meaning is assembled into compound meaning in not more than 500 ms. World knowledge, information about the speaker, co-occurring visual input and discourse all have an immediate impact on semantic unification, and trigger similar electrophysiological responses as sentence-internal semantic information. Neuroimaging studies show that a network of brain areas, including the left inferior frontal gyrus, the left superior/middle temporal cortex, the left inferior parietal cortex and, to a lesser extent, their right hemisphere homologues, are recruited to perform semantic unification.
  • Hagoort, P. (2020). Taal. In O. Van den Heuvel, Y. Van der Werf, B. Schmand, & B. Sabbe (Eds.), Leerboek neurowetenschappen voor de klinische psychiatrie (pp. 234-239). Amsterdam: Boom Uitgevers.
  • Hagoort, P. (2009). Taalontwikkeling: Meer dan woorden alleen. In M. Evenblij (Ed.), Brein in beeld: Beeldvorming bij heersenonderzoek (pp. 53-57). Den Haag: Stichting Bio-Wetenschappen en Maatschappij.
  • Hagoort, P. (1998). The shadows of lexical meaning in patients with semantic impairments. In B. Stemmer, & H. Whitaker (Eds.), Handbook of neurolinguistics (pp. 235-248). New York: Academic Press.
