News

All news at a glance

21 June 2018; Fabian Bross

The SRF is supporting a project on the psycholinguistic annotation of linguistic data at the Institut für Maschinelle Sprachverarbeitung, carried out by Sebastian Padó, Sabine Schulte im Walde, and Diego Frassinelli. The SRF is providing funds to employ a student assistant for this purpose.

7 June 2018; Fabian Bross

The SimPhon workshop "Psycholinguistic, cognitive and neurolinguistic modeling in phonetics and phonology" took place in Stuttgart from 4 to 6 July. The workshop was financially supported by the SRF.

8 May 2018; Fabian Bross

On 3 and 4 May 2018 we hosted Prof. Martijn Wieling of the Rijksuniversiteit Groningen in Stuttgart, where he led a workshop on Generalized Additive Models (GAMs).

7 February 2018; Fabian Bross

On Thursday, 15 February 2018, and Friday, 16 February 2018, we are delighted to welcome the sign language typologist Ulrike Zeshan (University of Central Lancashire)!

  • On Thursday, 15 February 2018, at 11:15, she will present the theoretical background of her research on cross-signing (room M 17.81; KII)
  • On Friday, 16 February 2018, at 14:00, she will report on her project SIGNSPACE (again in room M 17.81; KII)

23 January 2018; Fabian Bross

From 2 to 4 May 2018, the 8th International Symposium on Intercultural, Cognitive and Social Pragmatics will take place in Seville. The SRF is pleased to be able to provide financial support for a panel at this symposium. Researchers from the Universities of Stuttgart and Tübingen will take part in the panel "Cultural and linguistic knowledge in context":

  • Britta Stolterfoht (Universität Tübingen) & Katrin Ziegler (Universität Tübingen): Manner adverbials and not-at-issue content
  • Natalia Lemmert (Universität Stuttgart) & Sebastian Padó (Universität Stuttgart): Perception of formality levels by native speakers of English
  • Alassane Kiemtoré (Universität Ouagadougou & Universität Stuttgart): Pronominal anaphora in Jula: distribution and interpretation
  • Daniel Hole (Universität Stuttgart): Free indirect discourse has nothing to do with pronouns
  • Carolin Dudschig (Universität Tübingen), Claudia Maienborn (Universität Tübingen), Barbara Kaup (Universität Tübingen): Cognitive explorations into the difference of linguistic vs. cultural knowledge
  • Fabian Bross (Universität Stuttgart): Differential object marking in German Sign Language: animacy and definiteness as cross-linguistically stable cognitive concepts

15 January 2018; Fabian Bross

The SRF will support the workshop “Logophoricity and Perspectivization in Wackershofen” organized by Alassane Kiemtoré and Daniel Hole on October 3rd/4th, 2018. The workshop will be held in Wackershofen (Germany).

For more information visit the workshop's webpage.

11 December 2017; Fabian Bross

The SRF is pleased to have secured Prof. Dr. h.c. Hans Kamp as a visiting professor. From 1 November to 31 December 2017, the philosopher and linguist will be a guest at the Institut für Maschinelle Sprachverarbeitung.

5 December 2017; Fabian Bross

The SRF is funding the purchase of new hardware for the linguistics lab of the University of Stuttgart, intended for conducting perception and mouse-tracking studies.

29 November 2017; Fabian Bross

The SRF supports the experimental investigation of conceptually-semantically defined verb classes, with a particular focus on the arguments of psych verbs and their role in discourse. In this area, the SRF provides participant compensation funds for Anne Temme.

2 October 2017; Fabian Bross

The workshop will focus on the question of how conceptual knowledge (as context and world knowledge) interacts with the linguistic system. In particular, we want to address controversies that arise with regard to the structural encoding of semantic and conceptual information. Broadly speaking, one can distinguish between three approaches:

  1. Traditional approaches link conceptual resources neither to compositional semantics nor to syntactic configurations (see the traditional Montague grammar and approaches based on it).
  2. Conceptual-semantic approaches point to a direct influence of conceptual knowledge on the compositional meaning constitution, without directly reflecting it in the syntax (see two-level semantics as in Bierwisch 1982, Lang & Maienborn 2011, or type compositional logic as in Asher 2011).
  3. Structure-oriented approaches link conceptual-semantic knowledge to syntactic configurations (see Distributed Morphology and (Neo-)Constructional Approaches as, for instance, advanced by Borer 2005, Ramchand 2013).

Against this background, the workshop invites contributions which:

  • demonstrate the advantages of the respective approaches through case studies, or
  • contrast the explanatory potential of the different approaches.

More information: here

2 October 2017; Fabian Bross

The workshop “Transmodal perspectives on secondary meaning” was part of the 12th International Tbilisi Symposium on Language, Logic and Computation, 18–22 September 2017 at Lagodekhi, Georgia. More information can be found here.

22 June 2017; Fabian Bross

Title: Data visualisation with R: ggplot2
When: 29-30 June, 09:30-17:30


Due to high demand, registration for this workshop is unfortunately no longer possible.

22 June 2017; Fabian Bross

The SRF is pleased to welcome Prof. Ronnie Wilbur of Purdue University as a visiting professor once again. From 1 June to 15 July 2017, the sign language linguist will be a guest at the University of Stuttgart.

 

9 January 2017; Fabian Bross

Program: The Week of Signs and Gestures

If you want to join the conference, please register via email: fabian.bross at ling.uni-stuttgart.de

Venue: Keplerstraße 7, Senatssaal

Monday, 12 June 2017:

Gesture Session

09:00-09:50

Registration with coffee and tea

 

09:50-10:00

Opening remarks

 

10:00-11:00

Masha Esipova (New York University)

Co- and post-speech gestures: a prosody/syntax approach

11:00-12:00

Silva H. Ladewig (European University Viadrina)

Integrating gestures by replacing nouns and verbs of spoken language

12:00-14:00

Lunch break (Mezzogiorno)

 

14:00-15:00

Amir Anvari (U Paris)

On the interpretation of co-nominal pointing gestures

15:00-15:30

Coffee break

 

15:30-16:30

Cornelia Ebert (U Stuttgart)

The semantics of co-speech and post-speech gestures

16:30-17:30

Fabian Bross (U Stuttgart)

On the origin of the head shake

 

19:00

Dinner

 

Tuesday, 13 June 2017:

Signs and Gestures in Society

09:00-10:00

Nadja Schott (U Stuttgart)

Thinking, walking, talking: The motor‐cognitive connection

10:00-11:00

Uta Benner (HAW Landshut)

German Sign Language and its long way into society

11:00-11:30

Coffee break

 

Gestures and Signs

11:30-12:30

Susan Goldin-Meadow (U Chicago)

The resilience of language and gesture

12:30-14:30

Lunch break (dining hall)

 

14:30-15:30

Philippe Schlenker (Institut Jean-Nicod, CNRS; NYU)

Sign language grammar vs. gestural grammar

15:15-15:45

Coffee break

 

Panel Discussion: Gestures and Signs

15:45-16:45

Discussants: Philippe Schlenker, Ronnie Wilbur, Silva H. Ladewig, Cornelia Ebert, Fabian Bross

Moderation: Daniel Hole

 

Social Part

16:45

Reception

19:00

Dinner

 

Wednesday, 14 June 2017: Sign Language Linguistics

Sign Language Linguistics Session

08:00-09:00

Coffee and tea

 

09:00-10:00

Daniel Hole (U Stuttgart) & Fabian Bross (U Stuttgart)

Scope-taking strategies and the order of clausal categories in German Sign Language

10:00-11:00

Markus Steinbach (U Göttingen)

Show me the next discourse referent! Spatial distinctions and pointing in sign language discourse

11:00-13:00

Lunch (dining hall)

 

The following sessions will be at Breitscheidstraße 2a, room 2.01

13:00-14:00

Elisabeth Volk (U Göttingen)

The integration of palm-up into sign language grammar: Structuring discourse in DGS

14:00-15:00

Ronnie Wilbur (Purdue University)

How ASL can tell us what the right analysis of GEN and HAB is

15:00 -

Coffee and farewell

 

 

Show me the next discourse referent! Spatial distinctions and pointing in sign language discourse

Markus Steinbach, U Göttingen

Background: In sign languages, discourse referents are introduced and referred back to by means of referential loci, i.e. regions in the horizontal plane of the signing space. Referential loci are identified either by overt grammatical (manual or non-manual) localization strategies or by covert default strategies. The most obvious strategy to localize and identify discourse referents is the pointing sign INDEX. In this talk I argue that INDEX is a grammaticalized gesture that fulfills specific functions in the spatial reference tracking system of sign languages. I provide experimental evidence for the discourse semantic interpretation of referential loci and ipsi- and contralateral pointing. In addition, I discuss a discourse semantic implementation of these findings and the consequences for the impact of modality on anaphora resolution.

Experiments: In the first part of the talk, I present two event-related potential (ERP) studies on German Sign Language (DGS). These studies investigate the following two hypotheses: (i) signers assign distinct and contrastive R-loci to different discourse referents even in the absence of any overt localization strategy, and (ii) INDEX is subject to discourse semantic (default) constraints, i.e. an ipsilateral pointing refers back to the discourse referent mentioned first in the previous sentence. The results of the experiments are in line with both hypotheses. The first experiment (i) revealed that signers of DGS use default strategies for assigning discourse referents to referential loci if they are not linked to referential loci overtly. Additionally, the data show that in the case of two discourse referents, they are assigned to two different contrastive areas in the signing space. While the first discourse referent is typically established in the ipsilateral (right) area, the second discourse referent is linked to the contralateral (left) area of the signing space. In the second experiment (ii), we observe increased activity in the contralateral condition, i.e. when INDEX points to the left side of the signing space. Given the result of the first experiment (i) that in DGS discourse referents are also covertly associated with areas in space, the effect can be interpreted as an effect of first mention, which suggests increased processing costs for the contralateral INDEX sign. It appears that participants expect the second sentence to continue with the first referent. In cases where the second sentence continues with the second referent, this expectation is violated, causing the observed effect.

Analysis: In the second part of the talk, I discuss a theoretical implementation of the interpretation of referential loci and INDEX at the interface between syntax and discourse semantics. The core idea of this approach is that spoken and sign languages use similar strategies to distinguish discourse referents in the discourse semantic representation. However, while sign languages make full use of the (recursive) expressive power of the three-dimensional signing space and corresponding pointing devices such as INDEX, spoken languages are limited to a fixed set of morphological markers such as gender. Following the DRT analysis developed in Steinbach & Onea (2015), I assume that signers systematically exploit the signing space to distinguish discourse referents. That is, in the case of two discourse referents, the signing space is divided into two contrastive areas. I further assume that the first discourse referent (i.e. the referent mentioned first in the examples used in our experimental study) is assigned by default to the ipsilateral area of the signing space (assumed to be the right side for right-handed signers). By contrast, the second discourse referent (i.e. the second-mentioned referent in our examples) is assigned to the contralateral area of the signing space. INDEX is the most obvious morphosyntactic device to identify (i.e. to point to) spatial referents in discourse.

 

Co- and post-speech gestures: a prosody/syntax approach

Masha Esipova, New York University

Differences in structure and meaning between co- and post-speech gestures are an open question. For example, Schlenker (to appear) observes that, unlike co-speech gestures, post-speech gestures seem to require a discourse referent as an antecedent, similarly to non-restrictive relative clauses and ordinary anaphora. Schlenker (to appear) and Ebert (2017) propose to account for the differences between co- and post-speech gestures by positing different semantics for the two. In my talk I will explore an analysis under which gestures have a uniform syntax and semantics, and the co- vs. post-speech distinction only arises at PF during linearization. The restrictions on the anaphoric potential of post-speech gestures then emerge as a result of syntax-sensitive constraints on prosodic grouping.

 

Integrating gestures by replacing nouns and verbs of spoken language

Silva H. Ladewig, European University Viadrina

This paper takes a usage-based approach to the integration of gestures into speech. Based on linguistic and cognitive-semiotic analyses of multimodal utterances and on naturalistic perception studies, it argues that gestures integrate with speech on a syntactic and a semantic level, merging into multimodal syntactic constructions. Looking specifically at discontinued utterances and their perception, it is argued that gestures vitally participate in the dynamics of meaning construal and may take over the function of verbs and nouns in their respective syntactic slots, contributing to the semantics of the sentence under construction (Ladewig 2014).

In the utterances under scrutiny, gestures are inserted into syntactic gaps in utterance-final position, replace the spoken constituents of nouns and verbs, and complete the utterance. According to perception analyses, these composite utterances do not cause comprehension problems; rather, recipients treat them as meaningful for the ongoing discourse. Although this phenomenon offers revealing insights into the integration of speech and gestures, and although many researchers are sensitive to it (e.g., Slama-Cazacu 1976; McNeill 2005; Wilcox 2004), it has not been studied in depth before. This study fills that research gap, aiming to reveal the potential of gestures to instantiate the grammatical categories of nouns and verbs, and thus their conceptual schemas, by tracing the interactive processes between speech, in particular the grammar of spoken language, and gesture.

With this phenomenon, the study takes a unified view of the integration of gestures with speech, following the plea recently formulated by sign language linguists to elaborate an overarching framework for studying and understanding spoken language, signed language, and gestures. The integration of different modes of expression to form "composite utterances" (Enfield 2009) serves as a sample domain, showing that language and gesture "are manifestations of the same underlying conceptual system that is the basis for the human expressive ability. Thus, we propose that the general principles of cognitive grammar can be applied to the study of gesture" (Wilcox & Xavier 2013: 95).

 

References:

Enfield, Nick J. (2009). The anatomy of meaning: Speech, gesture, and composite utterances. Cambridge, UK; New York: Cambridge University Press.

Ladewig, Silva H. (2014). Creating multimodal utterances: The linear integration of gestures into speech. In Cornelia Müller, Alan Cienki, Ellen Fricke, Silva H. Ladewig, David McNeill & Jana Bressem (Eds.), Body – Language – Communication: An International Handbook on Multimodality in Human Interaction (Handbooks of Linguistics and Communication Science 38.2, pp. 1662-1677). Berlin/Boston: De Gruyter Mouton.

McNeill, David (2005). Gesture and Thought. Chicago: University of Chicago Press.

Slama-Cazacu, Tatiana (1976). Nonverbal components in message sequence: "Mixed syntax". In William Charles McCormack & Stephen A. Wurm (Eds.), Language and man: Anthropological issues (pp. 217-227). The Hague: Mouton.

Wilcox, Sherman (2004). Cognitive iconicity: Conceptual spaces, meaning, and gesture in signed languages. Cognitive Linguistics 15(2), 119-147.

Wilcox, Sherman, & André Nogueira Xavier (2013). A framework for unifying spoken language, signed language, and gesture. Todas as Letras-Revista de Língua e Literatura, 15(1), 88-110.

 

On the interpretation of co-nominal pointing gestures

Amir Anvari, U Paris

This talk, a report on an ongoing research project on the formal semantics/pragmatics of pointing gestures that are temporally aligned with quantified nominals, has three parts. Part one is mainly descriptive, with the goal of investigating the generalisation that, if pointing co-occurs with an expression that is "responsible" for the smallest clause dominating that expression entailing an existential statement, then pointing provides a witness to that existential statement. In part two I will present an analysis according to which co-nominal pointing triggers an assertion-based presupposition that relies crucially on the anaphoric potential of the relevant determiner. The analysis is fairly successful; however, it has certain empirical shortcomings that might be symptomatic of a more general problem. Therefore, in part three I will discuss an alternative analysis built on the observation that pointing reallocates the attention of the interlocutors, coupled with the intuition that, as far as quantification is concerned, "attention" is tied to quantifier domain restriction. Although the second theory has impressive achievements, I will argue that the first theory is ultimately superior as things stand, with the hope that some insights of the second theory might eventually be incorporated into the first. Finally, time permitting, I will consider the extent to which pointing is interpreted differently from iconic gestures by discussing some constructions involving the co-occurrence of iconic gestures with quantified nominals.

 

Sign language grammar vs. gestural grammar

Philippe Schlenker, Institut Jean-Nicod, CNRS; NYU

I argue that several non-trivial properties of sign language grammar can to some extent be replicated with 'hybrid' spoken language utterances that contain spoken words and 'pro-speech gestures' (gestures that fully replace spoken words, as opposed to 'co-speech gestures', which accompany words). These properties include: multiple loci to realize anaphora, including dynamic anaphora; agreement verbs that target loci corresponding to their arguments; the use of high loci to talk about tall individuals; the ability of ellipsis to disregard select properties of loci; the existence of Locative Shift, an operation by which a spatial locus can be co-opted to refer to a person found at the relevant location, as well as some constraints on Locative Shift; and the existence of repetition-based, iconic plurals, both with punctuated and with unpunctuated repetitions. While the point can be made theory-neutrally, it has theoretical consequences, as it suggests that several non-trivial properties of sign language can be known without prior exposure to sign language.

  

German Sign Language and its long way into society

Uta Benner, HAW Landshut

The status of sign languages has changed significantly over the last decades, from being viewed as "a kind of physical communication tool" (Heßmann et al. 2012, 5) or "a kind of pantomime" (Baker 2016, 1) to recognition as "natural human languages" (Emmorey 2002, 1). Linguistic research paved the way for political recognition. In Germany, DGS (German Sign Language) was officially recognized in 2002 (Behindertengleichstellungsgesetz, BGG § 6). In the years following this political recognition, knowledge about sign languages spread and the right to use sign language in certain circumstances was legally ensured. Nevertheless, deaf people still face disadvantages, and equal participation, as demanded in the CRPD (Convention on the Rights of Persons with Disabilities), is still far away. This talk will give an overview of the aforementioned changes regarding the perception of sign language, focusing on Germany. It will also highlight new challenges users of sign languages might face today and the role of sign language interpreting.

References:

Baker, A. (2016). Sign languages as natural languages. In A. Baker, B. van den Bogaerde, R. Pfau, & T. Schermer (Eds.), The linguistics of sign languages: An introduction (pp. 1-24). Amsterdam; Philadelphia: John Benjamins Publishing Company.

Heßmann, J., Hansen, M., & Eichmann, H. (2012). Einleitung: Gebärdensprachen als soziale Praxis Gehörloser und Gegenstand der Wissenschaft. In H. Eichmann, M. Hansen, & J. Heßmann (Eds.), Handbuch Deutsche Gebärdensprache: Sprachwissenschaftliche und anwendungsbezogene Perspektiven (Vol. 50).

Emmorey, K. (2002). Language, cognition, and the brain: insights from sign language research. Mahwah, N.J: Lawrence Erlbaum Associates.

 

The integration of palm-up into sign language grammar: Structuring discourse in DGS

Elisabeth Volk, U Göttingen

Face-to-face interaction plays a pivotal role for languages to emerge and evolve freely. The analysis of spontaneous communication therefore offers valuable insights into the development of grammatical structures. The gesture palm-up fulfills various functions in spoken (cf. Kendon 2004) and sign language discourse (cf. Engberg-Pedersen 2002). It has been argued that the use of palm-up as a co-speech gesture originates in practical everyday actions such as giving, offering, and receiving objects, which are functionally extended to communicative actions (cf. Streeck 1994). Serving a communicative function, palm-up can be used to handle abstract entities and mark speech acts, while further meaning changes are possible due to varying hand movement patterns (cf. Müller 2004). Based on the observation that signers of different age groups use palm-up for different discourse functions in New Zealand Sign Language (cf. McKee and Wallingford 2011) and the Sign Language of the Netherlands (cf. van Loon 2012), van Loon, Pfau, and Steinbach (2014) argue for a grammaticalization path of palm-up from gesture to functional linguistic element in sign languages. Accordingly, palm-up enters the grammatical system of sign languages as a turn-taking marker, which may further develop more grammatical meanings, paving the way for discourse markers, conjunctions, and epistemic markers, among others. I will present empirical results of a thorough investigation of the use of palm-up in free interaction, drawing on video data collected from 20 Deaf signers of German Sign Language (DGS) and 10 hearing German speakers. I argue that the use of palm-up starts out independently of language modality; thus, the core meaning of palm-up may be traced back to practical everyday actions and is reanalyzed as a communicative signal to indicate turn-taking or to express stance, e.g. ignorance, in both the oral-auditory and the gestural-visual modality. Further meaning changes, however, occur modality-dependently, as palm-up can be used simultaneously with speech but is sequentially integrated into a string of signs in sign languages. In line with this, I will provide an alternative grammaticalization account of palm-up in DGS supported by the empirical data, which indicate that subsequent levels of grammaticalization are still visible across different generations of DGS signers.

References: 

Engberg-Pedersen, E. (2002). Gestures in signing: The presentation gesture in Danish Sign Language. In: R. Schulmeister & H. Reinitzer (eds.), Progress in sign language research: In honor of Siegmund Prillwitz. Hamburg: Signum, 143-162.

Kendon, A. (2004). Gesture. Visible action as utterance. Cambridge: Cambridge Univ. Press.

Loon, E. van (2012). What’s in the palm of your hands? Discourse functions of palm-up in Sign Language of the Netherlands. MA thesis, Univ. of Amsterdam.

Loon, E. van, R. Pfau & M. Steinbach (2014). The grammaticalization of gestures in sign languages. In: C. Müller et al. (eds.), Body-language-communication. Berlin: de Gruyter, 2131-2147.

McKee, R. & S. L. Wallingford (2011). ‘So, well, whatever’. Discourse functions of palm-up in New Zealand Sign Language. Sign Language & Linguistics 14(2), 213-247.

Müller, C. (2004). Forms and uses of the Palm Up Open Hand. In: C. Müller & R. Posner (eds.), The semantics and pragmatics of everyday gestures. Berlin: Weidler, 233-256.

Streeck, J. (1994). ‘Speech-handling’. The metaphorical representation of speech in gestures. A cross-cultural study. Manuscript, University of Texas at Austin.

  

The resilience of sign and gesture

Susan Goldin-Meadow, U Chicago

Imagine a child who has never seen or heard any language at all. Would such a child be able to invent a language on her own? Despite what one might guess, the answer to this question is "yes". I have studied children who are congenitally deaf and cannot learn the spoken language that surrounds them. In addition, these children have not yet been exposed to sign language, either by their hearing parents or their oral schools. Nevertheless, the children use their hands to communicate (they gesture), and those gestures take on many of the forms and functions of language. The properties of language that we find in the deaf children's gestures are just those properties that do not need to be handed down from generation to generation, but rather can be reinvented by a child de novo. They are the resilient properties of language, properties that all children, deaf or hearing, come to language-learning ready to develop.

In contrast to these deaf children, who are inventing a language with their hands, hearing children are learning language from a linguistic model. But they too produce gestures. Indeed, all speakers gesture when they talk. These gestures are associated with learning: they can index moments of cognitive instability, and they reflect thoughts not yet found in speech. Indeed, these gestures can do more than just reflect learning; they can be involved in the learning process itself. Encouraging children to gesture not only brings out ideas that the children were not able to express prior to gesturing, but can also teach children new ideas not found anywhere in their repertoire, either spoken or gestured.

Gesture is versatile in form and function.  Under certain circumstances, gesture can substitute for speech, and when it does, it embodies the resilient properties of language. Under other circumstances, gesture can form a fully integrated system with speech.  When it does, it both predicts and promotes learning.

 

On the origin of the head shake

Fabian Bross, U Stuttgart

This talk discusses several arguments in favor of the hypothesis that the head shake as a gesture of negation has its origins in early childhood experiences. It elaborates on Charles Darwin's observation that children inevitably shake their heads to stop food intake when sated, thereby establishing a connection between rejection and the head gesture. It is argued that later in life the semantics of the head shake extends from rejection to negation, just as can be observed in the development of spoken language negation. The question of how head gestures are used in cultures where the head shake is not a sign of negation, or where other negative head gestures are in use, will also be discussed.

 

The semantics of co-speech and post-speech gestures

Cornelia Ebert, U Stuttgart

Recently, two different formal approaches have been put forth to explain the semantic interplay of co-speech gestures, i.e. gestures that accompany speech. Both argue that by default, gesture meaning enters into composition as non-at-issue material, either supposition-like (Ebert & Ebert 2014) or co-suppositional, i.e. as a special kind of presupposition (Schlenker to appear). I will compare the predictions of these two approaches, discuss possibilities to test for them experimentally, and present preliminary results of a pilot study.

 

Furthermore, I will compare the semantic behaviour of co-speech gestures with that of post-speech gestures (gestures that come after speech) and pro-speech gestures (gestures that replace speech), discuss Schlenker's (2016) account of post-speech gestures, and suggest an alternative approach.

 

Scope-taking strategies and the order of clausal categories in German Sign Language

Daniel Hole, U Stuttgart
Fabian Bross, U Stuttgart

The scope order of clausal categories has been claimed to be universal. In this talk we adopt a universalist cartographic approach to clausal syntax. By discussing the categories of speech acts, evaluation, epistemic modality, scalarity, volition, and deontic as well as other kinds of modality, we illustrate a striking regularity in strategies of scope-taking in German Sign Language (DGS): the wider/higher the scope of a clausal operator, the more likely it is to be expressed with a high body part by way of layering, descending from the eyebrows to the lower face, tentatively to the shoulders, and finally switching to manual expressions. For intermediate operators a left-to-right concatenation strategy is employed, and low categories are expressed by way of a manual right-to-left concatenation strategy. Hence, we propose a highly regular natural mapping of the scope order of clausal categories onto the body. This sort of mapping can also be observed in other sign languages and may turn out to be universal.

 

Thinking, walking, talking: The motor‐cognitive connection

Nadja Schott, U Stuttgart

Recent studies suggest that motor and cognitive development are more closely related than previously assumed, depending on movement experiences, skills, age, and gender. Gross-motor performance in particular, such as functional goal-oriented locomotion, is not a merely automatic process but requires higher-level cognitive input, highlighting the relationship between cognitive function and fundamental motor skills across the lifespan. Motor and cognitive development might even share similar trajectories and characteristics across the lifespan. Similarly, Roebers et al. (2014) reported that fine motor skills, non-verbal intelligence, and executive functioning are significantly interrelated. Additional findings show a strong relationship between age-related changes in motor and cognitive performance and motor skill acquisition (Favazza & Siperstein, 2016). Motor and cognitive functions appear to be even more strongly correlated in children with motor and/or cognitive impairment (e.g. Developmental Coordination Disorder, Down Syndrome, Autism) compared to typically developing (TD) children (Schott & Holfelder, 2015; Schott, El-Rajab, & Klotzbier, 2016).

To investigate the nature of motor and cognitive development, researchers have mostly studied the components independently. On a behavioural level, an elegant approach to assessing the interdependence of motor and cognitive function comes from cognitive-motor interference (CMI) research using dual-task (DT) conditions (Schott et al., 2016), while recent developments in neuroimaging methods support the notion that the brain is embodied, meaning that bodily experience underlies thinking, talking, feeling, and action (Wilson, 2002). To capture the multifaceted aspects of motor and cognitive change across the lifespan, several frameworks have been proposed to investigate interactions between the structure and function of the brain, cognition, motor learning, physical activity, and lifestyle factors (Prado & Dewey, 2014; Ren, Wu, Chan, & Yan, 2013; Reuter-Lorenz & Park, 2014). However, these frameworks focus either on cognitive aging, on the effect of cognitive aging on skill acquisition, or more generally on the impact of nutrition on brain development in early life.

In this presentation I first review cross-sectional studies using the dual-task approach in children and adolescents, particularly in the areas of gait and verbal responses. Second, I will present an integrative framework for the interaction of cognition, motor performance, and life-course factors.

9 January 2017; Fabian Bross

One-day workshop with round-table discussion

9:15 Yael Greenberg (Bar Ilan University): Gradability-scalarity interfaces in the semantics and association behavior of even

10:15 Malte Zimmermann (Universität Potsdam): Scale-sensitivity of scalar particles - uniform or heterogeneous?
 
11:15 Coffee break
 
11:45 Daniel Hole (Universität Stuttgart): Arguments for a distributed syntax of evaluation, scalarity and basic focus quantification
 
12:45 Lunch break
 
14:15 Thuan Tran (Universität Potsdam): Information Structure-related displacement and temporal interpretation in Vietnamese
 
15:15 Mira Grubic (Universität Potsdam): Additive and additive-scalar particles in Ngamo (West Chadic)
 
16:15 General discussion

13 December 2016; Fabian Bross

R workshop from 22 to 24 February 2017 with Bodo Winter (University of California).

5 July 2016; Fabian Bross

Workshop TELIC 2017: Non-culminating, Irresultative and Atelic Readings of Telic Predicates. Combining Theoretical and Experimental Perspectives. More detailed information can be found here.

5 July 2016; Fabian Bross

Statistics workshop from 6 to 8 July 2016 with Bodo Winter (University of California).

13 May 2016; Fabian Bross

From 13 to 18 June, the "Super Cognition Week" will take place as the kick-off event of the SRF!

Program
Tuesday, 14 June 2016, V 5.01 (IMS, Pfaffenwaldring 5b)

Wednesday, 15 June 2016, 17.01 (Tiefenhörsaal)

  • 15:45: Round-table discussion with Jennifer Mankin, Ronnie Wilbur, Julia Krebs, and Daniel Hole
  • 17:00: Welcome
  • 17:10: Talk by Sebastian Löbner (Heinrich-Heine-Universität Düsseldorf): Wovon wir reden, wenn wir von Gefühlen reden oder: Die Innenwelt der Außenwelt der Innenwelt
  • 17:45: Jennifer Mankin (University of Sussex): The Psycholinguistics of Synaesthesia
  • 18:30: Reception and poster presentations

Thursday, 16 June 2016, room 17.24

Friday, 17 June 2016

  • Excursion to Dornach (Switzerland) on cross-modal cognition

Saturday, 18 June 2016

  • 10:00: Round-table discussion "Sign language syntax" with Markus Steinbach, Ronnie Wilbur, Daniel Hole, and Fabian Bross

20 April 2016;

27 April 2016, room 17.17

Talk by Hubertus Kohle (LMU München): Artigo: Ein Crowdsourcing-Annotationsspiel in der Kunstgeschichte zur Datengewinnung und Datenauswertung
