Yoonjeong Lee
[junɕʌŋ li]

My work considers questions such as: How do the abstract component gestures of a phonological segment vary in their dynamical control as a function of phrasal structure? How do the structured surface variations guide listener perception and play a potential role in sound change? How are acoustic and articulatory prosodic signatures shaped by the language-specific phonological system? How is accommodation between conversing individuals related to prosodically salient positions? I pursue this work in the domain of F0 and articulation through empirical experimentation using laboratory techniques including electromagnetic articulography (EMA), real-time magnetic resonance imaging (rtMRI), and response mouse-tracking methods. I also place importance on data-driven computational modeling to further a unified, coherent account of suprasegmental and word-level speech production.
DISSERTATION: The Prosodic Substrate of Consonant and Tone Dynamics 
In my dissertation [.pdf], I take a dynamical approach to relating consonant and tone by examining segmental and tonal sensitivity to phrasal prosodic structure in a language with specified (i.e., relatively rigid) intonational patterns. To tackle the intricacies of segment/prosody interaction, I examine intonation patterns in the contemporary Seoul dialect of Korean. Given the segmental tone (i.e., F0) contrast for LAX versus TENSE stops, the interaction of these stops with the language's strict Accentual Phrase (AP) tonal pattern provides a test bed for examining the co-expression of segmental and prosodic tonal specifications. In addition to clarifying the phonetic description needed for the new pronunciation norms of younger generations of speakers of this language, this work establishes how the local phonetic organization of a system of contrast is modulated as a function of phrasal position by uncovering the tone gestures deployed for segmental "tenseness" and how these interact with the language's AP intonational patterning. Taken together, these results uncover an intricate tone pattern in which local segmental information is co-expressed with phrasal information, and show that dynamical modeling can account both for categorical and gradient aspects of tone realization and for how a prosodically asymmetric pattern emerges from a single underlying system. (Associated Work: Lee, Goldstein & Byrd, manuscript in preparation and talk at the LSA 2019; Oh & Lee, 2018)
Prosodic Structuring in Spoken Language
In my work on phonetic signatures of prosodic boundary and accentual prominence in Korean and English (Lee, 2011, 2013; Cho, Lee & Kim, 2011, 2014), I show that prosodically structured variability can be accounted for by the coordination between prosodic gestures and vocal tract gestures.

In my recent work with Louis Goldstein and Elsi Kaiser (Lee, Goldstein & Kaiser, under invited revision), we use a mouse-tracking method to examine listeners' segmentation of internal open juncture sequences, looking specifically at their sensitivity to sub-phonemic information. Our findings suggest that segmentation and lexical access are highly attuned to bottom-up phonetic information and that this kind of information is stored as abstract knowledge in listeners' lexicons.

In my work with Louis Goldstein and Shrikanth Narayanan (Lee, Goldstein & Narayanan, 2015), we use rtMRI to shed light on the articulatory composition of the Korean liquid across different prosodic contexts. Our results suggest that the positional allophony between lateral and flap is attributable not only to overall gestural reduction, but also to a categorical distinction in gestural composition.

In NIH-funded work on conversational interaction published in PLoS One (Lee, Y., Gordon Danner, Parrell, Lee, S., Goldstein & Byrd, 2018), we employed a dual-EMA setup to investigate how speakers' prosodic, acoustic, and articulatory speech behavior adapts to their dyad partner's speech over the course of an interaction. Our results contribute to an understanding of how the realization of linguistic phrasal structure is coordinated across interacting speakers. They also provide novel evidence that speakers accommodate to one another at the level of cognitively specified motor control of articulatory gestures, and that this accommodation is sensitive to prosodic structure.
Voice Identity from Variability 
In my current postdoctoral project, I am collaborating with Jody Kreiman, Patricia Keating, and Abeer Alwan, pursuing an interdisciplinary approach to the study of voice quality. Jody Kreiman and her team have identified a suite of measures that constitute a psychoacoustic model of voice quality, paving the way for a long overdue refinement in characterizing voice variation both within and between speakers. The voice quality field has largely focused on variability between speakers, but little is known about the nature and importance of everyday variability in voice quality within a speaker. Since joining the Kreiman laboratory, I have been tackling the challenge of identifying which of the model's indices account for perceptually relevant acoustic variance within speakers. Based on studies of faces and cognitive models of speaker recognition, Kreiman and I hypothesize that a few sets of these measures are shared across speakers, but that much of what characterizes individual speakers is idiosyncratic. I employ a dimension reduction technique (i.e., principal component analysis) to test this hypothesis against a large voice data set of within- and between-speaker variability (100 native speakers of English) and find novel evidence supporting our hypothesis. Our current findings, and subsequent studies testing the assumption that learning a new voice entails learning how that voice varies, can serve as a basis for research on voice production and recognition, and for clinical practice and treatment of deviations in voice quality. (Associated Work: Lee & Kreiman, accepted and talk to appear at the ICPhS 2019)
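The within-speaker PCA approach described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the data here are randomly generated stand-ins for per-token acoustic voice-quality measures, and the dimensions and counts are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 100 speakers x 50 tokens each, with 12 acoustic
# voice-quality measures per token (placeholders for model indices
# such as F0, harmonic amplitudes, spectral noise measures).
n_speakers, n_tokens, n_measures = 100, 50, 12
X = rng.normal(size=(n_speakers * n_tokens, n_measures))

def pca(data, n_components):
    """PCA via SVD on mean-centered data.

    Returns (components, explained_variance_ratio, scores)."""
    centered = data - data.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    var = S ** 2 / (data.shape[0] - 1)      # variance along each component
    ratio = var / var.sum()                 # proportion of total variance
    scores = centered @ Vt.T                # token coordinates in PC space
    return Vt[:n_components], ratio[:n_components], scores[:, :n_components]

# Within-speaker analysis: run PCA separately for each speaker, then
# average the explained-variance profiles across speakers to see how
# many components typically capture most within-speaker variability.
tokens = X.reshape(n_speakers, n_tokens, n_measures)
ratios = np.array([pca(tokens[s], n_measures)[1] for s in range(n_speakers)])
cum = np.cumsum(ratios.mean(axis=0))
n_needed = int(np.searchsorted(cum, 0.8) + 1)
print("components needed for 80% of within-speaker variance:", n_needed)
```

Comparing which measures load on the leading components across speakers (versus which loadings differ speaker by speaker) is one way to operationalize the shared-versus-idiosyncratic hypothesis sketched in the paragraph above.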