During my PhD, I worked on unifying logical and statistical perspectives on meaning in natural language using probabilistic models of pragmatic reasoning.

The general idea is to model the interpretation of a linguistic expression (e.g. a sentence) as a process of Bayesian inference, to ask: given that this sentence is true (or, more to the point, given that someone said it), what must the world be like? This turns out to be a nice viewpoint for integrating a traditional logical perspective on meaning with an information-theoretic one, and for handling semantic and pragmatic meaning in a single framework. I say a little more about this in the introduction to my dissertation.
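
This inference can be sketched in a few lines, in the style of iterated-response (Rational Speech Acts) models. The worlds, utterances, and truth conditions below are toy assumptions of mine for illustration, not an implementation from any of the papers:

```python
# Toy sketch: interpretation as Bayesian inference over worlds.
# Worlds, utterances, and truth conditions are illustrative assumptions.

WORLDS = ["none", "some", "all"]       # e.g. how many of the cookies were eaten
UTTERANCES = ["some", "all"]           # what the speaker can say
PRIOR = {w: 1 / 3 for w in WORLDS}     # uniform prior over worlds

def meaning(utterance, world):
    # Classical logical semantics: "some" is literally true of "all".
    if utterance == "some":
        return world in ("some", "all")
    return world == "all"

def normalize(dist):
    total = sum(dist.values())
    return {k: (v / total if total else 0.0) for k, v in dist.items()}

def literal_listener(utterance):
    # Condition the prior on the utterance being true.
    return normalize({w: PRIOR[w] * meaning(utterance, w) for w in WORLDS})

def speaker(world, alpha=4.0):
    # Prefer utterances under which a literal listener recovers the world.
    return normalize({u: literal_listener(u)[world] ** alpha
                      for u in UTTERANCES})

def pragmatic_listener(utterance):
    # Infer the world from the fact that the speaker chose this utterance
    # over the alternatives ("given that someone said it").
    return normalize({w: PRIOR[w] * speaker(w)[utterance] for w in WORLDS})

# "Some" is pragmatically strengthened toward "some but not all": a speaker
# in the "all" world would have said "all".
print(pragmatic_listener("some"))
```

The logical semantics lives in `meaning`, and the information-theoretic side in the speaker's preference for informative utterances; the two integrate in the pragmatic listener's posterior.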

Below are some projects from this line of work, several of which I never quite finished.

Direction 1 of PhD research: scaling the models

  • Metaphor and Linguistic Creativity
    This paper explores the technical and conceptual consequences of a model of meaning in which the listener’s prior over worlds is defined on a vector space, which allows integration with word-embedding semantics.
    (unpublished draft - Cohn-Gordon and Bergen; the experiment section should be disregarded, as the baseline model was implemented incorrectly).

  • Lost in Machine Translation: A Method to Reduce Meaning Loss
    This and some related papers look at models of meaning where the utterance space is recursively generated. This allows for integration with a neural semantics, in particular a conditional language model.
    (NAACL 2019 - Cohn-Gordon and Goodman).

Direction 2 of PhD research: enriching the models

  • Verbal Irony, Pretense, and the Common Ground
    This paper looks at models where the listener is uncertain not only about the state of the world, but also about the state of the common ground. In a nutshell: if I tell you something, you learn not only that thing, but also that I believed you didn’t already know it (an inference about my belief about your prior). A speaker can leverage this second channel to communicate, which yields a very satisfying account of a distinctive feature of natural language: sarcasm.
    (unpublished draft - Cohn-Gordon and Bergen).

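
    The first step of that inference chain can be sketched very crudely. The weather scenario and the speaker’s norm of informativity below are my illustrative assumptions, not the model from the draft:

```python
# Toy sketch: hearing an assertion teaches the listener two things -- its
# content, and that the speaker believed it wasn't already known.
# The scenario and the informativity norm are illustrative assumptions.

WORLDS = ["rain", "sun"]
# The speaker's belief about the listener's prior knowledge of the weather.
SPEAKER_BELIEFS = ["listener_knows", "listener_unsure"]

PRIOR = {(w, b): 0.25 for w in WORLDS for b in SPEAKER_BELIEFS}

def asserts_rain(world, belief):
    # The speaker says "it's raining" only if it's true and, by a norm of
    # informativity, they believe the listener doesn't already know it.
    return 1.0 if (world, belief) == ("rain", "listener_unsure") else 0.0

def listener_posterior():
    post = {k: p * asserts_rain(*k) for k, p in PRIOR.items()}
    total = sum(post.values())
    return {k: v / total for k, v in post.items()}

# All mass lands on ("rain", "listener_unsure"): the listener learns the
# weather AND something about the speaker's model of the listener's prior.
print(listener_posterior())
```

    An ironic speaker exploits exactly this second coordinate: by saying something blatantly settled by the common ground, they signal that the literal content is not the point.
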
  • The Pragmatics of Multiparty Communication
    This project looked at the novel dynamics that emerge when there are multiple listeners, since any one listener can explain away a speaker’s utterance on the assumption that it was directed at a different listener. The interesting idea lurking in the background is that the joint common ground is not the union of the pairwise common grounds; at some point I should sit down and write out clearly what this means. The project also gives a nice model of the semantics of proper names as presupposed variable assignments, which shows how parts of a first-order logical semantics can be lifted into a Bayesian model.
    (unpublished abstract - Cohn-Gordon, Levy, and Bergen).

  • An Incremental Iterated Response Model of Pragmatics
    This paper looks at what happens if the listener starts reasoning pragmatically before an utterance is complete. It’s pretty simplistic.
    (SCiL 2019, ACL Proceedings - Cohn-Gordon, Goodman, and Potts).
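
    The mechanism can be sketched by letting the listener marginalize over completions of a partial utterance. The reference game below is a toy assumption of mine, not the model from the paper:

```python
# Toy sketch: a listener who updates beliefs word by word, by marginalizing
# the speaker's choices over all completions of the utterance so far.
# Objects, words, and utterances are illustrative assumptions.

OBJECTS = ["blue_square", "blue_circle", "green_square"]
UTTERANCES = [("blue", "square"), ("blue", "circle"), ("green", "square")]
PRIOR = {o: 1 / 3 for o in OBJECTS}

def true_of(utterance, obj):
    # Each word must describe the object ("blue" fits "blue_square", etc.).
    return all(word in obj for word in utterance)

def normalize(dist):
    total = sum(dist.values())
    return {k: (v / total if total else 0.0) for k, v in dist.items()}

def literal_listener(utterance):
    return normalize({o: PRIOR[o] * true_of(utterance, o) for o in OBJECTS})

def speaker(obj, alpha=4.0):
    # Prefer utterances under which a literal listener recovers the object.
    return normalize({u: literal_listener(u)[obj] ** alpha
                      for u in UTTERANCES})

def incremental_listener(prefix):
    # Marginalize the speaker over all full utterances extending the prefix.
    return normalize({
        o: PRIOR[o] * sum(speaker(o)[u] for u in UTTERANCES
                          if u[:len(prefix)] == prefix)
        for o in OBJECTS})

# After hearing just "blue", the listener has already ruled out green_square,
# before the utterance is complete.
print(incremental_listener(("blue",)))
```
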