Language, Logic, and Machines (LLaMa) Group

Humans use language to transmit information and to support reasoning. If you ask me how many of the tacos are left and I say "some", you will defeasibly infer that not all of them are left. This inference depends both on your beliefs about my intentions and on facts about the meanings of words like "some" and "all", as well as their relations to each other in the language. Even this simple example shows that we need a theory of natural language semantics that explains both how people compute the meanings of expressions from the meanings of their parts and how those meanings become enriched in context in virtue of how interlocutors reason about each other's beliefs, goals, and intentions.

Research in the LLaMa Group aims to address this question by combining tools and insights from formal, logic-based approaches to meaning with tools and insights from computational linguistics and computational cognitive modeling. Ideally, the result is a family of theories that combine the richly structured representations we believe underlie natural language meaning with a plausible model of how humans learn and perform computations over these representations, in ways that accord with behavioral data.
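To give a flavor of the kind of computational cognitive model involved, here is a minimal sketch of the Rational Speech Acts (RSA) framework applied to the "some"/"not all" taco example above. The specific world states, utterances, and rationality parameter `alpha` are illustrative assumptions for this sketch, not a model the group has published.

```python
# A minimal Rational Speech Acts (RSA) sketch of the "some" -> "not all"
# scalar implicature. Worlds, utterances, and alpha are illustrative choices.
import numpy as np

worlds = [0, 1, 2, 3]                 # how many of the 3 tacos are left
utterances = ["none", "some", "all"]

def true_of(u, w):
    """Literal truth conditions for each utterance."""
    return {"none": w == 0, "some": w >= 1, "all": w == 3}[u]

# Literal listener L0: uniform prior over worlds, conditioned on literal truth.
L0 = np.array([[float(true_of(u, w)) for w in worlds] for u in utterances])
L0 /= L0.sum(axis=1, keepdims=True)

# Pragmatic speaker S1: prefers utterances that best pick out the true world
# for a literal listener; alpha controls how rational/optimizing the speaker is.
alpha = 4.0
S1 = L0 ** alpha                      # equivalent to exp(alpha * log L0)
S1 /= S1.sum(axis=0, keepdims=True)

# Pragmatic listener L1: Bayesian inference over the speaker's choice,
# again with a uniform prior over worlds.
L1 = S1 / S1.sum(axis=1, keepdims=True)

some = utterances.index("some")
# After pragmatic reasoning, hearing "some" makes the all-tacos-left world
# unlikely, even though "some" is literally true in that world.
print(L1[some])
```

Running this, the pragmatic listener assigns much lower probability to the "all tacos left" world after hearing "some" than the literal listener does (for whom all literally consistent worlds are equally likely), reproducing the defeasible "not all" inference through reasoning about the speaker's choice of words.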


Current Research

Past Research

Courses & Workshops