Towards more human-like language in multi-agent communication
Recent research has trained artificial agents to communicate with each other via task-oriented language, with the dual aims of (1) improving agent collaboration and (2) studying language evolution in artificial settings. This talk will describe two studies of emergent linguistic phenomena in these multi-agent systems. First, pragmatics: I’ll present an amortized version of the Rational Speech Acts (RSA) model that learns to produce pragmatic language not via online Bayesian reasoning, but directly from a communicative training objective. Second, generalization: I’ll propose an extension of Lewis-style signaling games to *sets* of objects encoding abstract visual concepts, showing how the need to communicate generalizations encourages more systematic and compositional agent language. Together, these results shed light on the environmental pressures that give rise to human language, and mark progress towards communication-based self-play as an effective way to endow agents with rich linguistic abilities.
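For context, the "online Bayesian reasoning" that the amortized model replaces is the standard RSA recursion: a literal listener derived from a truth-conditional lexicon, a speaker who softmax-chooses informative utterances, and a pragmatic listener who inverts that speaker. A minimal sketch follows; the lexicon, object names, and rationality parameter are illustrative assumptions, not details from the talk.

```python
import numpy as np

def normalize(m, axis):
    """Turn a non-negative matrix into probabilities along `axis`."""
    return m / m.sum(axis=axis, keepdims=True)

def rsa(lexicon, alpha=1.0):
    """One round of RSA recursion over a (utterances x objects) truth table."""
    l0 = normalize(lexicon, axis=1)      # literal listener  P(obj | utt)
    s1 = normalize(l0 ** alpha, axis=0)  # pragmatic speaker P(utt | obj)
    l1 = normalize(s1, axis=1)           # pragmatic listener P(obj | utt)
    return s1, l1

# Hypothetical lexicon; objects = [blue_circle, blue_square, green_square].
lexicon = np.array([
    [1., 1., 0.],  # "blue"
    [0., 1., 1.],  # "square"
    [1., 0., 0.],  # "circle"
    [0., 0., 1.],  # "green"
])

s1, l1 = rsa(lexicon)
# Hearing "blue", the pragmatic listener favors blue_square (0.6 vs 0.4):
# had the speaker meant blue_circle, "circle" would have been more informative.
print(l1[0])
```

The amortized approach in the talk trains a speaker to produce such pragmatic behavior directly, rather than running this inference at every utterance.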