Learning Neuro-symbolic Dialogue Strategies for Interactive Symbol Grounding
DOI: https://doi.org/10.33011/lilt.v20.a1

Keywords: Formal Semantics, Symbol Grounding, Decision Making under Uncertainty, Interactive Task Learning

Abstract
Interactive task learning studies situations in which a teacher (task instructor) interacts with a learner (task executor) to perform a novel task in an embodied environment. To successfully interpret the teacher's utterances, the learner has to perform interactive symbol grounding: it must update its prior beliefs about the mapping from symbols to visual referents each time the teacher speaks. Interactive symbol grounding is even more challenging if the learner starts out unaware of concepts that are critical to task success. In that case, the learner must use the embodied conversation to discover and adapt to unforeseen possibilities, and so must cope with a continuously expanding hypothesis space and hence a non-stationary domain model, requiring structure-level updates during interaction. In this paper, we propose a neuro-symbolic model for learning dialogue strategies for achieving interactive symbol grounding. In particular, we study the effects of enriching the model with symbolic reasoning that captures the valid consequences of quantifiers (e.g., both, every). Our hypothesis is that utilizing such reasoning makes interactive task learning more data efficient. We test this empirically via a task of interactive reference resolution, in which the learner must jointly learn a grounding model and a policy for querying the teacher to enhance its accuracy in grounding. Our results show that a learner that exploits such symbolic reasoning for both decision-making and grounding is more data-efficient than learners that ignore such linguistic insights.
License
Copyright (c) 2025 Rimvydas Rubavicius, Alex Lascarides, Subramanian Ramamoorthy

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, provided the original author(s) and source are credited.