Using Deep Neural Networks to Learn Syntactic Agreement

Authors

  • Jean-Philippe Bernardy, University of Gothenburg
  • Shalom Lappin, University of Gothenburg, King's College London & Queen Mary University of London

DOI

https://doi.org/10.33011/lilt.v15i.1413

Abstract

We consider the extent to which different deep neural network (DNN) configurations can learn syntactic relations, by taking up Linzen et al.'s (2016) work on subject-verb agreement with LSTM RNNs. We test their methods on a much larger corpus than they used (a ~24-million-example portion of the WaCky corpus, instead of their ~1.35-million-example corpus, both drawn from Wikipedia). We experiment with several different DNN architectures (LSTM RNNs, GRUs, and CNNs) and with alternative parameter settings for these systems (vocabulary size, training-to-test ratio, number of layers, memory size, dropout rate, and lexical embedding dimension size). We also try out our own unsupervised DNN language model. Our results are broadly compatible with those that Linzen et al. report. However, we discovered some interesting, and in some cases surprising, features of DNNs and language models in their performance of the agreement learning task. In particular, we found that DNNs require large vocabularies to form substantive lexical embeddings in order to learn structural patterns. This finding has interesting consequences for our understanding of the way in which DNNs represent syntactic information. It suggests that DNNs learn syntactic patterns more efficiently through rich lexical embeddings, with semantic as well as syntactic cues, than from training on lexically impoverished strings that highlight structural patterns.
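To make the task concrete, below is a minimal sketch, not the authors' implementation, of the kind of supervised agreement classifier the abstract describes: a model reads the words preceding a verb and predicts whether the verb should be singular or plural. All hyperparameter names and values here (vocab_size, embedding_dim, memory_size, dropout_rate, max_len) are illustrative assumptions, not settings reported in the paper.

```python
# Minimal sketch (assumed setup, not the authors' code) of a
# subject-verb agreement classifier: predict the verb's number
# from the integer-encoded sentence prefix before the verb.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 10000    # lexical vocabulary size (assumed)
embedding_dim = 50    # lexical embedding dimension (assumed)
memory_size = 50      # LSTM hidden-state size (assumed)
dropout_rate = 0.1    # dropout rate (assumed)
max_len = 50          # maximum prefix length in tokens (assumed)

model = keras.Sequential([
    layers.Embedding(vocab_size, embedding_dim),  # learned lexical embeddings
    layers.LSTM(memory_size, dropout=dropout_rate),
    layers.Dense(1, activation="sigmoid"),        # P(verb form is plural)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# x: sentence prefixes up to (but excluding) the verb, padded to max_len;
# y: 1 if the correct verb form is plural, 0 if singular.
# Random data stands in for a real corpus in this sketch.
x = np.random.randint(0, vocab_size, size=(32, max_len))
y = np.random.randint(0, 2, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```

Swapping the LSTM layer for layers.GRU, or for a stack of layers.Conv1D with pooling, would give the GRU and CNN variants the abstract compares; the unsupervised language-model setting instead trains the network to predict the next word and compares its scores for the singular and plural verb forms.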

Published

2017-01-01

How to Cite

Bernardy, J.-P., & Lappin, S. (2017). Using Deep Neural Networks to Learn Syntactic Agreement. Linguistic Issues in Language Technology, 15. https://doi.org/10.33011/lilt.v15i.1413

Issue

Vol. 15 (2017)

Section

Articles