Can Recurrent Neural Networks Validate Usage-Based Theories of Grammar Acquisition?

DOI: https://doi.org/10.3389/fpsyg.2022.741321

Journal: Frontiers in Psychology, 2022

Publisher: Frontiers Media SA

Authors: Ludovica Pannitto, Aurélie Herbelot

Abstract

It has been shown that Recurrent Artificial Neural Networks automatically acquire some grammatical knowledge in the course of performing linguistic prediction tasks. The extent to which such networks can actually learn grammar is still an object of investigation. However, being mostly data-driven, they provide a natural testbed for usage-based theories of language acquisition. This mini-review gives an overview of the state of the field, focusing on the influence of the theoretical framework in the interpretation of results.

List of references

  1. Alishahi, Analyzing and interpreting neural networks for NLP: a report on the first BlackboxNLP workshop, Nat. Lang. Eng., vol. 25, p. 543
    https://doi.org/10.1017/S135132491900024X
  2. Arehalli, Neural language models capture some, but not all, agreement attraction effects, CogSci 2020
  3. Baroni, Linguistic generalization and compositionality in modern artificial neural networks, Philos. Trans. R. Soc. Lond. B Biol. Sci., vol. 375, p. 1
    https://doi.org/10.1098/rstb.2019.0307
  4. Barsalou, The instability of graded structure: implications for the nature of concepts, Concepts and Conceptual Development: Ecological and Intellectual Factors in Categorization, p. 101
  5. Boyd, Input effects within a constructionist framework, Mod. Lang. J., vol. 93, p. 418
    https://doi.org/10.1111/j.1540-4781.2009.00899.x
  6. Chelba, One billion word benchmark for measuring progress in statistical language modeling, arXiv [cs.CL]
  7. Chowdhury, RNN simulations of grammaticality judgments on long-distance dependencies, Proceedings of the 27th International Conference on Computational Linguistics, p. 133
  8. Christiansen, Implicit statistical learning: a tale of two literatures, Top. Cogn. Sci., vol. 11, p. 468
    https://doi.org/10.1111/tops.12332
  9. Christiansen, The now-or-never bottleneck: a fundamental constraint on language, Behav. Brain Sci., vol. 39, p. 1
    https://doi.org/10.1017/S0140525X1500031X
  10. Christiansen, Creating Language: Integrating Evolution, Acquisition, and Processing
    https://doi.org/10.7551/mitpress/10406.001.0001
  11. Cornish, Sequence memory constraints give rise to language-like structure through iterated learning, PLoS ONE, vol. 12, p. 1
    https://doi.org/10.1371/journal.pone.0168532
  12. Davis, Discourse structure interacts with reference but not syntax in neural language models, Proceedings of the 24th Conference on Computational Natural Language Learning, p. 396
    https://doi.org/10.18653/v1/2020.conll-1.32
  13. Davis, Recurrent neural network language models always learn English-like relative clause attachment, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, p. 1979
    https://doi.org/10.18653/v1/2020.acl-main.179
  14. Dyer, Recurrent neural network grammars, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2016), p. 199
  15. Elman, On the meaning of words and dinosaur bones: lexical knowledge without a lexicon, Cogn. Sci., vol. 33, p. 547
    https://doi.org/10.1111/j.1551-6709.2009.01023.x
  16. Fazekas, Do children learn from their prediction mistakes? A registered report evaluating error-based theories of language acquisition, R. Soc. Open Sci., vol. 7, p. 180877
    https://doi.org/10.1098/rsos.180877
  17. Giulianelli, Under the hood: using diagnostic classifiers to investigate and improve how language models track agreement information, Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, p. 240
    https://doi.org/10.18653/v1/W18-5426
  18. Goldberg, Constructions at Work: The Nature of Generalization in Language
  19. Gomez, Artificial grammar learning by 1-year-olds leads to specific and abstract knowledge, Cognition, vol. 70, p. 109
    https://doi.org/10.1016/S0010-0277(99)00003-7
  20. Gómez, Infant artificial language learning and language acquisition, Trends Cogn. Sci., vol. 4, p. 178
    https://doi.org/10.1016/S1364-6613(00)01467-4
  21. Gulordava, Colorless green recurrent networks dream hierarchically, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1, p. 1195
  22. Hart, Meaningful differences in the everyday experience of young American children, Can. J. History Sport Phys. Educ., vol. 22, p. 323
  23. Hawkins, Investigating representations of verb bias in neural language models, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), p. 4653
    https://doi.org/10.18653/v1/2020.emnlp-main.376
  24. Hochreiter, Long short-term memory, Neural Comput., vol. 9, p. 1735
    https://doi.org/10.1162/neco.1997.9.8.1735
  25. Hu, A systematic assessment of syntactic generalization in neural language models, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, p. 1725
    https://doi.org/10.18653/v1/2020.acl-main.158
  26. Huebner, BabyBERTa: learning more grammar with small-scale child-directed language, Proceedings of the 25th Conference on Computational Natural Language Learning, p. 624
    https://doi.org/10.18653/v1/2021.conll-1.49
  27. Jackendoff, Foundations of Language
    https://doi.org/10.1093/acprof:oso/9780198270126.001.0001
  28. Kharitonov, How BPE affects memorization in transformers, arXiv preprint
  29. Kuncoro, What do recurrent neural network grammars learn about syntax?, Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, p. 1249
  30. Kuncoro, LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, p. 1426
  31. Lakretz, The emergence of number and syntax units in LSTM language models, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), p. 11
  32. Lepori, Representations of syntax [MASK] useful: effects of constituency and dependency structure in recursive LSTMs, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, p. 3306
    https://doi.org/10.18653/v1/2020.acl-main.303
  33. Linzen, Syntactic structure from deep learning, Annu. Rev. Linguist., vol. 7, p. 1
    https://doi.org/10.1146/annurev-linguistics-032020-051035
  34. Linzen, T., Chrupała, G., Alishahi, A. (eds.), Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Brussels: Association for Computational Linguistics, 2018
  35. Linzen, T., Chrupała, G., Belinkov, Y., Hupkes, D. (eds.), Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Florence: Association for Computational Linguistics, 2019
  36. Liu, Probing across time: what does RoBERTa know and when?, Findings of the Association for Computational Linguistics: EMNLP 2021, p. 820
    https://doi.org/10.18653/v1/2021.findings-emnlp.71
  37. Marvin, Targeted syntactic evaluation of language models, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, p. 1192
    https://doi.org/10.18653/v1/D18-1151
  38. McCoy, Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks, CogSci, p. 2096
  39. McCoy, Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks, Trans. Assoc. Comput. Linguist., vol. 8, p. 125
    https://doi.org/10.1162/tacl_a_00304
  40. McRae, People use their knowledge of common events to understand language, and do so as quickly as possible, Lang. Linguist. Compass, vol. 3, p. 1417
    https://doi.org/10.1111/j.1749-818X.2009.00174.x
  41. Pannitto, Recurrent babbling: evaluating the acquisition of grammar from limited input data, Proceedings of the 24th Conference on Computational Natural Language Learning, p. 165
    https://doi.org/10.18653/v1/2020.conll-1.13
  42. Pickering, An integrated theory of language production and comprehension, Behav. Brain Sci., vol. 36, p. 329
    https://doi.org/10.1017/S0140525X12001495
  43. Ramscar, Error and expectation in language learning: the curious absence of mouses in adult speech, Language, vol. 89, p. 760
    https://doi.org/10.1353/lan.2013.0068
  44. Romberg, Statistical learning and language acquisition, Wiley Interdiscip. Rev. Cogn. Sci., vol. 1, p. 906
    https://doi.org/10.1515/9781934078242
  45. Saffran, Statistical learning by 8-month-old infants, Science, vol. 274, p. 1926
    https://doi.org/10.1126/science.274.5294.1926
  46. Tomasello, Constructing a Language: A Usage-Based Theory of Language Acquisition
  47. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Attention is all you need, Advances in Neural Information Processing Systems. Long Beach, CA: Curran Associates, 2017
  48. Warstadt, BLiMP: the benchmark of linguistic minimal pairs for English, Trans. Assoc. Comput. Linguist., vol. 8, p. 377
    https://doi.org/10.1162/tacl_a_00321
  49. Warstadt, Learning which features matter: RoBERTa acquires a preference for linguistic generalizations (eventually), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
    https://doi.org/10.18653/v1/2020.emnlp-main.16
  50. Warstadt, A., Bowman, S. R., Can neural networks acquire a structural bias from raw linguistic data?, Proceedings of the 42nd Annual Meeting of the Cognitive Science Society (CogSci 2020) - Developing a Mind: Learning in Humans, Animals, and Machines, 2020
  51. Wilcox, What do RNN language models learn about filler-gap dependencies?, Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
  52. Yu, Word frequency does not predict grammatical knowledge in language models, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), p. 4040
    https://doi.org/10.18653/v1/2020.emnlp-main.331

Publications that cite this publication

Quantum projections on conceptual subspaces

Alejandro Martínez-Mingo, Guillermo Jorge-Botana, José Ángel Martinez-Huertas, Ricardo Olmos Albacete

https://doi.org/10.1016/j.cogsys.2023.101154

2023, Cognitive Systems Research, p. 101154


About this publication

Number of citations 0
Number of works in the list of references 52
Journal indexed in Scopus Yes
Journal indexed in Web of Science Yes