Word-object learning via visual exploration in space (WOLVES) : a neural process model of cross-situational word learning / Ajaz A. Bhat, John P. Spencer, and Larissa K. Samuelson

By: Bhat, Ajaz A.
Contributor(s): Spencer, John P. | Samuelson, Larissa K.
Material type: Text
Publication details: Washington, D.C. : American Psychological Association, c2022
Description: pages 640-695 : tables, figures
ISSN: 0033-295X
In: Psychological Review, Volume 129, Number 4 (July 2022)
Summary: Infants, children and adults have been shown to track co-occurrence across ambiguous naming situations to infer the referents of new words. The extensive literature on this cross-situational word learning (CSWL) ability has produced support for two theoretical accounts—associative learning (AL) and hypothesis testing (HT)—but no comprehensive model of the behaviour. We propose WOLVES, an implementation-level account of CSWL grounded in real-time psychological processes of memory and attention that explicitly models the dynamics of looking at a moment-to-moment scale and learning across trials. We use WOLVES to capture data from 12 studies of CSWL with adults and children, thereby providing a comprehensive account of data purported to support both AL and HT accounts. Direct model comparison shows that WOLVES performs well relative to two competitor models. In particular, WOLVES captures more data than the competitor models (132 vs. 69 data values) and fits the data better than the competitor models (e.g., lower percent error scores for 12 of 17 conditions). Moreover, WOLVES generalizes more accurately to three ‘held-out’ experiments, although a model by Kachergis and colleagues (2012) fares better on another metric of generalization (AIC/BIC). Critically, we offer the first developmental account of CSWL, providing insights into how memory processes change from infancy through adulthood. WOLVES shows that visual exploration and selective attention in CSWL are both dependent on and indicative of learning within a task-specific context. Further, learning is driven by real-time synchrony of words and gaze and constrained by memory processes over multiple timescales.
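As background for the summary above: the short sketch below is not from the article and is not the WOLVES model itself; it only illustrates the simplest associative-learning reading of cross-situational word learning, in which a learner accumulates word-object co-occurrence counts across individually ambiguous trials and, at test, picks the object that co-occurred most often with each word. The words, objects, and trials are hypothetical toy examples.

```python
# Toy illustration of cross-situational word learning (CSWL) via simple
# co-occurrence counting -- the basic associative-learning (AL) baseline
# discussed in the literature, NOT the WOLVES model. All items are made up.

from collections import defaultdict

# Each trial presents several words and several objects with no explicit
# pairing, so no single trial identifies which word names which object.
trials = [
    (["dax", "blick"], ["DOG", "CUP"]),
    (["dax", "wug"],   ["DOG", "BALL"]),
    (["blick", "wug"], ["CUP", "BALL"]),
    (["dax", "blick"], ["DOG", "CUP"]),
]

# Accumulate word-object co-occurrence counts across trials.
counts = defaultdict(lambda: defaultdict(int))
for words, objects in trials:
    for w in words:
        for o in objects:
            counts[w][o] += 1

# At test, choose the object that co-occurred most often with each word.
for word in ["dax", "blick", "wug"]:
    best = max(counts[word], key=counts[word].get)
    print(word, "->", best, dict(counts[word]))
```

Running the sketch shows that although each trial is ambiguous on its own, the aggregated counts disambiguate the pairings ("dax" ends up co-occurring most often with DOG, and so on); this cross-trial statistical signal is the behaviour that both AL and HT accounts, and WOLVES, attempt to explain.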
Holdings
  Item type: Continuing Resources
  Current library: NU Clark
  Collection: Journals
  Call number: Reference
  Status: Available

Includes appendices (pages 687-695).

Includes bibliographical references (pages 682-687).
