AI Learns Through the Eyes and Ears of a Child – New York University

Published on February 4th, 2024


AI systems, such as GPT-4, can now learn and use human language, but they learn from astronomical amounts of language input, far more than children receive when learning how to understand and speak a language. The best AI systems train on text with a word count in the trillions, whereas children receive just millions of words per year.

Due to this enormous data gap, researchers have been skeptical that recent AI advances can tell us much about human learning and development. An ideal test for demonstrating a connection would involve training an AI model, not on massive data from the web, but on only the input that a single child receives. What would the model be able to learn then?

A team of New York University researchers ran this exact experiment. They trained a multimodal AI system through the eyes and ears of a single child, using headcam video recordings collected from when the child was six months old through their second birthday. They examined whether the AI model could learn words and concepts present in a child's everyday experience.

Their findings, reported in the latest issue of the journal Science, showed that the model, or neural network, could, in fact, learn a substantial number of words and concepts from limited slices of what the child experienced. The video captured only about 1% of the child's waking hours, but that was sufficient for genuine language learning.

"We show, for the first time, that a neural network trained on this developmentally realistic input from a single child can learn to link words to their visual counterparts," says Wai Keen Vong, a research scientist at NYU's Center for Data Science and the paper's first author. "Our results demonstrate how recent algorithmic advances paired with one child's naturalistic experience have the potential to reshape our understanding of early language and concept acquisition."

"By using AI models to study the real language-learning problem faced by children, we can address classic debates about what ingredients children need to learn words, whether they need language-specific biases, innate knowledge, or just associative learning to get going," adds Brenden Lake, an assistant professor in NYU's Center for Data Science and Department of Psychology and the paper's senior author. "It seems we can get more with just learning than commonly thought."

Vong, Lake, and their NYU colleagues, Wentao Wang and Emin Orhan, analyzed a child's learning process captured on first-person video, recorded via a light, head-mounted camera, on a weekly basis from six months through 25 months, using more than 60 hours of footage. The footage contained approximately a quarter of a million word instances (i.e., the number of words communicated, many of them repeated) linked with video frames of what the child saw when those words were spoken, and it spanned a wide range of activities across development, including mealtimes, book reading, and play.
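The article does not spell out the model's architecture, but the core idea of pairing each spoken word with the frames the child saw at that moment can be illustrated with a contrastive-learning sketch. Everything below is a toy assumption for illustration: the class name, encoder sizes, and dummy data are hypothetical and are not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameWordAligner(nn.Module):
    """Toy model that embeds video frames and word tokens into a shared
    space so that co-occurring (frame, word) pairs score highly."""

    def __init__(self, vocab_size, embed_dim=128):
        super().__init__()
        # Tiny stand-in for a vision encoder (real work would use a CNN or ViT).
        self.vision_encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, embed_dim),
        )
        # A word-embedding table stands in for a language encoder.
        self.word_encoder = nn.Embedding(vocab_size, embed_dim)

    def forward(self, frames, word_ids):
        img = F.normalize(self.vision_encoder(frames), dim=-1)
        txt = F.normalize(self.word_encoder(word_ids), dim=-1)
        return img, txt

def contrastive_loss(img, txt, temperature=0.07):
    # Similarity of every frame with every word in the batch; matched pairs
    # sit on the diagonal and are pushed to score higher than mismatches.
    logits = img @ txt.t() / temperature
    targets = torch.arange(img.size(0))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Example: a batch of 8 co-occurring (frame, word) pairs with dummy data.
model = FrameWordAligner(vocab_size=1000)
frames = torch.randn(8, 3, 64, 64)        # headcam frames (placeholder tensors)
word_ids = torch.randint(0, 1000, (8,))   # words heard at those moments
img, txt = model(frames, word_ids)
loss = contrastive_loss(img, txt)
loss.backward()
```

Under this framing, the only supervision is co-occurrence: the model never receives word definitions, only whatever was in view when a word was spoken.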

