AI goes to ‘kindergarten’ in order to learn more complex tasks

Modeling the animal’s learning experience. Credit: bioRxiv DOI: 10.1101/2024.01.12.575461

We need to learn our letters before we can learn to read, and our numbers before we can learn to add and subtract. The same principles hold for AI, a team of New York University scientists has shown through laboratory experiments and computational modeling.

In their work, published in the journal Nature Machine Intelligence, researchers found that when recurrent neural networks (RNNs) are first trained on simple cognitive tasks, they are better equipped to handle more difficult and complex ones later on.

The paper’s authors labeled this form of training “kindergarten curriculum learning” because it centers on first instilling an understanding of basic tasks and then combining that knowledge to carry out more challenging ones.

“From very early on in life, we develop a set of basic skills like maintaining balance or playing with a ball,” explains Cristina Savin, an associate professor in NYU’s Center for Neural Science and Center for Data Science.

“With experience, these basic skills can be combined to support complex behavior—for instance, juggling several balls while riding a bicycle.

“Our work adopts these same principles in enhancing the capabilities of RNNs, which first learn a series of easy tasks, store this knowledge, and then apply a combination of these learned tasks to successfully complete more sophisticated ones.”
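For readers who want to see the idea in code, the sketch below shows what such staged training could look like. It is only an illustration, assuming PyTorch, a toy recurrent network, and random placeholder data in place of the study’s actual cognitive tasks; the task content, network sizes, and training settings are invented and are not the authors’ code.

```python
# Minimal sketch of "kindergarten" curriculum training, assuming PyTorch.
# The tasks here are random placeholder data; the real study used cognitive
# tasks such as responding to cues, waiting out delays, and wagering.
import torch
import torch.nn as nn

class SimpleRNN(nn.Module):
    def __init__(self, n_in=4, n_hidden=64, n_out=2):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)        # process the input sequence step by step
        return self.readout(h)    # per-timestep outputs

def toy_task(n_trials=32, seq_len=20, n_in=4, n_out=2):
    """Stand-in for one cognitive task: random inputs and targets of matching shape."""
    return torch.randn(n_trials, seq_len, n_in), torch.randn(n_trials, seq_len, n_out)

def train_on_task(model, task, epochs=100, lr=1e-3):
    """Train the same network on one task; its weights carry over to the next task."""
    inputs, targets = task
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()

model = SimpleRNN()

# "Kindergarten" stage: a sequence of simple subtasks learned one after another.
for simple_task in (toy_task(), toy_task()):
    train_on_task(model, simple_task)

# Final stage: the complex task that builds on what the subtasks taught the network.
train_on_task(model, toy_task())
```

The key design choice is that the same network, with the same weights, is carried through every stage, so whatever the simple tasks teach it is available when the complex task arrives.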

RNNs—neural networks that are designed to process sequential information based on stored knowledge—are particularly useful in speech recognition and language translation.

However, when it comes to complex cognitive tasks, training RNNs with existing methods can prove difficult and fall short of capturing crucial aspects of animal and human behavior that AI systems aim to replicate.

To address this, the study’s authors—who also included David Hocker, a postdoctoral researcher in NYU’s Center for Data Science, and Christine Constantinople, a professor in NYU’s Center for Data Science—first conducted a series of experiments with laboratory rats.

The animals were trained to seek out a water source in a box with several compartmentalized ports. However, in order to know when and where the water would be available, the rats needed to learn that delivery of the water was associated with certain sounds and the illumination of the port’s lights—and that the water was not delivered immediately after these cues.

To reach the water, then, the animals needed to develop basic knowledge of multiple phenomena (e.g., that certain sounds precede water delivery, and that they must wait after the visual and audio cues before trying to access the water) and then learn to combine these simple tasks to complete a goal (water retrieval).

These results pointed to principles of how the animals applied knowledge of simple tasks in undertaking more complex ones.

The scientists took these findings to train RNNs in a similar fashion—but, instead of water retrieval, the RNNs managed a wagering task that required these networks to build upon basic decision-making in order to maximize the payoff over time. They then compared this kindergarten curriculum learning approach to existing RNN-training methods.
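Continuing from the toy sketch above, and with the same caveat that this is an illustration rather than the paper’s benchmark, one rough way to make such a comparison is to count how many training epochs a curriculum-pretrained network and an identically sized network trained from scratch each need before their loss on the complex task falls below a fixed threshold. All data and thresholds here are arbitrary placeholders.

```python
# Rough comparison of curriculum pretraining vs. training from scratch, reusing
# SimpleRNN, toy_task, and train_on_task from the sketch above. The loss threshold
# and task data are arbitrary placeholders, not the paper's evaluation protocol.
def epochs_to_criterion(model, task, threshold=0.9, max_epochs=500, lr=1e-3):
    """Count epochs until the model's loss on `task` drops below `threshold`."""
    inputs, targets = task
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for epoch in range(1, max_epochs + 1):
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()
        if loss.item() < threshold:
            return epoch
    return max_epochs

complex_task = toy_task()

curriculum_model = SimpleRNN()
for simple_task in (toy_task(), toy_task()):   # "kindergarten" pretraining
    train_on_task(curriculum_model, simple_task)

scratch_model = SimpleRNN()                    # no pretraining

print("curriculum:  ", epochs_to_criterion(curriculum_model, complex_task))
print("from scratch:", epochs_to_criterion(scratch_model, complex_task))
```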

Overall, the team’s results showed that the RNNs trained on the kindergarten model learned faster than those trained on current methods.

“AI agents first need to go through kindergarten to later be able to better learn complex tasks,” observes Savin.

“Overall, these results point to ways to improve learning in AI systems and call for developing a more holistic understanding of how past experiences influence learning of new skills.”

More information:
Compositional pretraining improves computational efficiency and matches animal behaviour on complex tasks, Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01029-3
Preprint on bioRxiv: DOI: 10.1101/2024.01.12.575461

Provided by
New York University


