Bilinear sequence regression model shows why AI excels at learning from word sequences


Researchers at EPFL have created a mathematical model that helps explain how breaking language into sequences of tokens makes modern AI chatbots so good at understanding and using words. The work is published in the journal Physical Review X.

There is no doubt that AI technology dominates our world today. Progress is moving in leaps and bounds, especially in large language models (LLMs) such as ChatGPT.

But how do they work? LLMs are made up of neural networks that process long sequences of “tokens.” Each token is typically a word or part of a word and is represented by a list of hundreds or thousands of numbers—what researchers call a “high-dimensional vector.” This list captures the word’s meaning and how it’s used.

For example, the word “cat” might become a list like [0.15, -0.22, 0.47, …, 0.09], while “dog” is encoded in a similar way but with its own unique numbers. Words with similar meanings get similar lists, so the LLM can recognize that “cat” and “dog” are more alike than “cat” and “banana.”
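
To make that concrete, here is a minimal sketch of how such similarity can be measured, using cosine similarity between toy vectors. The numbers are invented for illustration; real embeddings are learned from data and far longer:

```python
import numpy as np

# Toy 4-dimensional embeddings. Real models use hundreds or thousands
# of dimensions, and the values are learned rather than hand-picked.
embeddings = {
    "cat":    np.array([0.15, -0.22, 0.47, 0.09]),
    "dog":    np.array([0.13, -0.19, 0.42, 0.12]),
    "banana": np.array([-0.40, 0.31, -0.05, 0.27]),
}

def cosine_similarity(a, b):
    # Close to 1.0: the vectors point the same way (similar meaning).
    # Near zero or negative: the vectors are unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))     # ~0.99
print(cosine_similarity(embeddings["cat"], embeddings["banana"]))  # ~-0.40
```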

A black box, even for experts

Processing language as sequences of these vectors is clearly effective, but, ironically, we don’t really understand why. Simple mathematical models for long sequences of these high-dimensional tokens are still mostly unexplored.

This leaves a gap in our understanding: Why does this approach work so well, and what makes it fundamentally different from older methods? Why is it better to present data to neural networks as sequences of high-dimensional tokens rather than as a single, long list of numbers? While today’s AI can write stories or answer questions impressively, the inner workings that make this possible are still a black box—even for experts.

Now, a team of scientists led by Lenka Zdeborová at EPFL has built the simplest possible mathematical model that still captures the heart of learning from tokens as LLMs do.

Their model, called bilinear sequence regression (BSR), strips away the complexity of real-world AI but keeps some of its essential structure and acts as a “theoretical playground” for studying how AI models learn from sequences.

How does BSR work? Imagine a sentence where you can turn each word into a list of numbers that captures its meaning—just like LLMs do. You line these lists up into a table, with one row per word. This table keeps track of the whole sequence and all the details packed into each word.
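
In code, that table is simply a matrix with one row per word. A minimal sketch, using random stand-in vectors instead of learned embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                       # embedding dimension (toy size)
vocab = {w: rng.standard_normal(d) for w in ["the", "cat", "sat"]}

sentence = ["the", "cat", "sat"]
X = np.stack([vocab[w] for w in sentence])  # shape (L, d): one row per word

print(X.shape)                              # (3, 8): L = 3 words, d = 8 numbers each
```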

A clear mathematical benchmark

Instead of processing all the information at once like older AI models, BSR looks at the rows of the table in one way and at the columns in another. The model then uses this information to predict a single outcome, such as the sentiment of the sentence.
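
In its simplest rank-one form, this amounts to one weight vector scoring the rows and another scoring the columns, combined into a single prediction. The sketch below is in the spirit of the model rather than a transcription of the paper, which analyzes a more general low-rank version; the normalization shown is a common convention, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
L, d = 3, 8                      # sequence length, embedding dimension
X = rng.standard_normal((L, d))  # the token table: one row per word

u = rng.standard_normal(L)       # one weight per row (word position)
v = rng.standard_normal(d)       # one weight per column (embedding dimension)

# Bilinear readout: u^T X v collapses the whole table into one number,
# e.g. a sentiment score. Training would fit u and v to labeled examples.
y_hat = float(u @ X @ v) / np.sqrt(L * d)
print(y_hat)
```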

The power of BSR is that it is simple enough to be fully solved with mathematics. This lets researchers see exactly when sequence-based learning starts to work, and how much data is needed for a model to reliably learn from patterns in sequences.

BSR sheds light on why we get better results using a sequence of embeddings rather than flattening all the data into one big vector. The model revealed sharp thresholds where learning jumps from useless to effective once it “sees” enough examples.
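
One rough intuition for that gap, offered with the caveat that the paper's result concerns how many training examples are needed rather than parameter counts: a linear readout on the flattened data needs a weight for every entry of the table, while the rank-one bilinear readout sketched above needs only one weight per row plus one per column.

```python
L, d = 128, 768               # e.g. 128 tokens, 768-dimensional embeddings

flattened_weights = L * d     # linear model on one long vector of L*d numbers
bilinear_weights = L + d      # rank-one bilinear model: u (length L) plus v (length d)

print(flattened_weights)      # 98304
print(bilinear_weights)       # 896
```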

This research offers a new lens for understanding the inner workings of large language models. By solving BSR exactly, the team provides a clear mathematical benchmark that takes a step toward a theory that can guide the design of future AI systems.

These insights could help scientists build models that are simpler, more efficient, and possibly more transparent.

More information:
Vittorio Erba et al., Bilinear Sequence Regression: A Model for Learning from Long Sequences of High-Dimensional Tokens, Physical Review X (2025). DOI: 10.1103/l4p2-vrxt

Provided by
École Polytechnique Fédérale de Lausanne


Citation:
Bilinear sequence regression model shows why AI excels at learning from word sequences (2025, June 20), retrieved 20 June 2025.

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
