AI News

“Periodic table of machine learning” could fuel AI discovery | MIT News | Massachusetts Institute of Technology

MIT researchers have developed a periodic table that organizes over 20 classical machine-learning algorithms, revealing connections and potential for algorithm fusion. The framework centers…

Apr 25, 2025

MIT researchers have created a periodic table that shows how more than 20 classical machine-learning algorithms are connected.

The new framework sheds light on how scientists could fuse strategies from different methods to improve existing AI models or come up with new ones. For instance, the researchers used their framework to combine elements of two different algorithms to create a new image-classification algorithm that performed 8 percent better than current state-of-the-art approaches.

The periodic table stems from one key idea: All these algorithms learn a specific kind of relationship between data points. While each algorithm may accomplish that in a slightly different way, the core mathematics behind each approach is the same. Building on these insights, the researchers identified a unifying equation that underlies many classical AI algorithms.

They used that equation to reframe popular methods and arrange them into a table, categorizing each based on the approximate relationships it learns. Just like the periodic table of chemical elements, which initially contained blank squares that were later filled in by scientists, the periodic table of machine learning also has empty spaces.

These spaces correspond to algorithms that should exist but haven't yet been discovered. The table gives researchers a toolkit to design new algorithms without the need to rediscover ideas from prior approaches, says Shaden Alshammari, an MIT graduate student and lead author of a paper on this new framework.

“It’s not just a metaphor,” adds Alshammari. “We’re starting to see machine learning as a system with structure that is a space we can explore rather than just guess our way through.” She is joined on the paper by John Hershey, a researcher at Google AI Perception; Axel Feldmann, an MIT graduate student; William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Mark Hamilton, an MIT graduate student and senior engineering manager at Microsoft.

The research will be presented at the International Conference on Learning Representations.

The researchers didn't set out to create a periodic table of machine learning. After joining the Freeman Lab, Alshammari began studying clustering, a machine-learning technique that classifies images by learning to organize similar images into nearby clusters.

She realized the clustering algorithm she was studying was similar to another classical machine-learning algorithm, called contrastive learning, and began digging deeper into the mathematics. Alshammari found that these two disparate algorithms could be reframed using the same underlying equation.
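As an illustrative sketch of why two such different methods can share one equation: each method's supervisory signal can be written as a target "neighbor distribution" over the other data points. The function names and inputs below are hypothetical, not taken from the paper:

```python
import numpy as np

def contrastive_targets(pair_index):
    """Contrastive-style target: each point's distribution puts all
    of its mass on its augmented partner (a positive pair)."""
    n = len(pair_index)
    p = np.zeros((n, n))
    p[np.arange(n), pair_index] = 1.0
    return p

def clustering_targets(labels):
    """Clustering-style target: each point's distribution is uniform
    over the *other* members of its cluster.
    Assumes every cluster has at least two members."""
    labels = np.asarray(labels)
    same = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)  # a point is not its own neighbor
    return same / same.sum(axis=1, keepdims=True)
```

Once both signals are expressed this way, the same objective — match a learned distribution to the target one — covers both algorithms; only the target changes.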

“We almost got to this unifying equation by accident. Once Shaden discovered that it connects two methods, we just started dreaming up new methods to bring into this framework. Almost every single one we tried could be added in,” Hamilton says. The framework they created, information contrastive learning (I-Con), shows how a variety of algorithms can be viewed through the lens of this unifying equation.

It includes everything from classification algorithms that can detect spam to the deep learning algorithms that power LLMs. The equation describes how such algorithms find connections between real data points and then approximate those connections internally. Each algorithm aims to minimize the amount of deviation between the connections it learns to approximate and the real connections in its training data.
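The "minimize deviation between learned and real connections" step can be sketched as a KL divergence between two row-normalized neighbor distributions. This is a minimal illustration under that assumption; the function names and the softmax-over-similarities construction are this sketch's choices, not the paper's published implementation:

```python
import numpy as np

def neighbor_distribution(sim, temperature=1.0):
    """Turn a pairwise-similarity matrix into a per-point neighbor
    distribution via a row-wise softmax (self-similarity masked out)."""
    logits = sim / temperature
    np.fill_diagonal(logits, -np.inf)          # a point is not its own neighbor
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def icon_style_loss(target_sim, embeddings, temperature=1.0):
    """Average KL divergence between the data-defined neighbor
    distribution p and the model's learned distribution q."""
    p = neighbor_distribution(target_sim)
    q = neighbor_distribution(embeddings @ embeddings.T, temperature)
    eps = 1e-12  # avoid log(0) on masked entries
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)))
```

The loss is zero exactly when the model's learned connections reproduce the real ones, and grows as the two sets of connections diverge — which is the behavior the article attributes to the unifying equation.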