A periodic table of machine learning

MIT researchers have developed a single equation that unifies many machine-learning loss functions, streamlining how algorithms solve different problems.
They created a “periodic table” of more than 20 classical machine-learning algorithms, showing how they are all connected through a shared mathematical principle: each method learns data representations by pushing the relationships among data points to match the relationships specified by supervisory information.
Using this framework, they combined elements of two existing algorithms to develop a new image-classification method that classified images 8% more accurately than a state-of-the-art approach.
Like the periodic table of chemical elements, their table has empty spaces, suggesting future algorithms yet to be discovered. This tool helps researchers create new machine-learning methods without reinventing past ideas.
The researchers didn’t plan to create a periodic table for machine learning—it happened unexpectedly.
Shaden Alshammari, a researcher in the Freeman Lab, was studying clustering, a technique for grouping similar images together. She noticed parallels between clustering and contrastive learning, another machine-learning method. As she worked through the math behind both, she realized they could be explained using the same equation.
This discovery led to the development of information contrastive learning (I-Con), a framework that unifies a wide range of machine-learning algorithms, from spam-detection classifiers to the deep learning models behind LLMs. The key idea behind I-Con is that all of these algorithms approximate the real-world connections between data points while keeping the error of that approximation as small as possible.
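To make the shape of that idea concrete, here is a minimal PyTorch sketch, not the authors' exact formulation: it assumes the unifying loss can be written as the average KL divergence between a supervisory neighbor distribution p(j|i) and a learned neighbor distribution q(j|i) induced by embedding similarities. The function name, the dot-product similarity, and the temperature are illustrative choices.

```python
import torch
import torch.nn.functional as F

def unified_loss(p_target: torch.Tensor, embeddings: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """Sketch of a unifying representation-learning loss.

    p_target:   (N, N) matrix; row i is a probability distribution over
                which points j count as "true" neighbors of point i,
                built from supervision (labels, augmentations, a k-NN
                graph, ...).
    embeddings: (N, D) learned representations of the N data points.

    Returns the average KL divergence KL(p || q), where q(j|i) is a
    softmax over embedding similarities: the model's guess at neighbors.
    """
    # Pairwise similarities between learned representations.
    sim = embeddings @ embeddings.T / temperature             # (N, N)
    # Exclude self-pairs with a large negative (finite) score, so the
    # KL term stays well defined where p_target is zero.
    sim = sim - torch.eye(len(sim), device=sim.device) * 1e9
    log_q = F.log_softmax(sim, dim=1)                         # log q(j|i)
    # KL(p || q), averaged over the N anchor points i.
    return F.kl_div(log_q, p_target, reduction="batchmean")
```

In this framing, swapping in different neighbor distributions recovers different classical methods: class labels give supervised objectives, augmentation pairs give contrastive learning, and cluster assignments give clustering losses.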
The team organized I-Con into a periodic table to make sense of their findings, categorizing algorithms by the type of connection they assume between data points and by the way they approximate those connections.
As they arranged the table, they noticed gaps: cells where algorithms should exist but have not yet been developed.
They adapted ideas from contrastive learning and applied them to image clustering to fill one such gap. The result? A new algorithm that classified unlabeled images 8% more accurately than a leading approach. They also demonstrated that a data debiasing method, originally designed for contrastive learning, could improve clustering accuracy.
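One way such a debiasing correction can be expressed, offered here as an assumption rather than the authors' exact technique, is to smooth the supervisory neighbor distribution toward uniform, so that points wrongly treated as unrelated keep a little target probability. The helper name and alpha value are illustrative:

```python
import torch

def debias_targets(p_target: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Mix a supervisory neighbor distribution with a uniform one.

    This hedges against false negatives: if two images are related but
    the supervision says they are not, the uniform component keeps
    their target probability above zero.
    """
    n = p_target.shape[-1]
    uniform = torch.full_like(p_target, 1.0 / n)
    mixed = (1.0 - alpha) * p_target + alpha * uniform
    # Keep self-pairs excluded, matching the loss sketch above, and
    # renormalize each row so it sums to 1 again.
    mixed.fill_diagonal_(0.0)
    return mixed / mixed.sum(dim=-1, keepdim=True)
```

The smoothed targets can then replace p_target in a loss like the sketch above.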
Because their table is flexible, researchers can add new rows and columns to represent additional types of connections between data points.
Ultimately, the I-Con framework guides machine-learning scientists, helping them think in new ways and combine methods that might not have been linked before. Hamilton sees it as a powerful tool, noting that “one elegant equation,” rooted in information theory, has unified algorithms spanning a century of research.
In a field flooded with new studies, unifying frameworks like I-Con provide clarity and open new doors for discovery.
Journal Reference:
- Shaden Naif Alshammari, John R. Hershey, Axel Feldmann, William T. Freeman, Mark Hamilton. A Unifying Framework for Representation Learning.