A new study challenges decades-old assumptions about the physics of foams, revealing that bubbles continue to reorganize internally even as the material appears solid. Strikingly, the underlying mathematics mirrors how modern artificial intelligence systems learn.
A microscopic close-up of bubbles in a foam, whose movements mathematically mirror deep learning, the process used to train modern AI systems. (Source: Crocker Lab)
Foams are everywhere: soap suds, shaving cream and whipped toppings, along with closely related two-phase materials like the emulsions in mayonnaise. For decades, scientists believed that foams behave like glass, their microscopic components trapped in static, disordered configurations.
Now, engineers at the University of Pennsylvania have found that foams actually flow ceaselessly inside while holding their external shape. More strangely, from a mathematical perspective, this internal motion resembles the process of deep learning, the method typically used to train modern AI systems.
The discovery hints that learning, in a broad mathematical sense, may be a common organizing principle across physical, biological and computational systems, and could provide a conceptual foundation for future efforts to design adaptive materials. The insight could also shed new light on biological structures that continuously rearrange themselves, like the scaffolding in living cells.
In a paper in Proceedings of the National Academy of Sciences, the team describes using computer simulations to track the movement of bubbles in a wet foam. Rather than eventually staying put, the bubbles continued to meander through possible configurations. Mathematically speaking, the process mirrors how deep learning involves continually adjusting an AI system’s parameters — the information that encodes what an AI “knows” — during training.
“Foams constantly reorganize themselves,” says John C. Crocker, Professor in Chemical and Biomolecular Engineering (CBE) and the paper’s co-senior author. “It’s striking that foams and modern AI systems appear to follow the same mathematical principles. Understanding why that happens is still an open question, but it could reshape how we think about adaptive materials and even living systems.”
In some ways, foams behave mechanically like solids: they more or less hold their shape and can rebound when pressed. At a microscopic level, however, foams are “two-phase” materials, made up of bubbles suspended in a liquid or solid. Because foams are relatively easy to create and observe yet exhibit complex mechanical behavior, they have long served as model systems for studying other crowded, dynamic materials, including living cells.
To describe foams mathematically, researchers long assumed that, like the atoms in glass, bubbles behave like boulders: in a landscape of possible positions that require more or less energy to maintain, the bubbles “fall” into certain locations, as if rolling downhill. This picture neatly explains why foams can seem solid. Once a bubble settles into a low-energy position, this theory suggests, the bubble should remain in place, like a boulder resting in a valley.
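This classic picture can be made concrete with a few lines of code. The sketch below is a toy illustration, not the paper's actual simulation: a single coordinate slides downhill on a made-up one-dimensional energy curve until it reaches a local minimum and stops, just as the old theory predicted a bubble would.

```python
import numpy as np

# Toy one-dimensional energy landscape with several valleys
# (illustrative only; not the foam model used in the paper).
def energy(x):
    return np.sin(3 * x) + 0.1 * x**2

def grad(x, h=1e-5):
    # Numerical slope of the energy curve at x.
    return (energy(x + h) - energy(x - h)) / (2 * h)

x = 2.0        # starting position of the "boulder"
step = 0.01    # how far it slides per update

for _ in range(5000):
    x -= step * grad(x)  # always move downhill

print(f"settled at x = {x:.3f}, energy = {energy(x):.3f}")
# Once in a valley the slope is ~0 and the position stops changing,
# which is why the classic theory says a settled bubble should stay put.
```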
“When we actually looked at the data, the behavior of foams didn’t match what the theory predicted,” says Crocker. “We started seeing these discrepancies nearly 20 years ago, but we didn’t yet have the mathematical tools to describe what was really happening.” Resolving that mismatch required a different mathematical perspective, one capable of characterizing systems that continue to reorganize without ever settling into a single, fixed configuration.
A New Mathematical Lens
During training, modern AI systems continually adjust their parameters — the numerical values that encode what they “know.” Much like bubbles in foams were once thought to descend into metaphorical valleys, searching for the positions that require the least energy to maintain, early approaches to AI training aimed to optimize systems as tightly as possible to their training data.
Deep learning accomplishes this using optimization algorithms related to the mathematical technique of “gradient descent,” which involves repeatedly nudging a system in the direction that most improves its performance. If an AI’s error on its training data were a landscape over all possible parameter settings, these optimizers would guide the system downhill, step by step, toward configurations that reduce that error: those that best match the examples it has seen before.
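In its simplest form, gradient descent looks like the following sketch: a hypothetical two-parameter model is nudged, step by step, in the direction that reduces its error on training examples. This is a minimal illustration of the general technique, not the training code of any real AI system.

```python
import numpy as np

# Tiny illustrative "training set": points scattered near y = 2x + 1.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 50)
ys = 2 * xs + 1 + 0.05 * rng.normal(size=50)

w, b = 0.0, 0.0  # the model's two parameters
lr = 0.1         # learning rate: the size of each downhill step

for _ in range(500):
    err = w * xs + b - ys
    # Slopes of the mean squared error with respect to w and b.
    grad_w = 2 * np.mean(err * xs)
    grad_b = 2 * np.mean(err)
    # Nudge the parameters in the direction that reduces the error.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # close to the true 2 and 1
```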
Over time, researchers realized that forcing systems into the deepest possible valleys was counterproductive. Models that optimized too precisely became brittle, unable to generalize beyond the data they had already seen. “The key insight was realizing that you don’t actually want to push the system into the deepest possible valley,” says Robert Riggleman, Professor in CBE and co-senior author of the new paper. “Keeping it in flatter parts of the landscape, where lots of solutions perform similarly well, turns out to be what allows these models to generalize.”
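A rough way to see why flatness matters, again as a toy example rather than anything from the paper: compare a sharp valley and a flat valley of equal depth, and check how much the error grows when the parameter is jostled slightly, a stand-in for the mismatch between training data and new, unseen data.

```python
import numpy as np

# Two toy loss curves whose minima sit at x = 0 with the same depth.
sharp = lambda x: 50 * x**2   # narrow, steep valley
flat = lambda x: 0.5 * x**2   # broad, shallow valley

# Jostle the parameter around the minimum, mimicking the shift
# between the data a model trained on and the data it will see next.
print("shift   sharp valley   flat valley")
for s in np.linspace(-0.3, 0.3, 7):
    print(f"{s:+.2f}   {sharp(s):12.3f}   {flat(s):11.3f}")
# The sharp valley's loss blows up under small shifts, while the flat
# valley barely changes: the intuition behind why flat regions generalize.
```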
When the Penn researchers looked again at their foam data through this lens, the parallel was hard to miss. Rather than settling into “deep” positions in this metaphorical landscape, bubbles in foams also remained in motion, much like the parameters in modern AI systems, continuously reorganizing within broad, flat regions where many configurations cost nearly the same energy. The same mathematics that explains why deep learning works turned out to describe what foams had been doing all along.
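One way to picture that shared behavior, offered as a speculative sketch rather than the paper's method: add a little noise to the downhill dynamics from the earlier examples. Instead of freezing at a single point, the coordinate keeps drifting through a broad, nearly flat valley, visiting one configuration after another without ever settling.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad(x):
    # Slope of a broad, nearly flat valley with energy 0.01 * x**2.
    return 0.02 * x

x = 1.0
positions = []
for _ in range(10000):
    # A weak downhill pull plus random kicks: thermal jiggling in a
    # foam, or the randomness of stochastic training in deep learning.
    x += -0.1 * grad(x) + 0.05 * rng.normal()
    positions.append(x)

# The trajectory never freezes; it wanders through many configurations
# that all sit at nearly the same "height" in the landscape.
print(f"spread of positions over the run: {np.std(positions):.3f}")
print(f"final position: {x:.3f}")
```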
Future Directions
The new paper raises as many questions as it answers, but in a field long thought to be settled, that may be its most important contribution.
By showing that, rather than being stuck in glass-like, stable configurations, the bubbles in foam constantly meander in ways that mirror how AI models learn, the work invites researchers to consider what other complex systems this mathematical lens might help clarify.
Crocker’s team is now turning back to the system that initially motivated his interest in foams — the cytoskeleton, the microscopic scaffolding inside cells that plays a central role in supporting life. Much like the foams in this paper, the cytoskeleton must constantly rearrange itself while maintaining overall structure.
“Why the mathematics of deep learning accurately characterizes foams is a fascinating question,” says Crocker. “It hints that these tools may be useful far outside of their original context, opening the door to entirely new lines of inquiry.”
Original Article: “Slow relaxation and landscape-driven dynamics in viscous ripening foams,” Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.2518994122