Kinematic Self-Awareness: Robots Learn How to Move by Watching Themselves
Source: Columbia University
Columbia Engineering researchers have developed a method for robots to gain kinematic self-awareness — learning about their own bodies and movements simply by watching themselves on camera. This allows robots to adapt to damage, refine their actions, and operate without constant human intervention.
A robot observes its reflection in a mirror, learning its own morphology and kinematics for autonomous self-simulation. The process highlights the intersection of vision-based learning and robotics, where the robot refines its movements and predicts its spatial motion through self-observation.
(Source: Jane Nisselson / Columbia Engineering)
By watching their own motions with a camera, robots can teach themselves about the structure of their own bodies and how they move, a new study from researchers at Columbia Engineering now reveals. Equipped with this knowledge, the robots could not only plan their own actions, but also overcome damage to their bodies.
“Like humans learning to dance by watching their mirror reflection, robots now use raw video to build kinematic self-awareness,” says study lead author Yuhang Hu, a doctoral student at the Creative Machines Lab at Columbia University, directed by Hod Lipson, James and Sally Scapa Professor of Innovation and chair of the Department of Mechanical Engineering. “Our goal is a robot that understands its own body, adapts to damage, and learns new skills without constant human programming.”
Most robots first learn to move in simulations. Once a robot can move in these virtual environments, it is released into the physical world where it can continue to learn. “The better and more realistic the simulator, the easier it is for the robot to make the leap from simulation into reality,” explains Lipson.
However, creating a good simulator is an arduous process, typically requiring skilled engineers. The researchers taught a robot how to create a simulator of itself simply by watching its own motion through a camera. “This ability not only saves engineering effort, but also allows the simulation to continue and evolve with the robot as it undergoes wear, damage, and adaptation,” Lipson says.
In the new study, the researchers developed a way for robots to autonomously model their own 3D shapes and motion using a single regular 2D camera. The approach relies on three brain-mimicking AI systems known as deep neural networks, which infer 3D motion from 2D video and enable a robot to understand and adapt to its own movements. The system could also identify alterations to a robot's body, such as a bend in an arm, and help the robot adjust its motions to recover from this simulated damage.
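To make the idea concrete, the following is a minimal, hypothetical sketch in Python (PyTorch), not the authors' published code: three small neural networks cooperate, one encoding a 2D camera frame into a latent body state, one predicting that same latent state from the motor commands alone, and one decoding the state into a 3D body model that is re-projected and compared against what the camera saw. All network sizes, variable names, and the toy projection step are illustrative assumptions.

    # Illustrative sketch only -- not the authors' method or code.
    import torch
    import torch.nn as nn

    N_JOINTS, N_POINTS, LATENT = 6, 64, 32   # assumed 6-joint robot, 64-point body model

    class VideoEncoder(nn.Module):
        """Network 1: compress a 2D camera frame into a latent body-state vector."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
                nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(16 * 4 * 4, LATENT),
            )
        def forward(self, frame):          # frame: (B, 1, H, W)
            return self.net(frame)         # (B, LATENT)

    class KinematicsPredictor(nn.Module):
        """Network 2: predict the same latent state from motor commands alone."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(N_JOINTS, 64), nn.ReLU(), nn.Linear(64, LATENT))
        def forward(self, joint_angles):   # (B, N_JOINTS)
            return self.net(joint_angles)

    class BodyDecoder(nn.Module):
        """Network 3: decode a latent state into 3D positions of body points."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, N_POINTS * 3))
        def forward(self, latent):
            return self.net(latent).view(-1, N_POINTS, 3)

    def project_to_image(points_3d):
        """Toy orthographic projection: drop depth to compare with 2D observations."""
        return points_3d[..., :2]

    # Self-supervised training step: the robot moves, films itself, and the 3D body
    # predicted from its commands must re-project onto what the camera observed.
    enc, kin, dec = VideoEncoder(), KinematicsPredictor(), BodyDecoder()
    params = list(enc.parameters()) + list(kin.parameters()) + list(dec.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    frames = torch.rand(8, 1, 64, 64)          # stand-in for camera frames
    commands = torch.rand(8, N_JOINTS)         # stand-in for motor commands
    observed_2d = torch.rand(8, N_POINTS, 2)   # stand-in for tracked 2D body points

    latent_from_video = enc(frames)
    latent_from_motors = kin(commands)
    pred_3d = dec(latent_from_motors)

    loss = nn.functional.mse_loss(project_to_image(pred_3d), observed_2d) \
         + nn.functional.mse_loss(latent_from_motors, latent_from_video)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"training loss: {loss.item():.4f}")

In a scheme like this, once training is done only the command-driven path is needed: the robot can predict where its body will be before it moves, and a persistent mismatch between that prediction and what the camera later sees would flag damage the self-model can then be retrained to absorb.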
Such adaptability might prove useful in a variety of real-world applications. For example, “imagine a robot vacuum or a personal assistant bot that notices its arm is bent after bumping into furniture,” Hu says. “Instead of breaking down or needing repair, it watches itself, adjusts how it moves, and keeps working. This could make home robots more reliable — no constant reprogramming required.”
Another scenario might involve a robot arm getting knocked out of alignment at a car factory. “Instead of halting production, it could watch itself, tweak its movements, and get back to welding — cutting downtime and costs,” Hu says. “This adaptability could make manufacturing more resilient.”
As we hand over more critical functions to robots, from manufacturing to medical care, we need these robots to be more resilient. “We humans cannot afford to constantly baby these robots, repair broken parts and adjust performance. Robots need to learn to take care of themselves, if they are going to become truly useful,” says Lipson. “That’s why self-modeling is so important.”
The ability demonstrated in this study is the latest in a series of projects the Columbia team has published over the past two decades, in which robots have steadily become better at modeling themselves using cameras and other sensors.
In 2006, the research team’s robots were able to use observations to create only simple, stick-figure-like simulations of themselves. About a decade ago, robots began creating higher-fidelity models using multiple cameras. In this study, the robot was able to create a comprehensive kinematic model of itself using just a short video clip from a single regular camera, akin to looking in the mirror. The researchers call this newfound ability “kinematic self-awareness.”
“We humans are intuitively aware of our body; we can imagine ourselves in the future and visualize the consequences of our actions well before we perform those actions in reality,” explains Lipson. “Ultimately, we would like to imbue robots with a similar ability to imagine themselves, because once you can imagine yourself in the future, there is no limit to what you can do.”
Date: 08.12.2025
Original article: Teaching robots to build simulations of themselves; Nature Machine Intelligence; DOI: 10.1038/s42256-025-01006-w