
Self-driving cars learn to share road knowledge through digital word of mouth

A research team led by NYU Tandon has developed a method for self-driving cars to share their knowledge of road conditions indirectly, making it possible for each vehicle to learn from the experiences of others even if they rarely meet on the road.

The study, presented in a paper at the Association for the Advancement of Artificial Intelligence (AAAI) conference on February 27, 2025, tackles a persistent challenge in artificial intelligence: helping vehicles learn from one another while keeping their data private. Today, vehicles typically share what they have learned only during brief direct encounters, limiting how quickly they can adapt to new conditions.

“Think of it as creating a network of shared experiences for self-driving cars,” said Yong Liu, who led the work with doctoral student Xiaoyu Wang. Liu is a professor in the Department of Electrical and Computer Engineering at NYU Tandon and a member of its Center for Advanced Technology in Telecommunications and Distributed Information Systems and of NYU WIRELESS.

“A car that has only driven in Manhattan could now learn about road conditions in Brooklyn from other vehicles, even if it never drives there itself. This would make every car smarter and better prepared for situations it has never personally encountered,” Liu said.

The researchers call their new approach Cached Decentralized Federated Learning (Cached-DFL). Unlike traditional federated learning, which relies on a central server to coordinate updates, Cached-DFL lets vehicles train their own AI models locally and share those models directly with one another.
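In broad strokes, the training loop is easy to picture. The sketch below is a minimal, hypothetical Python illustration of serverless peer-to-peer learning, assuming a toy linear model and a FedAvg-style parameter average; the class and method names are ours, not the authors' released code.

```python
# Minimal sketch of decentralized federated learning without a server
# (illustrative assumptions throughout; not the authors' implementation).
import numpy as np

class VehicleModel:
    def __init__(self, dim: int):
        # Each vehicle owns its model parameters; there is no central server.
        self.weights = np.zeros(dim)

    def local_step(self, x: np.ndarray, y: float, lr: float = 0.01) -> None:
        # One SGD step on locally collected driving data (a linear model
        # stands in for the real neural network).
        grad = (self.weights @ x - y) * x
        self.weights -= lr * grad

    def merge(self, peer_weights: np.ndarray) -> None:
        # FedAvg-style pairwise merge when two vehicles meet: average
        # parameters instead of ever exchanging raw sensor data.
        self.weights = 0.5 * (self.weights + peer_weights)
```

Only model parameters cross the air gap in a scheme like this, which is what keeps each vehicle's raw sensor data private.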

When vehicles come within 100 meters of each other, they use high-speed device-to-device communication to exchange trained models rather than raw data. Crucially, they can also pass along models received in previous encounters, allowing information to spread far beyond immediate interactions. Each car maintains a cache of up to 10 external models and updates its own AI every 120 seconds.

To prevent outdated information from degrading performance, the system automatically discards older models based on a staleness threshold, ensuring that vehicles prioritize recent, relevant knowledge.
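A minimal sketch of that caching behavior, under assumed data structures: the 100-meter range and 10-model capacity come from the description above, while the concrete staleness limit, timestamps, and function names are illustrative guesses. Note how `exchange` also forwards models picked up in earlier encounters, which is what lets knowledge relay across the fleet.

```python
# Hypothetical model cache with staleness-based eviction (a sketch,
# not the authors' released code).
CACHE_CAPACITY = 10        # up to 10 external models per vehicle
CONTACT_RANGE_M = 100.0    # device-to-device exchange radius
STALENESS_LIMIT_S = 600.0  # assumed staleness threshold, in seconds

class ModelCache:
    def __init__(self):
        # Maps a source vehicle's ID to (model weights, creation time).
        self.entries = {}

    def add(self, source_id, weights, timestamp):
        # Keep only the newest model per source; if the cache overflows,
        # evict the stalest entry.
        current = self.entries.get(source_id)
        if current is None or timestamp > current[1]:
            self.entries[source_id] = (weights, timestamp)
        if len(self.entries) > CACHE_CAPACITY:
            stalest = min(self.entries, key=lambda k: self.entries[k][1])
            del self.entries[stalest]

    def evict_stale(self, now):
        # Drop models past the staleness threshold so outdated knowledge
        # never feeds the periodic local update.
        self.entries = {k: v for k, v in self.entries.items()
                        if now - v[1] <= STALENESS_LIMIT_S}

def exchange(cache_a, cache_b, distance_m):
    # Within range, vehicles swap cached models, including models they
    # merely relayed from earlier encounters.
    if distance_m <= CONTACT_RANGE_M:
        for src, (w, ts) in list(cache_b.entries.items()):
            cache_a.add(src, w, ts)
        for src, (w, ts) in list(cache_a.entries.items()):
            cache_b.add(src, w, ts)
```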

The researchers tested the system in computer simulations that used Manhattan’s street layout as a template. In their experiments, virtual vehicles moved along the city grid at about 14 meters per second and turned at intersections probabilistically, with a 50% chance of continuing straight and equal chances of turning onto each of the other available roads.
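That mobility model is simple enough to sketch directly. The 14 m/s speed and the 50% straight-ahead rule follow the description above; the grid size, wrap-around edges, and block length are illustrative assumptions.

```python
# Toy Manhattan-grid random walk (assumed grid size and block length).
import random

SPEED_MPS = 14.0  # simulated vehicle speed
BLOCK_M = 80.0    # assumed block length, in meters

def next_heading(heading):
    # At each intersection: 50% chance of continuing straight, otherwise
    # an equal chance of each other available direction.
    if random.random() < 0.5:
        return heading
    return random.choice([h for h in ((1, 0), (-1, 0), (0, 1), (0, -1))
                          if h != heading])

def simulate(steps, grid=(20, 20)):
    # Position is an intersection index; one step traverses one block,
    # taking BLOCK_M / SPEED_MPS seconds of simulated time.
    x, y = grid[0] // 2, grid[1] // 2
    heading = (1, 0)
    trace = [(x, y)]
    for _ in range(steps):
        heading = next_heading(heading)
        x = (x + heading[0]) % grid[0]
        y = (y + heading[1]) % grid[1]
        trace.append((x, y))
    return trace
```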

Unlike traditional decentralized learning methods, which suffer when vehicles meet infrequently, Cached-DFL lets models propagate indirectly through the network, much as messages spread in delay-tolerant networks, which cope with intermittent connectivity by storing and forwarding data until a link becomes available. Because vehicles serve as relays, knowledge about particular conditions can reach a car that has never experienced them firsthand.

“It’s a bit like how information spreads in social networks,” Liu explained. “Devices can now pass along knowledge from others they’ve met, so a car can learn from vehicles it has never directly encountered.”

This multi-hop mechanism overcomes a key limitation of earlier model-sharing methods, which rely on immediate one-to-one contact. By letting vehicles act as relays, Cached-DFL spreads learning across the fleet far more effectively than if each vehicle were limited to its direct interactions.

The technique lets connected vehicles learn about road conditions, signals, and obstacles while keeping their data private. It is especially useful in cities, where cars face diverse conditions but rarely stay near one another long enough for traditional learning methods to work.

The research shows that vehicle speed, cache size, and model staleness all affect learning efficiency. Faster speeds and more frequent communication improve results, while outdated models reduce accuracy. A group-based caching strategy improves learning further by prioritizing diverse models from different areas rather than simply the most recent ones.
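One way to read the group-based result is as a cache-admission rule that reserves slots per region instead of keeping a single freshest-first pool. The grouping key and per-group quota below are purely hypothetical.

```python
# Hypothetical group-based cache: a few slots per region keep diverse
# knowledge represented rather than letting the newest models crowd it out.
from collections import defaultdict

SLOTS_PER_GROUP = 2  # assumed per-region quota

class GroupedCache:
    def __init__(self):
        self.buckets = defaultdict(list)  # region -> [(timestamp, weights)]

    def add(self, region, weights, timestamp):
        bucket = self.buckets[region]
        bucket.append((timestamp, weights))
        # Within a region, still prefer fresher models.
        bucket.sort(key=lambda entry: entry[0], reverse=True)
        del bucket[SLOTS_PER_GROUP:]
```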

As AI migrates from centralized servers to edge devices, Cached-DFL offers a secure and efficient way for self-driving cars to learn collectively, making them smarter and more adaptive. The approach could also be applied to other networks of intelligent mobile agents, such as drones, robots, and satellites, enabling robust decentralized learning and a form of swarm intelligence.

The researchers have made their code publicly available; further details appear in their technical report. In addition to Liu and Wang, the team included Guojun Xiong and Jian Li of Stony Brook University and Houwei Cao of the New York Institute of Technology.

The research was supported by several National Science Foundation grants, including one through the Resilient & Intelligent NextG Systems (RINGS) program, which draws funding from the Department of Defense and the National Institute of Standards and Technology, as well as by NYU computing resources.

