Bee Flight Behaviour Holds the Key to Intelligent AI Systems

Bees use their flight movements to help their brains learn and recognize visual patterns, and according to research from the University of Sheffield, this discovery could reshape the way next-generation artificial intelligence is developed.
The discovery reveals how even tiny insect brains solve complex visual tasks with remarkably few neurons, challenging assumptions about the relationship between intelligence and computing power.
The research team built a computational model of the bee brain to understand how flight movements create clean neural signals, allowing bees to efficiently identify features in their environment. This biological insight suggests that future robots could become smarter by using movement to gather information rather than relying on massive computing networks.
Movement drives neural precision
Professor James Marshall, Director of the Centre for Machine Intelligence at the University of Sheffield, emphasized the study's implications: “In this study we have successfully demonstrated that even the tiniest of brains can leverage movement to perceive and understand the world around them. This shows us that a small, efficient system can perform computations vastly more complex than we previously thought possible.”
The model shows how bees generate distinctive patterns of electrical activity in the brain through their scanning movements during flight. These movements produce sparse, decorrelated neural responses, in which only a few neurons fire for specific visual features: an efficient coding strategy that conserves energy and processing power.
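The idea can be illustrated with a toy sketch (not the authors' published model): scanning a pattern turns a static image into a time series, and a simple threshold keeps the resulting code sparse, with only a few units firing at each step. The plus-sign pattern, 16-unit bank, and column-by-column scan below are all illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical 8x8 binary pattern: a plus sign on a blank field
pattern = [[1 if r in (3, 4) or c in (3, 4) else 0 for c in range(8)]
           for r in range(8)]

# Hypothetical bank of 16 units, each with a random receptive field
# over the 8-pixel column currently under the scan
weights = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]

sparse_code = []
for t in range(8):                               # scan column by column
    column = [pattern[r][t] for r in range(8)]
    drive = [sum(w * x for w, x in zip(unit, column)) for unit in weights]
    threshold = sorted(drive)[-4]                # keep only the 4 strongest
    sparse_code.append([1 if d >= threshold else 0 for d in drive])

# Each scan step activates only 4 of the 16 units, giving a sparse,
# energy-efficient code for the whole movement sequence.
print([sum(step) for step in sparse_code])       # → [4, 4, 4, 4, 4, 4, 4, 4]
```

The point of the sketch is that sparseness here comes from the sampling strategy plus a competitive threshold, not from more computing hardware.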
The main findings of the study include:
- Active vision advantage: bees scanning the lower half of a pattern achieved 96-98% accuracy, while fixed viewing achieved only about 60%
- Minimal neural requirements: just 16 lobula neurons were enough to solve complex pattern-discrimination tasks
- Speed optimization: natural scanning speeds outperformed faster movements, suggesting evolution has tuned scanning for accuracy
- Face recognition capability: the model successfully distinguished human faces, matching real bee performance
Brain network adapts through experience
The study reveals how exposure to natural images during flight automatically shapes neural connectivity in the bees’ visual system. Lead researcher Dr. Hadi Maboudi explains the learning process: “Our model of the bee brain shows that its neural circuits are optimized to process visual information not in isolation, but through active interaction with the animal’s flight movements in its natural environment.”
Through non-associative learning (neural adaptation that requires no reinforcement), the model’s network gradually tunes itself to particular orientations and movements. This produces direction-selective neurons that respond most strongly to specific visual features while largely ignoring irrelevant stimuli.
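A minimal sketch of how selectivity can emerge without any reward signal (illustrative only, not the study's actual circuit): a unit repeatedly exposed to one motion direction strengthens the matching input weights through a Hebbian rule, while weight normalization makes the inputs compete. The eight-direction tuning and learning rate below are assumptions.

```python
import random

random.seed(1)

DIRECTIONS = 8                                   # coarse motion directions
weights = [random.random() for _ in range(DIRECTIONS)]

def normalize(w):
    total = sum(w)
    return [x / total for x in w]                # keep total drive constant

weights = normalize(weights)

def stimulus(direction):
    # Hypothetical tuning: input is strongest at the stimulated direction
    return [1.0 if d == direction else 0.1 for d in range(DIRECTIONS)]

LEARNING_RATE = 0.05
for _ in range(200):                             # repeated exposure, no reward
    x = stimulus(2)                              # the environment favours dir 2
    response = sum(w * xi for w, xi in zip(weights, x))
    # Hebbian rule: inputs active together with the response grow stronger
    weights = [w + LEARNING_RATE * response * xi
               for w, xi in zip(weights, x)]
    weights = normalize(weights)                 # competition between inputs

# After exposure the unit responds most strongly to direction 2 and only
# weakly to the others: a direction-selective neuron without reinforcement.
print(max(range(DIRECTIONS), key=lambda d: weights[d]))  # → 2
```

The normalization step is what keeps mere repetition from inflating every weight: the favoured direction wins at the expense of the others, mirroring the “largest response to preferred features, suppressed response to irrelevant ones” behaviour described above.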
The researchers validated their computational model using the same visual challenges faced by real bees. In experiments distinguishing plus signs from multiplication signs, the model performed significantly better when it mimicked real bees’ strategy of scanning a specific region of the pattern.
Impact on robots and AI
Professor Lars Chittka of Queen Mary University of London highlighted the broader significance: “Scientists have long been fascinated by the question of whether brain size predicts intelligence in animals. Here we determined the minimum number of neurons required for difficult visual discrimination tasks and found that the number is remarkably small, even for complex tasks such as recognizing human faces.”
The results suggest that intelligence emerges from how brain, body and environment work together, rather than from raw computational power. This principle points toward more efficient robotic systems that actively shape their sensory input through movement instead of passively processing large data sets.
Professor Mikko Juusola notes: “This work strengthens the growing body of evidence that animals do not passively receive information; they actively shape it. Our new model extends this principle to higher-order visual processing in bees, revealing how behaviour-driven scanning creates compressed, learnable neural codes.”
The study points the way toward biologically inspired AI systems that could greatly reduce computing demands while improving performance in real-world applications such as autonomous navigation, robot vision and adaptive learning. By drawing on evolution’s solutions for efficient information processing, these findings could drive advances in autonomous vehicles and environmental robotics.