CONNECTING CREATES POSSIBILITIES.
Machine Life and Intelligence Research Center, Guangzhou University, China
School of Computing and Mathematical Sciences, University of Leicester, Leicester, United Kingdom
Seeking in the wilderness of human knowledge
Sharing ideas with the world
Xuelong Sun, Shigang Yue and Michael Mangan, 2020
We present a bio-constrained computational model that incorporates key elements of insect visual navigation, including frequency-encoded image processing for visual homing and route following. Our model employs a ring attractor mechanism for optimal integration of navigational cues. Additionally, we investigate the roles of the central complex (CX) and the mushroom bodies (MB) in insect navigation.
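As a toy illustration of the frequency-encoding idea (my simplified sketch, not the paper's actual pipeline): for a one-dimensional panoramic intensity profile, turning on the spot is a circular shift, so the Fourier amplitude spectrum is a rotation-tolerant signature of place that can support visual homing.

```python
# Simplified sketch of frequency-encoded view matching (illustrative only;
# the panorama here is random data, not the model's real visual input).
import numpy as np

def amplitude_descriptor(pano, k=8):
    """First k Fourier amplitude coefficients of a 1-D panoramic profile.
    Rotating the panorama is a circular shift, which leaves amplitudes unchanged."""
    return np.abs(np.fft.rfft(pano))[:k]

rng = np.random.default_rng(0)
home = rng.random(360)          # panoramic brightness profile at the home location
rotated = np.roll(home, 90)     # same place, agent turned by 90 degrees
elsewhere = rng.random(360)     # a different location

d_same = np.linalg.norm(amplitude_descriptor(home) - amplitude_descriptor(rotated))
d_diff = np.linalg.norm(amplitude_descriptor(home) - amplitude_descriptor(elsewhere))
print(d_same, d_diff)           # d_same is ~0: the descriptor ignores rotation
```

Because the descriptor is unchanged by rotation but differs between places, comparing descriptors gives a heading-independent measure of how "home-like" the current view is.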
Xuelong Sun, Shigang Yue and Michael Mangan, 2021
Our findings suggest that the model proposed in my previous eLife paper can be applied to elucidate the olfactory navigation abilities of insects. This further supports the notion that the central complex (CX) serves as the navigation center in insects, highlighting the presence of shared mechanisms across sensory domains and even among different insect species.
Xuelong Sun, Qinbing Fu, Jigen Peng and Shigang Yue, 2023
We have developed a navigation model solely inspired by insects, enabling autonomous navigation in a three-dimensional (3D) environment encompassing both static and dynamic obstacles.
Tian Liu, Xuelong Sun, Cheng Hu, Qinbing Fu and Shigang Yue, 2021
We have successfully implemented a dynamically changeable wall in the arena, a significant advancement for our research. This addition enables more versatile and realistic experiments: the ability to reconfigure the wall opens new possibilities for studying diverse scenarios and investigating the impact of dynamic obstacles on behavior.
Xuelong Sun, Cheng Hu, Tian Liu, Shigang Yue, Jigen Peng and Qinbing Fu, 2023
We have designed and implemented a prey-predator interaction scenario incorporating visual and olfactory cues, not only in computer simulation but also in a real multi-robot system. The emergent spatial-temporal dynamics we observed demonstrate that the study of prey-predator interactions can transition successfully from virtual simulation to the physical world. This highlights the potential of multi-robot approaches for studying prey-predator interactions and lays the groundwork for future investigations involving multi-modal sensory processing under real-world constraints.
Xuelong Sun, Michael Mangan, Jigen Peng and Shigang Yue, 2025
We have developed I2Bot, an open-source platform based on Webots for simulating embodied insect navigation behaviours. By integrating gait controllers and computational models into I2Bot, we have implemented classical embodied navigation behaviours and revealed some fundamental navigation principles. By open-sourcing I2Bot, we aim to accelerate the understanding of insect intelligence and foster advances in the development of autonomous robotic systems.
Xuelong Sun, Tian Liu, Cheng Hu, Qinbing Fu and Shigang Yue, 2019
We have developed an optically emulated pheromone communication platform inspired by the multifaceted pheromone communication observed in ants. This innovative platform not only replicates the pheromone-based communication system but also enables the localization of multiple robots using vision-based techniques.
Xuelong Sun, Shigang Yue and Michael Mangan, 2018
This research reveals that the classical ring-attractor network can achieve optimal integration of multiple directional cues without requiring neural plasticity. This finding serves as the foundation for my forthcoming model of insect navigation.
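A minimal sketch of the idea (my simplified illustration, not the paper's network): two directional cues each drive a cosine-tuned bump of activity on a ring of neurons, with bump height scaling with cue reliability. A population-vector readout of the summed activity lands on the reliability-weighted circular mean, i.e. the maximum-likelihood combined estimate, with no synaptic plasticity involved.

```python
# Sketch: reliability-weighted integration of two directional cues on a
# neural ring (illustrative; bump shape and readout are simplifications).
import numpy as np

N = 64                                                 # number of ring neurons
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred directions

def bump(theta, kappa):
    """Half-rectified cosine bump centred on theta, scaled by cue reliability."""
    return kappa * np.maximum(0.0, np.cos(prefs - theta))

def decode(activity):
    """Population-vector readout: the direction encoded by the ring."""
    return np.angle(np.sum(activity * np.exp(1j * prefs)))

# Two conflicting cues, e.g. path integration vs. a visual compass.
cue1, k1 = np.deg2rad(10.0), 2.0    # more reliable cue
cue2, k2 = np.deg2rad(70.0), 1.0    # less reliable cue

combined = decode(bump(cue1, k1) + bump(cue2, k2))

# Maximum-likelihood (reliability-weighted circular mean) reference.
ml = np.angle(k1 * np.exp(1j * cue1) + k2 * np.exp(1j * cue2))
print(np.rad2deg(combined), np.rad2deg(ml))
```

The readout settles between the two cue directions, biased toward the more reliable cue, and matches the weighted circular mean up to discretisation error on the ring.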
Tian Liu, Xuelong Sun, Cheng Hu, Qinbing Fu and Shigang Yue, 2021
We expand upon our previous work presented at ICARM2019 by conducting a thorough analysis and investigation of our platform. Building on this foundation, we systematically examine and organize the results, offering a deeper understanding of the platform's capabilities and performance.
Qinbing Fu, Xuelong Sun, Tian Liu, Cheng Hu and Shigang Yue, 2021
We employed the ColCOSP pheromone communication system to simulate city traffic, creating a unique and cost-effective robotic approach to evaluate online visual systems in dynamic scenes. This methodology enabled us to investigate the LGMD model in a novel and manageable manner.
Jialang Hong, Xuelong Sun, Jigen Peng and Qinbing Fu, 2024
We embedded a probabilistic module into LGMD models and observed markedly improved performance. This study showcases a straightforward yet effective approach to enhancing collision perception in noisy environments.
Hao Chen, Xuelong Sun, Cheng Hu, Hongxin Wang and Jigen Peng, 2024
We integrated visual information over spatiotemporal windows regulated by the frequency parameters of Haar wavelets, effectively discriminating small target motion from the disturbance of random noise caused by dim light.
Yani Chen, Lin Zhang, Hao Chen, Xuelong Sun and Jigen Peng, 2024
We built a unified network based on a ring attractor with simple synaptic computation that can account for many neural dynamics observed in animal brains, especially cue integration in the insect head-direction system.
Cherishing every moment in my life
Learning is one of the happiest things in the world
Python, C, HTML/CSS/JS
English, Chinese, Japanese
Blender, Webots, PR, Office, Inkscape