Dr. Zheng Hong (George) Zhu is a leading authority in space robotics and computational control, serving as Professor & Tier 1 York Research Chair in Space Robotics and Artificial Intelligence in the Department of Mechanical Engineering at York University in Canada. His research spans spacecraft dynamics and control, tethered space systems, autonomous space robotics, computational control methodologies, and in-space additive manufacturing. He has published over 239 peer-reviewed journal papers and 186 conference papers, establishing an international reputation in astronautics and mechatronics.
His achievements have been recognized with numerous prestigious awards, including the 2024 Solid Mechanics Medal and 2021 Robert W. Angus Medal (CSME), the 2024 Gold Medal and 2019 Engineering R&D Medal of Ontario Professional Engineers Awards, and the 2021 York University President’s Research Excellence Award.
Abstract:
Autonomous robotic active debris removal is critical for ensuring the long-term sustainability of space activities. This research introduces a novel framework that employs a swarm of small, resource-limited spacecraft (e.g., CubeSats, Nanosats) instead of a single complex robotic platform for the capture and deorbit of tumbling, noncooperative debris. Inspired by the collective behaviors of ant colonies and bird flocks, the approach leverages a decentralized, behavior-based control architecture that enables spacecraft swarms to self-organize, explore, and collaboratively encapsulate targets. Individual agents execute aggregation and flocking behaviors to converge on the debris, while anti-flocking mechanisms optimize distribution for comprehensive surface coverage and shape capture. Limited onboard memory and local inter-agent communication facilitate synchronized capture actions through shared observations of debris landmarks. The result is a leaderless, interchangeable multi-agent system that enhances robustness, scalability, and mission resilience, while reducing cost and system complexity. This swarm-based paradigm advances the state of the art in active debris removal and provides a scalable, efficient pathway toward sustainable space operations.
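As a rough illustration of the behavior-based control architecture described above, the sketch below implements a simplified 2-D toy model combining the three named behaviors: aggregation toward the debris, flocking alignment with neighbors inside a limited communication range, and anti-flocking repulsion that spreads agents over the target for surface coverage. The gains, communication radius, and 2-D setting are assumptions for illustration only, not the actual flight control laws.

```python
import numpy as np

def swarm_velocity_update(pos, vel, target, comm_radius=5.0,
                          k_aggregate=0.4, k_align=0.3, k_spread=0.6):
    """Toy 2-D behavior-based update for a debris-capture swarm.

    pos, vel : (N, 2) arrays of agent positions and velocities
    target   : (2,) position of the debris centroid
    The three terms mirror the behaviors named in the abstract:
    aggregation (converge on the debris), flocking (match neighbors'
    velocities), and anti-flocking (repel close neighbors to spread
    out over the target). All gains are illustrative.
    """
    new_vel = np.zeros_like(vel)
    for i in range(len(pos)):
        # Aggregation: steer toward the debris centroid.
        aggregate = target - pos[i]

        # Local neighborhood defined by the limited communication range.
        dists = np.linalg.norm(pos - pos[i], axis=1)
        neighbors = (dists > 0) & (dists < comm_radius)

        align = np.zeros(2)
        spread = np.zeros(2)
        if neighbors.any():
            # Flocking: align with the average neighbor velocity.
            align = vel[neighbors].mean(axis=0) - vel[i]
            # Anti-flocking: inverse-square repulsion for coverage.
            offsets = pos[i] - pos[neighbors]
            spread = (offsets / dists[neighbors][:, None]**2).sum(axis=0)

        new_vel[i] = vel[i] + (k_aggregate * aggregate
                               + k_align * align
                               + k_spread * spread)
    return new_vel
```

Because every agent runs the same local rule, the swarm remains leaderless and any agent can be replaced without reconfiguration, which is the robustness argument made in the abstract.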
Dominik Dold is a Marie Curie Fellow at the University of Vienna. He completed his PhD at Heidelberg University and the University of Bern, working on neuro-inspired AI, neuromorphic computing, and computational neuroscience. Following his PhD, he was a Researcher in Residence at the Siemens AI Lab in Munich, focusing on bridging graph learning, neuromorphic computing, and cybersecurity applications. Before coming to Vienna, he worked as a Research Fellow in AI at the European Space Agency's Advanced Concepts Team (ACT). In his research, he investigates how functionality emerges in simple, energy-efficient, and adaptive systems, be they biological or artificial neural networks, materials, or robot swarms. He is one of the scientific chairs of the first SPAICE conference, and his work won first prize at the 2019 International Collegiate Competition for Brain-Inspired Computing (ICCBC 2019) at Tsinghua University.
Keynote Title: From Brains to Space: The Next Generation of AI Architectures
Abstract:
Onboard AI has the potential to revolutionise spacecraft autonomy. However, deploying such systems in space requires extreme resource efficiency and resilience to radiation damage. In this talk, I will introduce an emerging technology highly relevant to these challenges: neuromorphic computing. Neuromorphic chips mimic the brain’s architecture, performing decentralised computations through all-or-nothing events known as spikes. Their promise of ultra-low energy consumption makes them compelling candidates for onboard applications. I will present recent results on prototyping spiking neural networks (SNNs) on analogue neuromorphic hardware, estimating their energy efficiency compared to traditional neural networks in simulation, and investigating memristive devices for onboard tasks such as asteroid geodesy and guidance & control. Finally, I will introduce a novel analytical framework for studying the capabilities of SNNs, paving the way for theory-grounded comparisons between spiking and artificial neural networks.
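For readers unfamiliar with spiking neurons, the minimal sketch below simulates a single leaky integrate-and-fire (LIF) neuron, the standard textbook abstraction behind the all-or-nothing spikes mentioned above. It runs in plain NumPy, is not tied to any particular analogue neuromorphic chip or memristive device, and all constants are illustrative.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward rest, integrates its input, and emits an all-or-nothing
    spike whenever it crosses the threshold.

    input_current : 1-D array of input drive per time step
    Returns the membrane trace and a binary spike train.
    """
    v = v_rest
    v_trace, spikes = [], []
    for i_t in input_current:
        # Leaky integration of the input current.
        v += dt / tau * (v_rest - v + i_t)
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(1)
            v = v_reset            # reset after the spike
        else:
            spikes.append(0)
        v_trace.append(v)
    return np.array(v_trace), np.array(spikes)

# Example: a constant supra-threshold drive produces a regular spike train.
v, s = lif_neuron(np.full(200, 1.5))
print("spike count:", s.sum())
```

In an SNN, many such units are wired together so that each spike drives the membrane potentials of downstream neurons, which is what makes the computation event-driven and sparse.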
Alexis Nicolas Schlomer
Abstract:
Approximate Nearest-Neighbor Search (ANNS) efficiently finds data items whose embeddings are close to that of a given query in a high-dimensional space, aiming to balance accuracy with speed. Used in recommendation systems, image and video retrieval, natural language processing, and retrieval-augmented generation (RAG), ANNS algorithms such as IVFPQ, HNSW graphs, Annoy, and MRPT use graph, tree, clustering, and quantization techniques to navigate large vector spaces. Despite this progress, ANNS systems spend up to 99% of query time computing distances in their final refinement phase. In this paper, we present PANORAMA, a machine learning-driven approach that tackles the ANNS verification bottleneck through data-adaptive learned orthogonal transforms that enable the accretive refinement of distance bounds. Such transforms compact over 90% of the signal energy into the first half of the dimensions, enabling early candidate pruning with partial distance computations. We integrate PANORAMA into state-of-the-art ANNS methods, namely IVFPQ/Flat, HNSW, MRPT, and Annoy, without index modification, using level-major memory layouts, SIMD-vectorized partial distance computations, and cache-aware access patterns. Experiments across diverse datasets, from image-based CIFAR-10 and GIST to modern embedding spaces including OpenAI’s Ada 2 and Large 3, demonstrate that PANORAMA affords a 2-30x end-to-end speedup with no recall loss.
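To make the refinement bottleneck and the bound-based pruning concrete, the sketch below re-ranks a candidate set with block-wise partial distances after an orthogonal, energy-compacting rotation. A plain PCA rotation stands in for PANORAMA's data-adaptive learned transforms, and the scalar loop stands in for its SIMD-vectorized, cache-aware implementation; both substitutions are ours, for illustration only.

```python
import numpy as np

def pca_rotation(data):
    """Orthogonal rotation that concentrates most of the variance in the
    leading dimensions - a stand-in for PANORAMA's learned transforms."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt.T  # columns are orthonormal directions, sorted by energy

def refine_with_pruning(query, candidates, rotation, k=10, block=16):
    """Exact re-ranking of ANN candidates with early termination.

    Squared distances are accumulated block-by-block in the rotated
    space; the partial sum is a lower bound on the true distance, so a
    candidate is dropped as soon as it exceeds the current k-th best.
    """
    q = query @ rotation
    c = candidates @ rotation
    n, d = c.shape
    best = []                      # list of (distance, index), kept sorted
    for i in range(n):
        partial, pruned = 0.0, False
        for start in range(0, d, block):
            diff = c[i, start:start + block] - q[start:start + block]
            partial += float(diff @ diff)
            # Prune once the lower bound exceeds the k-th best so far.
            if len(best) == k and partial > best[-1][0]:
                pruned = True
                break
        if not pruned:
            best.append((partial, i))
            best.sort()
            best = best[:k]
    return [idx for _, idx in best]
```

Because the partial squared distance only grows as more dimensions are added, it is a valid lower bound and pruning never changes the exact top-k result; the speedup comes from how early the bound exceeds the current k-th best, which is precisely what an energy-compacting transform improves.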
Hongwei Yang
Dr. Hongwei Yang received his Ph.D. from Tsinghua University. Since 2017, he has been with Nanjing University of Aeronautics and Astronautics, where he is now a Professor. His research interests include astrodynamics, space mission design, and spacecraft GNC. He has published over 60 peer-reviewed journal papers. He is a recipient of the Young Elite Scientists Sponsorship Program by CAST, the Young Scientists Fund and General Program by NSFC, and the Excellent Young Scientists Fund of the Natural Science Foundation of Jiangsu Province. He won the Excellent Doctoral Dissertation Award of Tsinghua University, 2nd place in the 8th edition of GTOC, and 1st place in the 9th edition of CTOC. He served as a vice president in the first presidium of the Young Scientists' Club of the Chinese Society of Astronautics. He is a senior member of AIAA.
Abstract:
The gravity-assist maneuver is a key technology for deep space exploration that can significantly reduce fuel consumption and expand the reachable domain of a spacecraft. To obtain efficient and innovative trajectories, increasingly complex dynamical models are required in preliminary design, which makes gravity-assist trajectory design very challenging: the multi-body environments, the multiple deep space maneuvers (DSMs), and the multiple uncertainties respectively introduce initial-value sensitivity, high dimensionality of the optimization variables, and strong nonlinearity. To address these issues, three kinds of trajectory design methods leveraging machine learning are proposed. First, deep neural networks (DNNs) are trained to model the three-body flyby map, and a new Tisserand-Lambert problem is proposed for solving the two-point boundary problem of gravity-assist trajectories using only a one-dimensional algebraic root search. Second, DNN approximators are proposed to model the optimal DSM trajectories, which significantly mitigates the high dimensionality of the optimization variables in the global optimization of multiple-gravity-assist DSM trajectories. Third, a reinforcement-learning-based robust trajectory design method is proposed for low-thrust gravity-assist trajectories with the help of a trajectory segmentation strategy and a reachability constraint on the gravity-assist body. The effectiveness of the proposed methods is validated in interplanetary transfer and planetary-system exploration scenarios.
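As a toy illustration of the first idea, a DNN surrogate of an expensive dynamics map combined with a one-dimensional root search, the sketch below fits scikit-learn's MLPRegressor to a synthetic boundary-condition residual and then locates its zero with Brent's method. The residual function, network size, and search bracket are all assumptions and have no relation to the actual three-body flyby map or the Tisserand-Lambert formulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import brentq

# Hypothetical stand-in for an expensive flyby map: it takes a single
# design parameter (e.g. a flyby phase angle) to a boundary-condition
# residual. In the actual work this map comes from three-body dynamics;
# here a smooth synthetic function illustrates the workflow only.
def expensive_flyby_residual(x):
    return np.sin(2.0 * x) - 0.4 * x + 0.3

# 1) Train a small DNN surrogate on sampled evaluations of the map.
x_train = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)
y_train = expensive_flyby_residual(x_train).ravel()
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), solver="lbfgs",
                         max_iter=5000, random_state=0).fit(x_train, y_train)

# 2) Solve the boundary condition with a one-dimensional root search on
#    the cheap surrogate instead of propagating the full dynamics.
f = lambda x: float(surrogate.predict(np.array([[x]]))[0])
x_star = brentq(f, -2.0, 2.0)
print("surrogate root:", x_star,
      "| true residual there:", expensive_flyby_residual(x_star))
```

The point is the workflow: once the expensive map is replaced by a cheap surrogate, the two-point boundary condition reduces to a scalar root-finding problem that can be solved cheaply and robustly.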