This theme is about the role of (deep) learning in robotics, with a focus on models and data. Despite delivering tremendous advances in perceptual tasks and in a relatively narrow set of motor tasks, deep learning has yet to provide leverage on broader robotics problems. Is this because of a lack of data, or is there more to the story? If we need more data, should we explore how reusable datasets can play a more central role in robotics research? Are algorithmic changes needed to support the reuse of data? Or is robotics going to undergo a paradigm shift (like NLP and Computer Vision) and rely on large pretrained models that are fine-tuned to various tasks? People often ask about the (seeming) tension between models and data. Is this even a reasonable question, given that all models are built from data to begin with, and the latest trend in learning-based systems is really a trend toward better models? And if “more data” is not the answer, what will the role of deep learning be in robotics? What do we need to do to replicate the progress deep learning brought to other disciplines?
What do we want from robotic systems that learn? Desirable properties that often come up include learning that is incremental and compositional. Some would argue, developmental. What about causal discovery and inference? Are they important in robotics? Is it even reasonable to field operational robots that alter their models during deployments, beyond adjusting the parameters in well-understood and interpretable ways? Can we assess possible consequences of major behavioral adjustments on the fly, and can we ensure that they are non-threatening? What knowledge should be baked into robots and what should be learned? How should we think about memory?
What should future robots do? Is there a world of robotic applications beyond ‘I want a robot to pick up the mess in my house’? Can robotic technologies help with big scientific challenges in neuroscience, biomechanics, paleontology, ethology, hydrodynamics, and other scientific domains? Can they help mitigate the effects of climate change? Shouldn’t we think planet-scale when it comes to robotics? What is the next killer app of robotics, after automation? Is it logistics? Why is it so difficult to start successful companies with “state-of-the-art” robotic technologies? What do we need to do to have the output of the scientific community lead to (commercial, social, societal, environmental…) success in applications?
Is simulation the way forward in robotics? Data collected by robots operating in the real world might well be the best way to train robots (but see the output of Learning I above). Yet even if very large, robotics-relevant datasets are collected that enable us to explore and benchmark different kinds of models and supervision, simulation lets us control the complexity of problems and provides access to ground-truth states; it could therefore play a crucial role. The computer vision and machine learning communities measure progress by evaluation on static datasets rather than by real-world experimentation. Should we incentivize this in robotics? Perhaps it would lead us away from the current practice of reporting only successes, a research practice in robotics whereby many of the true difficulties of deploying real systems go uncommunicated.
Have we neglected and underestimated the role of the environment in robotics? How can we leverage the environment in novel ways to achieve robust behavior, maybe even intelligence? This topic explores the ways in which the environment can contribute to generating the behavior of robots. For example, the environment can take over aspects of planning when the state of the world indicates which controller should be invoked for task progress; it can likewise take over aspects of control and perception in the context of compliant motion. Yet traditional approaches to the design of autonomous robots treat the environment as a fixed constraint. Should we not characterize task space, and even attempt to modify it? This problem statement is not common within the robotics research community, where robots and their immediate environments are often modeled as disjoint entities. How about co-design, which would jointly re-evaluate the shape, form, and function of the environments that our robots operate in?
Even the most impressive sensorimotor skills in robots pale in comparison to the versatility of adaptive behavior found in nature. Biological species have highly integrated sensorimotor control systems that allow for rich varieties of behavior that fit increasingly broad ecological niches. Robots, by comparison, often have modular hardware and software and are designed to solve specific tasks, rendering their ecological niches very small. In biology, the concurrent and mutually dependent evolution of motility and sensory systems gave rise to phenomenal sensorimotor skills. What lessons from biology can guide us to accelerate the evolution of robotics? In this workshop, we bring together experts studying biological and synthetic sensory systems to explore the idea that a tight action-perception coupling is key to both understanding the success of biological systems and building efficient and versatile robotic systems. We will discuss how biological sensory processing heavily relies on self-motion and consider ways in which robotics could capitalize on the fact that robotic agents move.