Abstract
Robots designed to interact with people in collaborative or social scenarios must move in ways that are consistent with the robot’s task and communication goals. However, naïvely combining these goals can result in mutually exclusive solutions, or in infeasible or problematic states and actions. In this paper, we present Lively, a framework that supports configurable, real-time, task-based and communicative or socially expressive motion for collaborative and social robotics across multiple levels of programmatic accessibility. Lively supports a wide range of control methods (i.e., position, orientation, and joint-space goals) and balances them with complex procedural behaviors that produce natural, lifelike motion effective in collaborative and social contexts. We discuss the design of Lively's three levels of programmatic accessibility: a graphical user interface for visual design called LivelyStudio, the core Lively library that gives developers full access to its capabilities, and an extensible architecture for greater customizability and capability.
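To give a sense of how task goals and lifelike behaviors can be balanced rather than naïvely combined, the sketch below blends a task-space position goal with a small, smoothly varying procedural offset. This is a hypothetical illustration, not Lively's actual API: the function names (`smooth_noise`, `blended_goal`) and the sinusoid-based noise are stand-ins for the framework's procedural liveliness behaviors, and the weighting keeps the lifelike motion subordinate to the task goal.

```python
import math

def smooth_noise(t, seed=0.0):
    # Hypothetical stand-in for procedural liveliness noise: a sum of
    # incommensurate sinusoids yields smooth, bounded drift in [-1, 1].
    return 0.5 * math.sin(1.3 * t + seed) + 0.5 * math.sin(2.9 * t + 2.0 * seed)

def blended_goal(task_goal, t, liveliness_weight=0.02):
    # Blend a task-space position goal (x, y, z in meters) with a small
    # procedural offset. A low weight keeps the motion task-consistent
    # while still appearing lifelike.
    return [g + liveliness_weight * smooth_noise(t, seed=float(i))
            for i, g in enumerate(task_goal)]

# Each control cycle, the blended goal is handed to the motion solver.
goal = blended_goal([0.4, 0.0, 0.3], t=1.5)
```

In practice, a framework like Lively resolves such weighted objectives jointly in an optimization-based solver rather than by simple addition; this sketch only illustrates the idea of bounded, configurable liveliness layered over a task goal.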
Overview Video
Bibliography
@inproceedings{schoen2023lively,
  title={Lively: Enabling Multimodal, Lifelike, and Extensible Real-time Robot Motion},
  author={Schoen, Andrew and Sullivan, Dakota and Zhang, Ze Dong and Rakita, Daniel and Mutlu, Bilge},
  booktitle={Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction},
  pages={594--602},
  year={2023}
}