Overview

In industrial settings, robots are typically employed to accurately track a reference force exerted on the surrounding environment in order to complete interaction tasks. Interaction controllers are commonly used to achieve this goal, but they either require manual tuning, which demands significant time, or exact modeling of the environment the robot will interact with, and may thus fail during the actual application. A significant advancement in this area would be a high-performance force controller that needs no operator calibration and is quick to deploy in any scenario. To this end, this paper proposes an Actor-Critic Model Predictive Force Controller (ACMPFC), which outputs the optimal setpoint to follow in order to guarantee force tracking, computed by continuously trained neural networks. The strategy extends a reinforcement-learning-based approach, originally developed in the context of human-robot collaboration, suitably adapted to robot-environment interaction.
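The full controller is beyond the scope of this page, but the core idea, adjusting a position setpoint online so that the measured contact force tracks a reference, can be sketched. The toy below is NOT the paper's method: it replaces the continuously trained actor-critic networks with a single online-estimated stiffness parameter, in a hypothetical 1-D spring environment. All constants (`K_ENV`, `F_REF`, gains) are illustrative assumptions.

```python
# Minimal sketch of force tracking via online setpoint adaptation.
# Hypothetical 1-D contact model: f = k_env * (x - x_rest) when in contact.
# A one-parameter stiffness estimate stands in for the learned networks.

K_ENV = 800.0    # true environment stiffness [N/m], unknown to the controller
X_REST = 0.0     # surface rest position [m]
F_REF = 10.0     # reference contact force [N]

def contact_force(x):
    """Spring-like environment: pushes back only when penetrating."""
    return K_ENV * max(x - X_REST, 0.0)

k_hat = 300.0    # initial (wrong) stiffness guess [N/m]
eta = 0.5        # estimator gain
x = 0.02         # initial position setpoint [m], already in contact

for _ in range(50):
    f = contact_force(x)
    e = F_REF - f                 # force tracking error [N]
    dx = e / k_hat                # setpoint correction from the current model
    f_new = contact_force(x + dx)
    if abs(dx) > 1e-9:
        # update the stiffness estimate from the observed force change
        k_obs = (f_new - f) / dx
        k_hat += eta * (k_obs - k_hat)
    x += dx
```

As `k_hat` approaches the true stiffness, each correction `e / k_hat` drives the force error toward zero, which is the same role the ACMPFC's continuously trained networks play for environments that are not simple springs.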

Download

Prerequisites

Changelog

References, more information, and resources

  • Pozzi, A., Puricelli, L., Petrone, V., Ferrentino, E., Chiacchio, P., Braghin, F. and Roveda, L. (2023). Experimental Validation of an Actor-Critic Model Predictive Force Controller for Robot-Environment Interaction Tasks. In Proceedings of the 20th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Volume 1, SciTePress, pages 394-404. ISBN: 978-989-758-670-5; ISSN: 2184-2809. DOI: 10.5220/0012160700003543.
  • YouTube video: https://youtu.be/7ysG4lz5lVY.