
SHAC++: A Neural Network to Rule All Differentiable Simulators / F. Bertolotti, G. Aguzzi, W. Cazzola, M. Viroli (Frontiers in Artificial Intelligence and Applications). In: ECAI 2025, edited by I. Lynce, N. Murano, M. Vallati, S. Villata, F. Chesani, M. Milano, A. Omicini, M. Dastani. IOS Press BV, 2025. ISBN 9781643686318, pp. 2818-2825. 28th European Conference on Artificial Intelligence (ECAI 2025), including the 14th Conference on Prestigious Applications of Intelligent Systems (PAIS 2025), Bologna, 2025. DOI: 10.3233/faia251138.

SHAC++: A Neural Network to Rule All Differentiable Simulators

F. Bertolotti (first author); W. Cazzola (penultimate author)
2025

Abstract

Reinforcement learning (RL) algorithms show promise in robotics and multi-agent systems but often suffer from low sample efficiency. While methods like SHAC leverage differentiable simulators to improve efficiency, they are limited to specific settings: they require fully differentiable environments, including transition and reward functions, and have primarily been demonstrated in single-agent scenarios. To overcome these limitations, we introduce SHAC++, a novel framework inspired by SHAC. SHAC++ removes the need for differentiable simulator components by using neural networks to approximate the required gradients, training these networks alongside the standard policy and value networks. This enables the core SHAC approach to be applied in both non-differentiable and multi-agent environments. We evaluate SHAC++ on challenging multi-agent tasks from the VMAS suite, comparing it against SHAC (where applicable) and PPO, a standard algorithm for non-differentiable settings. Our results demonstrate that SHAC++ significantly outperforms PPO in both single- and multi-agent scenarios. Furthermore, in differentiable environments where SHAC operates, SHAC++ achieves comparable performance despite lacking direct access to simulator gradients, thus successfully extending SHAC's benefits to a broader class of problems. The full implementation is openly available at https://github.com/f14-bertolotti/shacpp.
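The abstract's core idea — replacing inaccessible simulator gradients with gradients of a learned surrogate — can be sketched in miniature. The toy example below is not the authors' implementation (SHAC++ trains neural networks alongside policy and value networks over full trajectories): it fits a least-squares quadratic surrogate to a black-box reward and optimizes a single action, purely to illustrate how gradients of a fitted model can drive optimization when the environment itself provides none.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_reward(a):
    # The "simulator": callable, but non-differentiable from the agent's
    # point of view. Its true optimum is at a = 2.
    return -(a - 2.0) ** 2

# 1. Collect samples from the black-box environment.
actions = rng.uniform(-5, 5, size=200)
rewards = black_box_reward(actions)

# 2. Fit a differentiable surrogate r_hat(a) = w2*a^2 + w1*a + w0 by least
#    squares (SHAC++ would train a neural network here instead).
X = np.stack([actions**2, actions, np.ones_like(actions)], axis=1)
w2, w1, w0 = np.linalg.lstsq(X, rewards, rcond=None)[0]

def surrogate_grad(a):
    # Analytic gradient of the surrogate, standing in for the simulator
    # gradients that SHAC would read off a differentiable environment.
    return 2.0 * w2 * a + w1

# 3. Improve the "policy" (a single action) by ascending the surrogate.
a = -4.0
for _ in range(200):
    a += 0.05 * surrogate_grad(a)

print(round(a, 2))  # converges toward the true optimum a = 2
```

The design point this illustrates is the one the abstract makes: gradient-based optimization still works when only the *surrogate* is differentiable, provided the surrogate is trained on enough environment samples to be accurate near the current policy.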
Field INFO-01/A - Computer Science
   Project: Typeful Language Adaptation for Dynamic, Interacting and Evolving Systems (T-LADIES)
   Funder: Ministry of Education and Merit
   Grant: 2020TL3X8X_001
2025
HUAWEI
IOS Press
Sage
Book Part (author)
Files in this record:
ecai25-published.pdf — open access
Type: Publisher's version/PDF
License: Creative Commons
Size: 826.44 kB (Adobe PDF)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1231675