Few-Shot System Identification for Reinforcement Learning

Author's Department

Computer Science & Engineering Department

Second Author's Department

Computer Science & Engineering Department

Document Type

Research Article

Publication Title

2021 6th Asia-Pacific Conference on Intelligent Robot Systems, ACIRS 2021

Publication Date

2021

Abstract

Learning by interaction is the key to skill acquisition for most living organisms; formalized, this process is called reinforcement learning (RL). RL is effective at finding optimal policies that endow complex systems with sophisticated behavior. All RL paradigms rely on a system model to find the optimal policy, and the dynamics can be modeled either by formulating a mathematical model or through system identification. Dynamic models are usually subject to aleatoric and epistemic uncertainties that can divert the true system from the acquired model and cause the RL algorithm to behave erroneously. As a result, the RL process loses generality: it becomes sensitive to operating conditions and to changes in the model parameters. Consequently, intensive system identification is needed for every system, even when the structure of the dynamics is shared, because a slight deviation in the model parameters can render the model useless for RL. An oracle that can adaptively predict the rest of a trajectory, regardless of these uncertainties, would resolve this issue. This work presents a framework that facilitates online system identification of different instances of the same dynamics class by learning, with variational inference, a probability distribution over the dynamics conditioned on observed data. We show that the framework reliably and robustly solves different instances of control problems in model-based RL with maximum sample efficiency and without extra training.
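The core idea of conditioning a belief over the dynamics on a few observed transitions can be illustrated with a toy sketch. Everything here is a hypothetical illustration: the paper learns an amortized variational posterior over a dynamics class, whereas this sketch uses a scalar linear system with one unknown parameter and a conjugate Gaussian update (the exact posterior, which coincides with the optimal variational approximation in this tractable case).

```python
import numpy as np

# Toy few-shot system identification: infer the unknown parameter theta of
# x_{t+1} = theta * x_t + noise from a handful of observed transitions.
# (Hypothetical illustration; the paper's framework replaces this conjugate
# update with a learned variational posterior over a whole dynamics class.)

rng = np.random.default_rng(0)
theta_true = 0.9   # unknown parameter of this instance of the dynamics class
noise_std = 0.05   # aleatoric uncertainty in the transitions

# Prior belief over theta (the "distribution over the dynamics")
mu, var = 0.0, 1.0

# Observe a few transitions (the few-shot context) and condition on each one
x = 1.0
for _ in range(5):
    x_next = theta_true * x + rng.normal(0.0, noise_std)
    precision = 1.0 / var + x**2 / noise_std**2
    mu = (mu / var + x * x_next / noise_std**2) / precision
    var = 1.0 / precision
    x = x_next

print(f"posterior over theta: mean {mu:.3f}, std {var**0.5:.4f}")
```

After only five transitions the posterior mean concentrates near the true parameter, which is the behavior the framework aims for: a model-based RL planner could then use the inferred dynamics for this instance without any extra training.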
