RESNA Annual Conference - 2019

Transfer Learning for Prosthetics Using Imitation Learning

Waleed D. Khamies1, Montaser Mohammedalamen1, Benjamin Rosman2
1University of Khartoum, 2University of the Witwatersrand, CSIR

waleed.daud@outlook.com, montaserfath@gmail.com, benjros@gmail.com

Abstract—In this paper, we apply reinforcement learning (RL) techniques to train a realistic biomechanical model to work with different people and in different walking environments. We benchmark three RL algorithms, Deep Deterministic Policy Gradient (DDPG), Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO), in the OpenSim environment. We also apply imitation learning to the prosthetics domain to reduce the training time needed to design customized prosthetics. We use the DDPG algorithm to train an original expert agent. We then propose a modification to the Dataset Aggregation (DAgger) algorithm to reuse the expert knowledge and train a new target agent to replicate that behaviour in fewer than 5 iterations, compared to the 100 iterations taken by the expert agent, reducing training time by 95%. Our modifications to the DAgger algorithm improve the balance between exploiting the expert policy and exploring the environment. We show empirically that these modifications improve the convergence time of the target agent, particularly when there is some degree of variation between the expert and the naive agent.

I. INTRODUCTION

A. Problem Definition

Fig. 1. The OpenSim ProstheticsEnv environment.
Limbs are hugely valuable to many people: they improve mobility and the ability to manage daily activities, and they provide the means to stay independent. It is costly (around 50,000 USD) and time-consuming for manufacturers to design artificial limbs customized for one person. Designing intelligent prosthetics that deal with the large differences between humans (such as body dimensions, weight, height and walking style) is complicated by the large variability in response among individuals. One key reason for this is that the interactions between humans and prostheses are not well understood, which limits our ability to predict how a human will adapt his or her movement. Physics-based biomechanical simulations are well positioned to advance this field, as they allow many experiments to be run at low cost.

B. Environment

We use the OpenSim ProstheticsEnv environment, which models a human with one intact leg and a prosthetic on the other leg (see Fig. 1). OpenSim is a 3D human model simulator whose state consists of observations of joints, muscles and tendons; the action space has 19 dimensions, and the reward Rt is based on the negative distance from the desired velocity, as given in Eq. (1).

R_t = 9 - (3 - V_t)^2    (1)

where V_t is the horizontal velocity of the pelvis. One limitation of the OpenSim environment is that it is very slow to run, due to the large number of observations and state variables.
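To make the interface concrete, the following is a minimal sketch of one episode in ProstheticsEnv together with an evaluation of Eq. (1). It assumes the osim-rl Python package and its standard reset()/step() interface; reward_from_velocity is an illustrative helper of ours, not the environment's internal reward code.

```python
# Minimal sketch of one episode in ProstheticsEnv; assumes the osim-rl package.
import numpy as np
from osim.env import ProstheticsEnv

DESIRED_VELOCITY = 3.0  # desired horizontal pelvis velocity used in Eq. (1)

def reward_from_velocity(pelvis_velocity_x):
    """Illustrative reconstruction of Eq. (1): R_t = 9 - (3 - V_t)^2."""
    return 9.0 - (DESIRED_VELOCITY - pelvis_velocity_x) ** 2

env = ProstheticsEnv(visualize=False)
observation = env.reset()
for t in range(1000):  # 1,000 timesteps per episode, as in our experiments
    action = np.random.uniform(0.0, 1.0, size=19).tolist()  # 19 muscle excitations
    observation, reward, done, info = env.step(action)
    if done:
        break
```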

C. Reinforcement Learning (RL) algorithms

Reinforcement learning will help prosthetics calibrate to the differences between humans and between walking environments [1]. RL is a machine learning paradigm in which an agent learns an optimal policy for sequential decision making without complete knowledge of the environment [2]. The agent must explore the environment by taking an action At and update its policy according to the reward function so as to maximize the reward Rt. We use the DDPG [3], TRPO and PPO algorithms to train the agent. DDPG is a model-free, off-policy actor-critic algorithm that uses deep function approximators and can learn policies in high-dimensional, continuous action spaces.
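As a brief illustration of the off-policy actor-critic update in DDPG [3], the critic is regressed towards a bootstrapped target computed from the target actor and target critic networks. The sketch below assumes generic callables policy_target_net and q_target_net standing in for those networks; it is a sketch of the standard formulation, not our exact training code.

```python
# Sketch of the standard DDPG critic target: y_i = r_i + gamma * Q'(s_{i+1}, mu'(s_{i+1})).
# policy_target_net and q_target_net are placeholders for the slowly updated target networks.

GAMMA = 0.99  # discount factor (assumed value)

def critic_targets(rewards, next_states, dones, policy_target_net, q_target_net):
    next_actions = policy_target_net(next_states)     # mu'(s_{i+1})
    next_q = q_target_net(next_states, next_actions)  # Q'(s_{i+1}, mu'(s_{i+1}))
    return rewards + GAMMA * (1.0 - dones) * next_q   # no bootstrap at terminal states
```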

D. Imitation Learning

The main problem with RL algorithms is the time needed to solve the problem (the training time), because the algorithm must explore the environment and adapt its policy according to the reward at every timestep. Imitation learning is a specific subset of RL in which the learner tries to mimic an expert's actions in order to achieve the best performance. The main advantage of DAgger is that the expert teaches the learner how to recover from past mistakes [4], and we aim to leverage this to illustrate behaviour learning. There are many ways to accelerate the learning process in RL, such as cross-domain transfer [2] and inter-task mapping via artificial neural networks (ANNs) [5]. We use imitation learning, implemented with the DAgger algorithm, to achieve this. DAgger has been shown to achieve expert-level performance after a few data aggregation iterations [6]. Using imitation learning relies on two assumptions:

  1. Similarity between the expert and the target agent in action space, observation space and reward function.
  2. The environment must be described by a Markov Decision Process (MDP).
TABLE I
COMPARISON BETWEEN ALGORITHMS IN PROSTHETICSENV

Algorithm   Maximum reward   Mean reward
DDPG        113              -42
TRPO        194               43
PPO          70              -58

In the standard DAgger algorithm, the target agent exploits the expert policy and stops exploring the environment. This may be a problem, as the target agent should balance exploitation and exploration. We propose some improvements to the DAgger algorithm to encourage exploration.
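For reference, the standard DAgger aggregation loop [4], [6] that we modify can be sketched as follows. Here rollout_policy, expert_action and train_policy are hypothetical helpers standing in for environment rollouts, expert queries and the supervised policy fit; this is a sketch rather than our exact implementation.

```python
# Minimal sketch of the standard DAgger loop [4], [6]; helpers are hypothetical placeholders.
def dagger(env, expert_action, rollout_policy, train_policy, n_iterations=5):
    dataset = []                                # aggregated (state, expert label) pairs
    policy = None                               # e.g. start by rolling out the expert itself
    for _ in range(n_iterations):
        states = rollout_policy(env, policy)    # states visited under the current policy
        dataset += [(s, expert_action(s)) for s in states]  # label them with the expert
        policy = train_policy(dataset)          # supervised fit on the aggregated dataset
    return policy
```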

II. EXPERIMENTS

We run the following experiments; a code sketch of the DAgger labelling rules in items 4-6 is given after the list:

  1. Run the RL algorithms (DDPG, TRPO and PPO) in the OpenSim ProstheticsEnv for 2,000 episodes of 1,000 timesteps each, to give the agent more time to walk or stand up.
  2. Train a DDPG agent to achieve a positive reward (around +100) on the standing-up task.
  3. Use that agent as the expert to evaluate the DAgger algorithm.
  4. Modify DAgger so that the expert agent labels the target agent's actions based on the timestep reward, by comparing the timestep reward of the expert agent and the target agent at a given timestep: if the expert agent has a lower reward than the target agent, the expert keeps the target agent's action, and vice versa.
  5. Use the target action value instead of the timestep reward for the comparison, summing the timestep rewards from a given state-action pair until the end of the episode.
  6. Use the epsilon-greedy method [2], where the algorithm selects the target action with probability 1−ε or the expert action with probability ε.
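A minimal sketch of the labelling rules in experiments 4-6 is given below. It assumes that a per-timestep reward (or, for the action-value variant, the sum of rewards from the given state-action pair until the end of the episode) is available for both the expert and the target agent; label_action is our own illustrative helper, not the exact implementation.

```python
# Sketch of the modified DAgger labelling rules (experiments 4-6); illustrative only.
import random

EPSILON = 0.1  # assumed exploration rate for the epsilon-greedy variant

def label_action(expert_action, target_action, expert_score, target_score, mode):
    """Choose which action is stored in the aggregated dataset for this state.

    expert_score and target_score are per-timestep rewards (mode="reward") or
    rewards summed until the end of the episode (mode="action_value").
    """
    if mode in ("reward", "action_value"):
        # Keep the target's action when it outperforms the expert, otherwise use the expert's.
        return target_action if target_score > expert_score else expert_action
    if mode == "epsilon_greedy":
        # Take the target action with probability 1 - epsilon, the expert's with probability epsilon.
        return expert_action if random.random() < EPSILON else target_action
    raise ValueError("unknown labelling mode")
```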

III. RESULTS

Fig. 2. Number of iterations vs. mean reward in the OpenSim ProstheticsEnv environment.
The maximum mean reward is achieved by TRPO (see Table I and Fig. 2), but it takes more time than PPO and DDPG because it needs to compute a matrix inverse, which is expensive. Despite this reward, the agent cannot walk for more than one step and sometimes falls before the first step. The DAgger algorithm achieved the best average reward compared with the other variants, which balance exploiting and exploring (see Fig. 3). We think the reasons behind this are:

  1. The expert policy has a high reward.
  2. The high similarity between expert and naive agent.
  3. The naive agent needs more time to run, by increasing the number of iterations.

Fig. 3. Comparison between the DAgger algorithm, timestep reward, action value and epsilon-greedy variants.
The main problem with the timestep-reward modification is that it compares the timestep (short-term) reward, and there is large variation between timesteps, whereas with the action-value modification the variation between episodes is small. The naive agent obtained a reward greater than the expert agent, so the roles can be exchanged: the expert can become the naive agent and the naive agent the expert, which will decrease training time significantly.

IV. CONCLUSIONS

We have applied imitation learning in a humanoid environment to accelerate the learning process. The naive agent reaches convergence within 5 iterations while the expert reaches it after 100 iterations, reducing training time by 95%. The DAgger algorithm achieved the best average reward compared with the other variants, which balance exploiting and exploring; these variants are expected to work better when there is some degree of variation between the expert and the naive agent. This is what we plan to explore in future work, by applying imitation learning from normal human legs to a prosthetic; the main challenge will be how to identify the differences and similarities between them.

V. RESEARCH LIMITATIONS

  1. The prosthetic model cannot walk for large distances and can even fall before completing the first step.
  2. Each experiment was run only once, so we plan to repeat each experiment a number of times with different random seeds and report the average and variance.
  3. We used the same hyperparameters for all algorithms in the benchmark; we need to select the best hyperparameters for each algorithm and environment.
  4. We benchmarked only three RL algorithms, and from one library (ChainerRL), so we plan to use different implementations.

REFERENCES

  1. T. Garikayi, D. van den Heever and S. Matope, "Robotic prosthetic challenges for clinical applications," IEEE International Conference on Control and Robotics Engineering (ICCRE), Singapore, 2016, pp. 1-5. doi: 10.1109/ICCRE.2016.7476146.
  2. G. Joshi and G. Chowdhary, "Cross-Domain Transfer in Reinforcement Learning using Target Apprentice," 2018.
  3. T. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver and D. Wierstra, "Continuous control with deep reinforcement learning," CoRR, 2015.
  4. A. Attia and S. Dayan, "Global overview of Imitation Learning," 2018.
  5. Q. Cheng, X. Wang and L. Shen, "An Autonomous Inter-task Mapping Learning Method via Artificial Neural Network for Transfer Learning," 2017. doi: 10.1109/ROBIO.2017.8324510.
  6. J. J. Zhu, "DAgger algorithm implementation," GitHub repository, 2017.