Multi-Objective Planning with Contextual Lexicographic Reward Preferences

Collaborative Robotics and Intelligent Systems (CoRIS) Institute
Oregon State University

Autonomous agents often need to plan under multiple, context-dependent preference orderings over objectives that vary within the same environment. While current planning methods assume a single preference ordering and do not support multiple orderings, our approach computes a policy that is valid and cycle-free even when the preferences conflict.

Abstract

Autonomous agents are often required to plan under multiple objectives whose preference ordering varies based on context. The agent may encounter multiple contexts during its course of operation, each imposing a distinct lexicographic ordering over the objectives, with potentially different reward functions associated with each context. Existing approaches to multi-objective planning typically consider a single preference ordering over the objectives, across the state space, and do not support planning under multiple objective orderings within an environment. We present Contextual Lexicographic Markov Decision Process (CLMDP), a framework that enables planning under varying lexicographic objective orderings, depending on the context. In a CLMDP, both the objective ordering at a state and the associated reward functions are determined by the context. We employ a Bayesian approach to infer a state-context mapping from expert trajectories. Our algorithm to solve a CLMDP first computes a policy for each objective ordering and then combines them into a single context-aware policy that is valid and cycle-free. The effectiveness of the proposed approach is evaluated in simulation and using a mobile robot.
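To make the structure of the framework concrete, below is a minimal sketch of the CLMDP tuple as a Python data structure. The field names (`context_map` for the state-context mapping, `orderings` for the per-context lexicographic orders) are illustrative assumptions, not the paper's formal notation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

State, Action, Context, Objective = int, int, int, int

@dataclass
class CLMDP:
    """Illustrative container for the components described in the abstract."""
    states: List[State]
    actions: List[Action]
    # T(s, a, s'): probability of reaching s' after taking a in s.
    transition: Callable[[State, Action, State], float]
    contexts: List[Context]
    # Z: state -> context. In the paper, this mapping is inferred from
    # expert trajectories via a Bayesian approach.
    context_map: Dict[State, Context]
    # Each context imposes its own lexicographic ordering over objectives
    # (highest priority first) and its own reward function per objective.
    orderings: Dict[Context, List[Objective]]
    rewards: Dict[Context, Dict[Objective, Callable[[State, Action], float]]]
```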

Approach Overview


Our solution approach for contextual planning proceeds in three steps. First, a policy is computed for each context in isolation, over the entire state space; these policies are then compiled into a global policy \(\pi_G\) by assigning to each state the action prescribed by its associated context's policy. Second, \(\pi_G\) is analyzed for cycles by estimating goal reachability from each state. Finally, detected conflicts are resolved by updating the lower-priority context policies, conditioned on the fixed actions of the higher-priority contexts. A sketch of this compile-check-repair loop follows.
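The sketch below illustrates the loop under simplifying assumptions: the per-context solver and the conflict-aware re-planner are passed in as callables (`solve_context`, `replan_with_fixed`), since their internals (lexicographic value iteration and constrained re-planning) are specific to the paper, and goal reachability is checked with a breadth-first search over the support of the policy's transitions. This is a hypothetical sketch, not the paper's implementation.

```python
from collections import deque
from typing import Callable, Dict, Iterable, List

def reaches_goal(pi_G: Dict[int, int], start: int, goal: int,
                 successors: Callable[[int, int], Iterable[int]]) -> bool:
    """BFS over states reachable by following pi_G; a state that cannot
    reach the goal lies on, or feeds into, a cycle of the global policy."""
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        if s == goal:
            return True
        for nxt in successors(s, pi_G[s]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def solve_contextual(states: List[int], goal: int,
                     context_of: Dict[int, int],
                     priority: List[int],  # contexts, highest priority first
                     solve_context: Callable[[int], Dict[int, int]],
                     replan_with_fixed: Callable[[int, Dict[int, int]], Dict[int, int]],
                     successors: Callable[[int, int], Iterable[int]]) -> Dict[int, int]:
    # Step 1: solve each context in isolation over the full state space,
    # then compile the global policy by reading off each state's action
    # from its associated context's policy.
    local = {c: solve_context(c) for c in priority}
    pi_G = {s: local[context_of[s]][s] for s in states}

    # Steps 2-3: while some state cannot reach the goal, re-plan a
    # lower-priority context with the other contexts' actions held fixed.
    for c in reversed(priority):  # lowest priority first
        stuck = [s for s in states if not reaches_goal(pi_G, s, goal, successors)]
        if not stuck:
            break
        fixed = {s: a for s, a in pi_G.items() if context_of[s] != c}
        local[c] = replan_with_fixed(c, fixed)
        for s in states:
            if context_of[s] == c:
                pi_G[s] = local[c][s]
    return pi_G
```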

Conflict Resolution

Experiments in Simulation

Domains used for simulation experiments


Performance in All Objectives


Consistency in Performance


Handling Conflicts


Experiments with Mobile Robots

We conduct a series of experiments using a TurtleBot in an indoor warehouse setup. The robot autonomously collects and delivers an object, using a LiDAR and a map of the area for active localization to determine its state and execute actions from its computed policy. The robot is tasked with delivering a package while navigating slippery tiles (shown as X) and human workers in narrow corridors (shown as X).

LMDP for Contexts

Yang et al., 2019

Contextual Planning w/o Resolver

Contextual Planning with Resolver and Learned Z

Contextual Planning with Resolver