
Learning Strategic Play with Language Agents in Text-Adventure Games

Large language models (LLMs) have recently shown success in powering autonomous, embodied agents within a diverse range of external environments (Wang et al., 2023; Shinn et al., 2023; Yao et al., 2023b). In contrast to reinforcement learning (RL) and imitation learning approaches, current LLM-powered language agents do not employ gradient-based training or fine-tuning to adapt to a new domain. Instead, these agents use in-context examples to leverage the knowledge of a pre-trained LLM, generating action plans, API calls, or code that can be executed in an environment (Wang et al., 2023; Yao et al., 2022; Schick et al., 2023; Yao et al., 2023a). Since these agents do not change the weights of the underlying language model, designing agent architectures that can acquire new knowledge, incorporate past experience, and adapt during online gameplay is a considerable challenge.
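To make the gradient-free setup concrete, the sketch below shows the core loop such an agent reduces to: a frozen LLM is prompted with a few in-context gameplay examples plus the running interaction history, and its completion is executed directly in the environment. The `llm` and `env_step` callables are hypothetical stand-ins for any completion API and any text-game interface; they are not the APIs of the cited systems.

```python
from typing import Callable, Tuple

FEW_SHOT = """You are playing a text-adventure game.
Example:
Observation: You are in a kitchen. There is a fridge here.
Action: open fridge
"""

def run_episode(
    llm: Callable[[str], str],                            # prompt -> completion (frozen model)
    env_step: Callable[[str], Tuple[str, float, bool]],   # action -> (observation, reward, done)
    first_obs: str,
    max_turns: int = 30,
) -> float:
    """In-context agent loop: no weights are updated; all adaptation
    happens through the prompt (few-shot examples + interaction history)."""
    history = f"Observation: {first_obs}\n"
    total_reward = 0.0
    for _ in range(max_turns):
        prompt = FEW_SHOT + history + "Action:"
        action = llm(prompt).strip().splitlines()[0]       # take the first proposed command
        obs, reward, done = env_step(action)
        total_reward += reward
        history += f"Action: {action}\nObservation: {obs}\n"
        if done:
            break
    return total_reward
```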

Interactive fiction presents several unique challenges compared to other interactive decision-making benchmarks (Yao et al., 2022; Shridhar et al., 2020). In the games we test, agents must adapt to different notions of embodiment, gameplay conventions, reward structures, and goals. Moreover, a capable agent must calibrate its exploration and planning strategy to a particular game environment using only the natural language responses the game provides. In this project, we explore the use of LLMs to power adaptive language agent architectures for text-adventure interactive fiction games. We first implement several existing language agent architectures (Yao et al., 2023b; Shinn et al., 2023) in an interactive fiction environment. We then present an approach for long-term memory and self-validation of actions designed to overcome the limitations of these architectures.
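As a rough illustration of the memory and self-validation idea (a sketch, not the report's exact implementation), the snippet below extends the loop above: each candidate action is checked by a second LLM call against a persistent memory of lessons from past episodes, and rejected actions are resampled. `llm`, `env_step`, `propose`, and the memory store are all hypothetical stand-ins introduced for this example.

```python
from typing import Callable, List, Tuple

def validate_action(llm: Callable[[str], str], memory: List[str],
                    observation: str, action: str) -> bool:
    """Self-validation: ask the model whether the proposed action is sensible
    given the current observation and lessons stored from past episodes."""
    prompt = (
        "Lessons from previous attempts:\n" + "\n".join(memory) +
        f"\n\nObservation: {observation}\nProposed action: {action}\n"
        "Is this action likely to make progress? Answer yes or no."
    )
    return llm(prompt).strip().lower().startswith("yes")

def play_with_memory(
    llm: Callable[[str], str],
    env_step: Callable[[str], Tuple[str, float, bool]],
    propose: Callable[[str], str],          # observation -> candidate action
    memory: List[str],                      # long-term store persisting across episodes
    first_obs: str,
    max_turns: int = 30,
    max_retries: int = 3,
) -> float:
    obs, total_reward = first_obs, 0.0
    for _ in range(max_turns):
        action = propose(obs)
        # Resample until the validator accepts the action or retries run out.
        for _ in range(max_retries):
            if validate_action(llm, memory, obs, action):
                break
            action = propose(obs)
        obs, reward, done = env_step(action)
        total_reward += reward
        if done:
            break
    # Distil the episode into a lesson that future episodes can condition on.
    memory.append(llm("Summarize in one sentence what this episode taught "
                      f"about the game. Final score: {total_reward}."))
    return total_reward
```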

Read the full report.