NLPGym – A toolkit for evaluating RL agents on Natural Language Processing Tasks

Reinforcement learning (RL) has recently shown impressive performance in complex game AI and robotics tasks. To a large extent, this is thanks to the availability of simulated environments such as OpenAI Gym, the Arcade Learning Environment, or Malmo, which allow agents to learn complex tasks through interaction with virtual environments. While RL is also increasingly applied to natural language processing (NLP), there are no simulated textual environments available that let researchers apply and consistently benchmark RL on NLP tasks. With the work reported here, we therefore release NLPGym, an open-source Python toolkit that provides interactive textual environments for standard NLP tasks such as sequence tagging, multi-label classification, and question answering. We also present experimental results for 6 tasks using different RL algorithms, which serve as baselines for further research.
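
The toolkit exposes these tasks through the familiar Gym-style reset/step interaction loop. The snippet below is a minimal, self-contained sketch of such a loop for multi-label classification; the environment class, observation, action set, and reward scheme are illustrative assumptions made for exposition and are not NLPGym's actual API.

```python
# Illustrative sketch only: a toy Gym-style text environment for multi-label
# classification. Names and reward design are assumptions, not NLPGym's API.
import random
from typing import List, Tuple


class ToyMultiLabelEnv:
    """Each episode shows the agent a sentence; the agent picks labels one at
    a time and chooses "terminate" to end the episode and receive a reward."""

    def __init__(self, samples: List[Tuple[str, List[str]]], labels: List[str]):
        self.samples = samples
        self.actions = labels + ["terminate"]
        self._gold: List[str] = []
        self._picked: List[str] = []
        self._text = ""

    def reset(self) -> str:
        self._text, self._gold = random.choice(self.samples)
        self._picked = []
        return self._text  # observation: the raw sentence

    def step(self, action_idx: int):
        action = self.actions[action_idx]
        if action == "terminate":
            # Reward: F1 overlap between the predicted and gold label sets.
            pred, gold = set(self._picked), set(self._gold)
            tp = len(pred & gold)
            f1 = 2 * tp / (len(pred) + len(gold)) if pred or gold else 1.0
            return self._text, f1, True, {}
        self._picked.append(action)
        return self._text, 0.0, False, {}


# Usage: a random agent interacting with the toy environment.
env = ToyMultiLabelEnv(
    samples=[("the striker scored twice", ["sports"]),
             ("the central bank cut rates", ["finance", "politics"])],
    labels=["sports", "finance", "politics"],
)
obs, done = env.reset(), False
while not done:
    obs, reward, done, info = env.step(random.randrange(len(env.actions)))
print("episode reward:", reward)
```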

  • Published in:
    When Language Meets Games Workshop at Neural Information Processing Systems (NeurIPS)
  • Type:
    Article
  • Authors:
    R. Ramamurthy, R. Sifa, C. Bauckhage
  • Year:
    2020

Citation information

R. Ramamurthy, R. Sifa, C. Bauckhage: NLPGym – A toolkit for evaluating RL agents on Natural Language Processing Tasks, When Language Meets Games Workshop at Neural Information Processing Systems (NeurIPS), 2020, https://doi.org/10.48550/arXiv.2011.08272.