Working With OpenAI Gym in Scala


Let's take a quick look at working with OpenAI Gym with Scala and also explore the design of the API.


Gym, from OpenAI, is a toolkit for developing and comparing reinforcement learning algorithms. It supports teaching agents everything from walking to playing games, and it is written in Python.

Under the OpenAI Gym umbrella, the gym-http-api project provides a local REST API to the Gym server, allowing development in languages other than Python. A number of language bindings have been contributed to the project by the RL community.

Scala combines object-oriented and functional programming in one concise, high-level language, and it plays a prominent role in the machine learning and AI world. However, OpenAI Gym does not yet have a Scala binding, which has limited Scala development with Gym. To address this need, we have developed gym-scala-client, which we hope will be helpful to reinforcement learning researchers.

Gym-scala-client uses the following Scala modules:

Akka HTTP: to provide an asynchronous/synchronous HTTP interface.

spray-json: to unmarshal JSON streams into Scala objects.

Breeze: to provide numerical computation.

Breeze-viz: to provide visualization of the algorithm evaluation.

We provide a CartPole-v0 implementation using Q-learning to demonstrate the usage.


Gym-scala-client supports the following HTTP commands defined in gym-http-api:

  • POST /v1/envs/

  • GET /v1/envs/

  • POST /v1/envs/<instance_id>/reset/

  • POST /v1/envs/<instance_id>/step/

  • GET /v1/envs/<instance_id>/action_space/

  • GET /v1/envs/<instance_id>/observation_space/

  • POST /v1/envs/<instance_id>/monitor/start/

  • POST /v1/envs/<instance_id>/monitor/close/

  • POST /v1/upload/

  • POST /v1/shutdown/


The above APIs are represented by a number of case classes.
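As a rough sketch of how such a mapping might look, the following case classes model a few of the endpoints above. The names here are illustrative assumptions, not gym-scala-client's actual definitions:

```scala
// Hypothetical case classes mirroring gym-http-api endpoints;
// the actual gym-scala-client definitions may differ.
case class CreateEnvRequest(envId: String)            // POST /v1/envs/
case class CreateEnvResponse(instanceId: String)
case class StepRequest(action: Int, render: Boolean)  // POST /v1/envs/<instance_id>/step/
case class StepResponse(observation: List[Double],
                        reward: Double,
                        done: Boolean)

// Constructing a request for the CartPole environment:
val create = CreateEnvRequest("CartPole-v0")
val step   = StepRequest(action = 1, render = false)
```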

gymClient is an Akka actor that executes the API commands. The commands are executed through the Execution type class, and each execution command is an implicit in the Execution companion object. The executions are synchronous HTTP requests to the Gym server.
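A minimal, self-contained sketch of that type-class pattern, with the HTTP call replaced by a string so it runs standalone (the trait and method names here are assumptions, not gym-scala-client's API):

```scala
// Command case classes for two of the endpoints.
case class Reset(instanceId: String)
case class Step(instanceId: String, action: Int)

// The type class: one execution strategy per command type.
trait Execution[C] {
  def execute(cmd: C): String  // the real client would return a parsed response
}

object Execution {
  // One implicit instance per command, in the companion object.
  implicit val resetExec: Execution[Reset] = cmd =>
    s"POST /v1/envs/${cmd.instanceId}/reset/"
  implicit val stepExec: Execution[Step] = cmd =>
    s"POST /v1/envs/${cmd.instanceId}/step/ action=${cmd.action}"
}

// The client picks the right instance via implicit resolution.
def run[C](cmd: C)(implicit ex: Execution[C]): String = ex.execute(cmd)
```

The design keeps each command's execution logic in one place while letting `run` stay generic over all commands.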

GymSpace provides the reinforcement learning environment. For instance, it supports the discrete space that CartPole uses, and it provides interfaces to the action space and observation space, which make interactions between the agent and the environment possible. It also provides JSON marshaling using the spray-json library.
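To illustrate, a discrete space like the one CartPole uses (two actions: push left or push right) can be sketched as follows; the class and method names are illustrative, not GymSpace's own:

```scala
// Minimal sketch of a discrete action space, as used by CartPole.
final case class Discrete(n: Int) {
  def contains(a: Int): Boolean = a >= 0 && a < n
  def sample(rng: scala.util.Random): Int = rng.nextInt(n)
}

val cartPoleActions = Discrete(2)  // actions 0 and 1
```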


CartPole-v0 is one of the environments in Gym; the agent learns to balance a pole on a cart. The task defines “solving” as getting an average reward of 195.0 over 100 consecutive trials.
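That solving criterion can be expressed directly as a small check over the reward history:

```scala
// "Solved" per the CartPole-v0 criterion: average reward of at least
// 195.0 over the last 100 consecutive trials.
def solved(rewards: Seq[Double]): Boolean =
  rewards.size >= 100 && rewards.takeRight(100).sum / 100.0 >= 195.0
```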

[Figure: Cart Pole]

To demonstrate the usage of gym-scala-client, we implemented this agent in Scala using Q-learning. The demonstration is not intended as an algorithm evaluation, but to show how RL can be carried out through this binding.
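The heart of such an agent is the tabular Q-learning update. A sketch, with illustrative hyperparameter names and values (not necessarily those of the demonstration):

```scala
// Tabular Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
val alpha = 0.1   // learning rate (illustrative value)
val gamma = 0.99  // discount factor (illustrative value)

def qUpdate(q: Map[(Int, Int), Double],
            state: Int, action: Int,
            reward: Double, nextState: Int,
            actions: Seq[Int]): Map[(Int, Int), Double] = {
  val old  = q.getOrElse((state, action), 0.0)
  val best = actions.map(a => q.getOrElse((nextState, a), 0.0)).max
  q.updated((state, action), old + alpha * (reward + gamma * best - old))
}
```

In the CartPole setting, each step response from the server supplies the reward and next observation, which (after discretizing the observation into a state index) feed this update.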

[Figure: Average Rewards vs. Episodes]


Among the algorithms evaluated as successfully “solved” by OpenAI Gym, the implementation language is predominantly Python, the native language of Gym itself. Gym-scala-client offers the community an alternative for carrying out research in a different setting. As the Scala ecosystem grows rapidly, the open-source community supports it vigorously with numerical computing libraries, visualization libraries, and more. We hope that reinforcement learning research can benefit from this project.

The project is available on GitHub.


Opinions expressed by DZone contributors are their own.
