Griddly: A platform for AI research in games

In recent years, there have been immense breakthroughs in Game AI research, particularly with Reinforcement Learning (RL). Despite their success, the underlying games are usually implemented with their own preset environments and game mechanics, making it difficult for researchers to prototype different game environments. However, testing RL agents against a variety of game environments is critical for recent efforts to study generalization in RL and to avoid the overfitting that may otherwise occur. In this paper, we present Griddly as a new platform for Game AI research that provides a unique combination of highly configurable games, different observer types and an efficient C++ core engine. Additionally, we present a series of baseline experiments to study the effect of different observation configurations and the generalization ability of RL agents.


Background
Many prominent successes in research into Artificial Intelligence (AI) have emerged from creating agents that can achieve high scores in video games such as Atari 2600 games (Bellemare et al. 2013; Badia et al. 2020), custom toy game environments (Chevalier-Boisvert, Willems, and Pal 2018; Perez-Liebana et al. 2018), or wrappers around popular video games such as StarCraft (Vinyals et al. 2019), DOTA 2 (Berner et al. 2019) and the NetHack Learning Environment (Küttler et al. 2020). Designing and implementing game environments to test the ability of different algorithms, such as Reinforcement Learning (RL), to reason, generalize and plan can be complex and time-consuming. Even simple environments require implementing a number of common components such as rendering, game mechanics and optimization. A few solutions have been developed to abstract away the implementation details of environments and present researchers with a simplified interface, so that they can concentrate on building the specifics of the environments that test their algorithms. For example, the General Video Game AI framework (GVGAI) provides a platform in which games can be defined using the Video Game Description Language (VGDL). VGDL contains a set of pre-defined instructions which can be combined to create the mechanics of many games. The layout of levels can also be defined using a simple character-based 2D ASCII map. GVGAI is commonly used for research into general game playing (Torrado et al. 2018; Balla, Lucas, and Perez-Liebana 2020; Ye et al. 2020; Justesen et al. 2017) and procedural content generation (Drageset et al. 2019; Dharna, Togelius, and Soros 2020; Khalifa et al. 2016, 2020).

Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
In addition to the complexities involved in generating game mechanics and optimizing the game engine to allow fast prototyping during experimentation, the representation of game states can also determine several factors during training, such as how agents generalize to unseen environment objects (Hill et al. 2019) and new levels (Ye et al. 2020; Balla, Lucas, and Perez-Liebana 2020). The representation of the states can also have a large impact on speed and memory usage during learning. RL from the pixels of rendered game frames requires significantly more memory when using neural networks than learning from a simple one-hot or multi-class map representation of the game state.
The final consideration for game environments is how the agent or agents interact with the environment itself. Action spaces differ across types of game environments. For example, if the environment is an RTS game, the agent may need to provide coordinates in order to target certain actions, and available actions may differ across the various units in the game. In single-player games, the agent may only need to provide a single value to control movement or rotation. In multi-player games, the environment needs an interface which provides the ability to control only certain units.
In this paper we introduce "Griddly", a highly configurable and optimized platform for building grid-world games for artificial intelligence research. Game environments in Griddly, like those in GVGAI, are defined using a domain-specific language, Griddly Description YAML (GDY), which allows an unprecedented level of configurability in all of the key areas described above. Not only can GDY be used to create single-player puzzle games like those in GVGAI, MiniGrid (Chevalier-Boisvert, Willems, and Pal 2018), DMLab2D (Beattie et al. 2020) and other toy problems, but it can also be used to create multi-agent and RTS-style games with partial observability and complex resource systems. GDY also provides multiple built-in observation representations, such as sprite-based isometric rendering, simple shape-based tiles and minimal state vectors. In addition to the wide array of configurability that Griddly provides, the underlying engine is heavily optimized in both computational speed and memory usage, taking advantage of hardware-accelerated rendering techniques.
To showcase Griddly, we provide a simple baseline of example experiments from 10 GVGAI-style games with various mechanics, configurations and observation representations. For each of the 10 games, we train a simple environment-size-agnostic RL agent on 5 different built-in levels, using three different observation configurations. In total this baseline results in 150 separate experiments. In addition to these 150, we provide 6 generalization experiments in which we train an agent with egocentric partial observability on three levels and evaluate its performance on two unseen levels.
Code for reproducing the experiments is made publicly available at https://github.com/Bam4d/griddly-paper, as are the results and videos of each of these experiments at https://wandb.ai/griddly. Full documentation, usage examples and tutorials for using Griddly are provided at https://griddly.readthedocs.io/en/latest/.
The structure of the rest of this paper is as follows. Firstly, we present a breakdown of the main configuration sections of Griddly's description language (GDY), showing examples of creating simple game mechanics, how an environment can be set up for egocentric partial observability and how action spaces can be defined. Secondly, we show the results of the 150 baseline experiments that are trained per-level and the 6 generalization experiments. Thirdly, we compare the speed of state generation and memory usage between a number of game environments and their Griddly equivalents using GDY on the same hardware. Finally, we discuss further features that we plan to add to Griddly and how Griddly can be used in future research.

Griddly
Griddly is an open-source project that aims to be an all-encompassing platform for grid-world-based research. Griddly provides a highly optimized game state and rendering engine with a flexible high-level interface for configuring environments. Not only does Griddly offer simple interfaces for single-player, multi-player and RTS games, but also multiple methods of rendering, configurable partial observability and interfaces for procedural content generation.
Game mechanics are important building blocks of any game: they define the time- and action-dependent characteristics that turn a static image into a dynamic environment, which can be used to study how fundamental logical understanding can help with much more complex tasks. As an example, the rules that allow an avatar to push a block one square at a time can be defined in a simple and deterministic way. This lets an AI agent concentrate on learning to perform simple tasks, such as collecting blocks in a specific area. In a physical world, this process would be significantly more complex: the block and the agent would be 3D, the agent would have a high-resolution view of the world, and there would be millions of variables to take into account, such as heat, friction, mass and the forces of all the actuators of the agent. Using simplified game mechanics allows research to be undertaken on higher-level concepts, rather than having to take into account everything in the physical world. Griddly is designed to provide a simple YAML interface, Griddly Description YAML (GDY), that allows many game mechanics to be defined in a simple way, without compromising on the speed of rendering and mechanics. The next section outlines how the GDY schema can be used to create games.

Figure 1: Multiple player interfaces with configured observers can be attached to the Griddly engine to control any number of players in the environment. Additionally, a global observer can be used to monitor the environment as a whole or to analyse the performance of any algorithm from a global perspective. The interface for configuring environments is also separate from observation and agent control, meaning it can be used for generating algorithms for procedural content generation, or for training with custom level designs.

Griddly Description YAML (GDY)
Griddly Description YAML (GDY) is a schema-oriented domain-specific language (DSL) which allows great flexibility in creating grid-world environments. Any DSL requires a certain amount of pre-existing knowledge; however, with industry-standard configuration languages there are many tools available, such as syntax highlighting, schema validation and linting, which reduce the barrier to entry when writing in a DSL. The use of YAML as the underlying syntax allows schema validation to be used for syntax highlighting, validation and autocompletion. Many IDEs, such as Visual Studio Code, IntelliJ and PyCharm, support YAML validation out of the box using JSON schemas. This makes developing new games with Griddly simpler, as the IDE provides feedback on the game description's syntax and structure. The main GDY-configurable features of Griddly games are the Environment, Objects and Actions.
Environment The Environment section of the YAML contains configuration options for several high-level concepts of a Griddly game. The three most important of these are the Player, Termination and Levels options. The Player options define how the players interact with the environment and which avatar object, if any, each player controls. Partial observability can also be configured here, in the Observer subsection, using options such as: RotateWithAvatar, which causes the environment representation to rotate with the avatar; TrackAvatar, which enables egocentric rendering (the agent is placed at the center of the observation, and OffsetX and OffsetY can be used to offset the agent from the center); and finally Height and Width, which determine the size of the observable window.
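As an illustration, the skeleton of an Environment section might look like the following sketch. The field names follow the options described above and the public GDY documentation, but the game name, object names and values are purely illustrative:

```yaml
Environment:
  Name: ExamplePartiallyObservableGame   # hypothetical game name
  Player:
    AvatarObject: avatar        # the object each player controls
    Observer:
      TrackAvatar: true         # egocentric rendering
      RotateWithAvatar: true    # rotate the view with the avatar
      Height: 7                 # observable window size
      Width: 7
      OffsetX: 0
      OffsetY: 0
  Termination:
    Win:
      - eq: [goal:count, 0]     # e.g. win when no goal objects remain
  Levels:
    - |
      w w w w w
      w A . g w
      w w w w w
```

Here the level string uses the per-object map characters (defined in the Objects section) to lay out a small map, in the same character-based style as GVGAI.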
Termination conditions, such as determining when an episode is complete or who wins a multi-player game, are also set in the Environment section. Termination conditions can use any variable defined at the global level, as well as several special variables that allow calculations such as counts of specific objects in the environment. Levels are defined using strings of characters; the mapping from characters to objects is defined in the Objects configuration. An example of configured egocentric partial observability with an isometric renderer can be seen in Figure 2.

Objects Objects in environments are defined individually. An object definition allows each object to contain encapsulated variables, for example hit points, resources and possessions such as keys. Rendering information is also defined per object and passed to the rendering engine at runtime.

Actions Instead of having a fixed set of actions, GDY allows the user to define any number of actions and how they interact with other objects. Actions can modify the encapsulated variables of objects, for example changing hit points, or adding and removing objects.
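A minimal object definition of the kind described above might be sketched as follows. The structure follows the Griddly documentation, but the variable names, sprite path and colours are illustrative assumptions:

```yaml
Objects:
  - Name: avatar
    MapCharacter: A            # character used in level strings
    Variables:
      - Name: health           # encapsulated per-object variable
        InitialValue: 10
    Observers:
      Sprite2D:                # rendering info for the sprite observer
        - Image: gvgai/oryx/knight1.png
      Block2D:                 # rendering info for the block observer
        - Shape: triangle
          Color: [0.2, 0.6, 0.2]
          Scale: 1.0
```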
Actions in Griddly are defined in two parts: the Input Mapping and the Behaviours. The Input Mapping maps a set of distinct integers to a Description, an OrientationVector and a VectorToDest. The OrientationVector and VectorToDest are parameters that can then be used by the Behaviours to define how different objects react when the action is performed on them by another object. The object performing the action is referred to as the source object, and the object that is the target of the action is referred to as the destination. In order for an Input Mapping to translate into a full action, the Behaviours of objects must be defined for that particular action. This is done by defining a list of commands that are applied to each object when it is the source or destination of the action. The source of an action is determined by the type of game being played. For example, in a single-player game where the player controls a particular avatar, the source of the action is almost always the avatar object (unless there are other actions which are automated, such as random movement of enemies). In RTS-style games, the source of the action is the object at the location selected by the player. When an action is performed, the destination object is located by adding the value of VectorToDest to the source location. If the Input Mapping is defined with Relative: true, the VectorToDest and OrientationVector are calculated relative to the current orientation of the source object.
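Putting the two parts together, a rotate-and-move action could be sketched as follows. The overall structure (InputMapping, Behaviours, Src/Dst, and commands such as mov and rot) follows the description above and the Griddly documentation, but the exact values should be treated as illustrative:

```yaml
Actions:
  - Name: move
    InputMapping:
      Relative: true             # vectors are relative to the source's orientation
      Inputs:
        1:
          Description: Rotate left
          OrientationVector: [-1, 0]
        2:
          Description: Move forwards
          OrientationVector: [0, -1]
          VectorToDest: [0, -1]
        3:
          Description: Rotate right
          OrientationVector: [1, 0]
    Behaviours:
      # movement: the avatar moves into an empty destination square
      - Src:
          Object: avatar
          Commands:
            - mov: _dest         # move the source object to the destination
        Dst:
          Object: _empty
      # rotation: with no destination, only the orientation changes
      - Src:
          Object: avatar
          Commands:
            - rot: _dir          # rotate the source to the action's orientation
        Dst:
          Object: avatar
```

Note how a single named action covers both rotation and movement: inputs that supply a VectorToDest resolve to a destination square, while the rotation inputs only change the source's orientation.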

Observers
As Griddly supports single-player, multi-player and RTS-style games, the format of the observations provided to each player (whether an algorithm or a human) can be specified individually. There are four available observer types: isometric, sprite, block and vector. The isometric, sprite and block observers use hardware-accelerated rendering through the Vulkan API (vul 2020); an example of this is shown in Figure 3. The vector observer provides a lightweight class-label representation of the grid. All observers support configurable partial observability and avatar tracking. Additionally, multiple observers can be used at the same time, meaning the game world can be observed separately from the agents. This allows rich demonstrations to be produced, even if the algorithms use the vector observer.

Potential Applications
GDY provides a flexible method of creating everything in a grid-world environment, from game mechanics to different ways of rendering the state. Griddly comes pre-loaded with 21 initial example games, many of which are used in this paper. We wish to highlight a number of research areas where using Griddly would be beneficial.

Environment Mechanics
As game mechanics can be defined by combinations of simple instructions, many different mechanics can be created. As examples, games that require random movement, ranged actions or directed movement (such as gravity) can be generated. Resource systems can also be easily defined by declaring variables that are local to objects or players, meaning that complex games which require agents to collect particular items for specific tasks can easily be achieved.
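For instance, a resource-gathering mechanic of the kind described above might be sketched as the following action. The object names (avatar, tree) and variable names (wood, resources) are hypothetical; the commands (incr, decr, remove and the conditional eq) follow the GDY documentation but should be treated as illustrative:

```yaml
Actions:
  - Name: gather
    Behaviours:
      # when the avatar acts on an adjacent tree, it gains one wood
      - Src:
          Object: avatar
          Commands:
            - incr: wood           # increment the avatar's local wood counter
        Dst:
          Object: tree
          Commands:
            - decr: resources      # deplete the tree's local resource counter
            - eq:                  # once empty, remove the tree from the grid
                Arguments: [resources, 0]
                Commands:
                  - remove: true
```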
Procedural Content Generation Because the GDY language is YAML-based, it is supported by most modern programming languages, and there are many tools and libraries available to parse, edit and verify YAML files. The generation of game mechanics, levels and image assets can therefore be performed by different projects regardless of language choice. The generated YAML can then be loaded by Griddly regardless of how it was produced.

Baselines
To provide future experiments with an RL baseline, we ran two different types of experiment.
Firstly, we chose 10 environments ported from GVGAI that are particularly difficult for RL algorithms to solve. Each environment contains 5 levels of different sizes and varying difficulty. We trained each level of each environment separately using three different observers: Vector, Block and Sprite. Additionally, two of the environments are configured with egocentric partial observability. This baseline suite consists of 150 experiments in total. Each level is trained for 1.28M frames, and the average score over the final 100 episodes of training is measured. Table 1 shows the results of these experiments.
The second baseline uses a subset of the previous experiments, all configured with egocentric partial observability. These experiments have an observation space of 5 × 5 tiles, with the agent centered at the bottom of this grid. This allows the agent to see 4 squares ahead of it in the direction of travel and 2 squares to either side. Additionally, instead of training each level separately, 3 of the levels are used as a training set and the 2 remaining levels are used to evaluate generalization. These experiments are all performed using only the Vector observer to produce game states.
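In GDY terms, this observation window corresponds to a player observer configuration along the following lines. This is a sketch only; in particular, the sign convention for OffsetY (shifting the agent toward the bottom of the window) is our reading of the documentation:

```yaml
Player:
  Observer:
    TrackAvatar: true
    RotateWithAvatar: true
    Height: 5
    Width: 5
    OffsetX: 0
    OffsetY: 2   # place the agent at the bottom of the 5x5 window
```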
The architecture of the neural network is the same for both sets of experiments. All networks are trained using Proximal Policy Optimization (PPO) (Schulman et al. 2017) and Random Network Distillation (RND) (Burda et al. 2018). The agent network, RND target network and RND prediction network share the same architecture in each experiment. The only differences occur in the first few layers, as the observation space differs between the observers used. Experiments using the Sprite and Block observers begin with a convolutional layer whose kernel size and stride equal the tile size. This effectively embeds each tile, producing a tensor in R^(32×W×H), where 32 is the number of channels and W and H are the width and height of the game level in tiles. Alternatively, experiments using the Vector observer embed the vector representations using a 1×1 kernel with 32 channels into the same resulting shape R^(32×W×H). The embedded representation is then fed through the following layers: a convolutional layer with 32 input channels, 64 output channels, kernel size 3 and padding 1; then a convolutional layer with 64 input and output channels, kernel size 3 and padding 1. A global average pooling (GAP) layer is then used to reduce the output to a fixed size, which is fed to a linear layer of size 2048 and then reduced by a further linear layer to a vector of size 512. Single linear layers are used after this point for the various heads, such as the actor and critic.
The global average pooling layer allows levels with different H and W to produce the same shape of vector (in our case 2048) (Balla, Lucas, and Perez-Liebana 2020).
The action space of the fully observable environments consists of 5 actions: do nothing (no-op), move up, move left, move down and move right. In contrast, in the partially observable setting the agent has access to 4 actions: do nothing (no-op), turn left, turn right and move forward.

Per-Environment
As shown in Table 1, there are levels of certain environments in which the agent fails to gain any score. These environments are sometimes difficult to solve even for humans, as they require precise planning, and a single mistake can result in states that cannot be reversed. For example, in "Zenpuzzle" the agent gains a reward every time it moves over a tile of a certain colour, but once it has moved over these tiles, it cannot move onto them again. This can cause the agent to get "stuck", surrounded by tiles that it has already passed. The agent can also easily cut itself off from certain tiles by blocking the path to them. Although the agent scores highly in the "Zenpuzzle" environment, it rarely covers all the tiles to reach the maximum possible score. In the game "Clusters", the agent must push coloured blocks into groups and scores a point each time a block is grouped. There are other obstacles in the environment that the agent must avoid pushing the blocks into. In the "Clusters" experiments, the agent rarely learns to make even a single group.
We test two partially observable versions of the games "Labyrinth" and "Zen Puzzle" to see if more general approaches, such as wall-following strategies, can be learned. As expected, in "Labyrinth", a simple maze game with a reward at the destination, the agent performs better and can solve some of the mazes. We observe that in some of the games where a wall-following approach can solve the maze, the agent learns this strategy. In other levels, the agent does not learn a strategy and fails to find the goal. It is also interesting to note that the fully observable maze levels could not be learned by this method.
In the "Sokoban" environment, some levels are consistently solved. However, on other levels, the agent consistently fails to score a single point. This highlights that the structure of a level, and the strategy required to solve it, can demand different approaches to training, even when the mechanics of the game are consistent across levels.

Framework Comparison
In this section we provide two comparisons between several frameworks. The first comparison is a feature matrix showing the differences in features between Griddly and its closest grid-based relatives. The second comparison is between several games from popular frameworks that have been reimplemented using the GDY language.
Features Table 3 shows how the features offered by Griddly compare with a selection of other environments; since the paper is about Griddly, the table is presented to highlight what Griddly offers that other environments do not. Griddly is most closely related to GVGAI and DMLab2D (Beattie et al. 2020), but with various extensions that provide faster rendering and support for multi-agent and grid-based RTS games similar to µRTS (Ontanon 2013). Although ALE is not technically a grid-world, we have included it in the table for comparison. The NetHack Learning Environment (NLE) is built around the classic roguelike NetHack, and although it is just one grid-based game, it offers great variety due to procedural level generation. Although not included in the table, ProcGen is similar to ALE in scope, but offers endless variety through procedural level generation.

Efficiency
As the focus of the Griddly engine is currently to improve the data rate of RL in grid-world environments, a benchmark comparison of the available Python Gym interfaces for some of the most popular grid-based environments is shown in Table 4. The benchmark consists of running the original environment and the equivalent Griddly version with a random agent for 1,000,000 frames and calculating the average frames-per-second (FPS) of the rendered states and the maximum memory usage. Rendering the pixels of the environments is the most demanding method of producing game states, so it provides a useful bottleneck to test. Additionally, we compare the vectorized versions of the states produced by the game engines where available. The games and maps used for the tests are as follows: GVGAI - Sokoban, MiniGrid - FourRooms, gym-microrts (Huang and Ontañón 2020) - MicrortsMining-v4. We also provide three separate comparisons to DMLab2D, as it is the platform most closely related to Griddly. These comparisons use three "Pushbox" game levels with sizes 10x10, 50x50 and 100x100. We also configured the tile size to be consistent between Griddly and the other platforms.

Table 4: Speed and memory footprint of Griddly compared to similar environments. All environments are tested using the Python OpenAI Gym interface except DMLab2D, which has its own equivalent Python interface. In each double-row, the Griddly entries are for the same or similar game running on each platform.

Future Work
The baselines in this paper are produced using a straightforward PPO implementation, chosen because it was easy to apply to most of the games in the Griddly library. There is an opportunity to test several more complex algorithms on specific problems such as long-horizon rewards and combinatorial problems.
There are several areas to improve in the Griddly engine itself; for example, built-in algorithms for procedural content generation would benefit research into agent generalization. Additionally, built-in AI using algorithms such as Monte-Carlo Tree Search could be used to create baseline adversaries in multi-agent and RTS games.

Conclusion
In this paper we introduce Griddly, a new, highly efficient framework that allows grid-world games to be easily created using a flexible description language (GDY), which supports a wide range of game mechanics and provides configurable interfaces for observation spaces, action spaces, level design and reward functions. This level of flexibility enables a wide range of research possibilities, including game-playing agents using RL, and level and game design using procedural content generation. The YAML-based GDY language also reduces barriers in language and framework choices, as YAML is a widely supported standard.
We provide two sets of RL experiments to make a baseline for future work. The first experiment trains an RL agent separately on different levels of several games with various configurations of observability made possible with Griddly. The second experiment trains a single agent on a few levels of a particular game and evaluates on a few unseen levels.
In total, a baseline of 156 experiments is given. Both experiment sets show that although the basic RL agents converge, sometimes to the maximum score, the majority of the environments are not solved and the agents fail to generalize to new environments. We believe this provides a strong incentive for future experimentation.