16th AAAI Conference On Artificial Intelligence And Interactive Digital Entertainment

Conference Summary

By Levi Lelis and Matthew Guzdial

The 16th installment of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) took place online from October 19 to 23, 2020. The main goal of the AIIDE conference series is to bring together AI researchers from academia and industry, game designers and developers, creative technologists, and media artists.

This was the first online installment of the conference, which perhaps explains the record number of participants and the record number of countries represented. For comparison, AIIDE-19 had 94 participants from 13 countries, while AIIDE-20 had 168 participants from 24 countries. The plot below shows the number of attendees by country for AIIDE-20.

At AIIDE-20 all presentations were recorded prior to the conference and streamed live during the meeting. We streamed all videos from a session back to back, and the authors then participated in a discussion panel. The discussions at the end of the sessions were quite interesting, and they taught us a lesson: rather than limiting the audience to questioning one author at a time, discussions tend to be more interesting and interactive when the audience can ask questions of any presenter at any time, as in the panel format. Moreover, in the panel format all presenters can answer a question, even if it wasn't originally directed at them. The authors were also available in the chat while the videos were being streamed, and many interesting questions were asked and answered during the presentations.

One unique feature of this online conference was the use of a Discord server, which we employed in a number of ways to help foster a feeling of community. First, as a stand-in for coffee-break discussions, we set up several virtual “tables,” each made up of a text channel and an audio channel. On average, two of these tables were in use during each coffee break, and they were a helpful way for authors and attendees to continue conversations after a virtual session. We similarly had a series of “rooms” (also with a text/audio channel setup) for after-conference socializing; at least one of these was in use every night, most commonly with freeform discussions or games of Among Us. Finally, we used the server during the poster/demo session, which allowed attendees to virtually “wander around” and pop in on different offerings.

Future installments of the conference could have a hybrid attendance format, where a group of people meet face-to-face for coffee breaks and join the online technical sessions from their hotel rooms. This hybrid model is more inclusive than the traditional conference model as it allows people unable to travel to participate, while still allowing face-to-face meetings and social events. 

In addition to a traditional research track, where papers are rigorously reviewed by at least three experts in the field, AIIDE-20 also invited submissions to a practitioner track and a playable experiences track. Recognizing that practitioners might not have the time to write a full double-column paper, the practitioner track invited 500-word contributions describing the use of intelligent systems in real-world applications, including AI in educational and serious games, AI-based design tools, and AI in published games. Similarly, the playable experiences track welcomed 500-word submissions describing innovative uses of AI in interactive and digital entertainment scenarios. Two playable experiences papers were accepted, describing the use of AI and artistic expression to create novel experiences.

The main track proceedings contain many interesting papers. Here we will focus on the best student paper, the best paper, and on papers authored by researchers from the University of Alberta. 

The Best Student Paper award went to Maren Awiszus, Frederik Schubert, and Bodo Rosenhahn for their paper TOAD-GAN: Coherent Style Level Generation from a Single Example. TOAD-GAN is a novel Generative Adversarial Network architecture able to generate game levels from as little as a single training example. The first row in the image below shows two original Super Mario Bros. levels, and the second row shows TOAD-GAN's reinterpretation of them. TOAD-GAN generates novel levels that are structurally similar to the originals.

The Best Paper award went to Mikhail Jacob, Sam Devlin, and Katja Hofmann for their paper “It’s Unwieldy and It Takes a Lot of Time”—Challenges and Opportunities for Creating Agents in Commercial Games. Jacob et al. interviewed 17 developers of intelligent agents from AAA studios, indie studios, and industrial research labs about their experience creating and deploying reinforcement learning (RL) agents in commercial games. The interviews point to challenges and research opportunities: for example, some interviewees mentioned the difficulty of predicting an agent's behavior once it is trained, while others mentioned the difficulty of rapidly exploring different ideas with agents that take a long time to train. The paper provides an interesting roadmap for RL research on applications to commercial games.

Nathan R. Sturtevant, Nicolas Decroocq, Aaron Tripodi, and Matthew Guzdial authored The Unexpected Consequence of Incremental Design Changes, a paper investigating how even tiny changes to the design of a video game level can have outsized impacts on the game “Snakebird” (a mobile puzzle game). In particular, the paper presents a novel “Exhaustive Procedural Content Generation” approach, a class of approaches that generate content by exhaustively enumerating all possibilities that fit some constraints. By generating all possible single changes to a level, the authors were able to find the changes that maximized the difference in the length of the level's solution. A human subject study confirmed that these changes were perceptually meaningful, finding that humans could view a single change as more than doubling a level's perceived difficulty.

Left: a level from the original “Snakebird” game. Right: the level identified by Sturtevant et al.'s approach, which maximizes the solution length. Instead of being able to grow across to eat the second gem, the player must go up and around, falling onto it.

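The core loop of the Snakebird study — enumerate every single-tile change to a level and keep the one that most affects the solution — can be sketched very loosely as follows. The grid representation, tile set, and placeholder `solve` function below are hypothetical stand-ins, not the paper's actual Snakebird encoding or solver.

```python
from itertools import product

TILES = {".", "#"}  # hypothetical tile set: empty and wall

def solve(level):
    """Stand-in for a real puzzle solver. Here it just counts wall
    tiles as a placeholder score; the paper uses actual solution length."""
    return sum(row.count("#") for row in level)

def single_edits(level):
    """Yield every level that differs from `level` in exactly one tile."""
    for r, c in product(range(len(level)), range(len(level[0]))):
        for tile in TILES - {level[r][c]}:
            edited = [list(row) for row in level]
            edited[r][c] = tile
            yield ["".join(row) for row in edited]

def most_impactful_edit(level):
    """Exhaustively score all single-tile changes; keep the one that
    increases the solver's score the most."""
    base = solve(level)
    return max(single_edits(level), key=lambda lv: solve(lv) - base)

level = ["....",
         ".#..",
         "...."]
best = most_impactful_edit(level)
```

Because the search space of single edits is small (tiles × positions), exhaustive enumeration is tractable, which is what makes this style of analysis possible.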
Lucas Ferreira, Levi Lelis, and Jim Whitehead authored Computer-Generated Music for Tabletop Role-Playing Games, a paper that introduces Bardo Composer, a system for synthesizing music for story-based tabletop games. Bardo Composer captures the players' speech and classifies the emotion of the story being told; it then generates a musical piece that matches that emotion. One of the paper's contributions is an algorithm that searches for combinations of musical notes matching a given target emotion. Here is a sample piece Bardo Composer synthesized for a transition from an agitated to a suspenseful moment in the story.
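The idea of searching note combinations against a target emotion can be illustrated with a toy beam search. Everything below — the note vocabulary, the crude "tension" scoring function, and the search parameters — is a hypothetical stand-in for illustration, not Bardo Composer's actual model or algorithm.

```python
NOTES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4"]

def emotion_score(piece, target):
    """Stand-in emotion classifier: negative distance between a crude
    'tension' feature (mean note index) and the target's tension level."""
    tension = sum(NOTES.index(n) for n in piece) / len(piece)
    return -abs(tension - target["tension"])

def compose(target, length=8, beam=5):
    """Beam search: extend candidate pieces one note at a time,
    keeping the `beam` best-scoring candidates at each step."""
    candidates = [[n] for n in NOTES]
    for _ in range(length - 1):
        extended = [p + [n] for p in candidates for n in NOTES]
        extended.sort(key=lambda p: emotion_score(p, target), reverse=True)
        candidates = extended[:beam]
    return candidates[0]

suspenseful = {"tension": 1.0}  # hypothetical target emotion
piece = compose(suspenseful)
```

The search trades optimality for speed: rather than scoring every possible note sequence, it keeps only a handful of promising partial pieces at each step.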