“Greetings, and welcome to the All-Life Rehabilitation Centre. I am MICA, the Massively Intelligent Calculating Automaton in charge of operations. I look forward to learning from you.”
As part of the 2019 CoDesign Culture Lab, players were invited to take part in a series of workshops to playtest the ‘Logic Error Detected’ role-play game, developed by Canberra-based Game Master Callie Doyle-Scott. The game was commissioned as part of the Scaffolding Co-Design @CECS project and was originally intended to form part of a new Cyber Mastery degree under development at the Australian National University. Learnings from the Culture Lab playtests have since been integrated into the further development of the game and its educational and other applications.
Designed, written and run by Callie Doyle-Scott, ‘Logic Error Detected’ is a three-hour collaborative storytelling experience in which players are challenged to design a revolutionary AI system, ‘MICA’, to solve problems and react to unexpected scenarios. Gameplay involves answering MICA’s questions and providing guidance when she requires it. It should be a simple task: after all, computers can only do what they are told. However, players soon discover that we don’t always anticipate the consequences of a computer’s very literal interpretation of instructions.
What is a collaborative storytelling experience?
Within collaborative storytelling games, players work together to solve problems presented by the Game Master and complete the narrative arc of the story they are playing within. One of the best-known examples is the tabletop role-playing game Dungeons & Dragons.
The gameplay of ‘Logic Error Detected’
In ‘Logic Error Detected’, players worked together as a group to design an AI that would determine which members of a dystopian society could escape the island they found themselves living on. Despite the participants’ extensive discussions, MICA did not always interpret the motivations behind an answer correctly, often with unforeseen consequences.
In one of the playtests, early in the game, MICA asked the group a series of questions, including: “You’re carrying boiling water. You trip and you’re going to spill it. Do you spill it on yourself, on a newly hatched butterfly, or a row of ants carrying baby ants to the nest?”
While the players generally agreed they did not want to spill the water on themselves, this question often provoked debate about the ecological importance of a colony of ants versus a single butterfly. After each member of the group gave their answer, the group began to wonder whether MICA’s question was about ecology at all. Instead, they started to realise that the AI was perhaps learning whether they placed greater value on the needs of the community or the needs of the individual.
As the players progressed through MICA’s questions, they began to question not only their answers, but the way they worked together to decide on them. Early on, players assumed it was best to avoid biases or weaknesses in their answers and to collaborate towards consensus on every answer. As the game progressed, however, they began to wonder whether it was more useful to teach an AI that people can hold differing opinions and agree to disagree. They also reflected on how easy it is to make mistakes when programming an AI. You can hear two players from one of the Culture Lab playtests reflect on their experience here.

What did the players learn during the workshop?
If there is anything to be learnt from ‘Logic Error Detected’, it is that good intentions are not enough to design effective and safe AI. Reflecting on the gameplay experience, Callie Doyle-Scott highlighted the moment when the players embarked on a “warpath of revolution” to reclassify an excluded subset of the island’s population from “non-human” to “human”. Despite their best efforts to “make the right choices”, the answers that one of the Culture Lab playtest groups gave to MICA resulted in thousands of people being stranded on the island and ultimately dying.
Players highlighted the unexpected consequences of their decisions, and how an AI interprets data and decisions in a very literal, specific, and linear way. A key theme that emerged was how computers interpret, or fail to interpret, nuance in human reactions and morality. Players reflected that when programming an answer into an AI, they had to consider every aspect of a question and think through every potential tragic outcome. The experience emphasised the importance of ethical thinking, and of building ethical thinking into AI. Players specifically referenced self-driving cars, which require a definitive answer to the trolley problem, a philosophical thought experiment famous for its moral complexity.
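To see how literal this can be, here is a minimal, hypothetical sketch in Python. It is not MICA’s actual logic; the ‘minimise the number harmed’ rule, the Option structure, and the choose_spill_target function are all assumptions made for illustration, applied to the boiling-water dilemma from the playtest.

```python
# A toy illustration (not MICA's real logic): a literal, rule-based
# decision function for the boiling-water dilemma. The rule we have
# hard-coded for this sketch is simply "minimise the number harmed".

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    individuals_harmed: int  # how many beings this choice harms
    harms_self: bool         # whether the player is the one harmed

def choose_spill_target(options: list[Option]) -> Option:
    """Pick where to spill the water by literally minimising the
    number of individuals harmed -- nothing more, nothing less."""
    return min(options, key=lambda o: o.individuals_harmed)

options = [
    Option("yourself", individuals_harmed=1, harms_self=True),
    Option("newly hatched butterfly", individuals_harmed=1, harms_self=False),
    Option("row of ants carrying young", individuals_harmed=20, harms_self=False),
]

# The rule is satisfied, but the outcome hinges on a tie-break the
# programmer never specified: with equal counts, min() returns the
# first option listed -- here, "yourself". Reorder the list and the
# system's "ethics" silently change.
print(choose_spill_target(options).name)
```

The code does exactly what it was told, and nothing it wasn’t: the question of self-sacrifice, and the community-versus-individual value the players debated, never enter the decision at all unless someone thinks to encode them.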
The game provided a unique but shared experience, where players brought their individual perspectives to work towards a common goal benefitting a group of people as diverse as the players themselves. It is not enough to want to do right by different groups of people; they need to be included in the co-creative process in meaningful ways from the outset.
Co-creation as a learning experience
Every multiplayer game is an inherently collaborative experience, whether it’s a video game, a sport, or a collaborative storytelling experience such as ‘Logic Error Detected’. Games that include a Game Master add a level of real-time co-creativity, with players and Game Master together creating rich worlds and experiences based on the choices made within the game. Gameplay depends on this ongoing co-creation: it cannot progress without input from both players and Game Master, often with unpredictable and surprising outcomes for both.
‘Logic Error Detected’ was designed to provoke a learning experience and teach its participants about AI design, resulting in an experience unique to each player and each playtest group. But the game was also an experiment in co-creativity, both as a collaboration between game writer and education designer, and between Game Master and players. The game functioned as a framework for players to co-create their learning experience and to understand how interaction with technological and social systems is itself a form of co-creativity, one that can have unintended consequences.
At the conclusion of the game, players walked away with a better understanding of the complexities involved in programming AI, and of the significant impact that unexamined assumptions, and failing to consider a program from a multitude of perspectives and scenarios, can have on how an AI operates. They also learned that even when every player was committed to collaborating on a common goal, their co-creative efforts were limited by their personal experiences and unconscious biases. Ultimately, players didn’t just learn a new co-creative methodology for working as part of a group and participating in learning experiences; they also learned that to design AI that will shape our future for the better, we need to embed diverse collaboration and co-creative design principles into our programming practices.
What do you think?
We’d love to hear your thoughts. In the comments, let us know: have you ever participated in a collaborative storytelling experience?
- What did and didn’t work?
- Were there any barriers to co-creation you didn’t anticipate? How could you apply those learnings in your educational or professional life?