For the past few decades, video games have been the focus of widespread concerns regarding violence, addiction, and sexist content. In the United States, video games are blamed for high rates of gun violence. The World Health Organization (WHO) has claimed that excessive gaming is tantamount to a disease. Activists worldwide express concerns that the sexist content of some games may lead to sexism and misogyny in real life. But is there good research evidence to support these claims?
Lessons learned from the "video game violence" research era are instructive. The term "violent video game" never held conceptual, scientific value, yet it was used as an emotional lodestone. This resulted in overstatements of evidence by politicians, scholars, and professional guilds such as the American Psychological Association. In some cases, hyperbole continued despite mounting evidence that action-oriented games are associated with, if anything, declining trends in violence, and even the evidence for effects on mild aggression remained inconsistent.
Nonetheless, this pattern has repeated itself in discussions of "sexist" games and their impact on real-life sexism, as well as in the controversy over the WHO's "gaming disorder." In both cases, activists for positions that disparage games pushed forward aggressively despite evidence that might have advised a more cautious approach. It is time to reassess games research and how a societal culture of moral panic has shaped it over the past few decades. These cultural issues can be repaired, first, by understanding the historical patterns of moral panic that scholars contributed to and, second, by embracing a culture of open science.
Human expression and connection fuel our evolutionary humanity, curiosity, and passion. So how far can we go? Pell is designing modes for following the body's natural edge to the abyss of space. New works, including the parallel design of human-robotic performance protocols undersea and human-cinematic robot performance onstage, have inspired new modes of trans-disciplinary dialogue for understanding affective visualization applications in performing astronautics. Technical concepts derived through play and performance on EVA (extravehicular activities, or spacewalks) have led to the development of technical configurations supporting the Spatial Performance Environment Command Transmission Realities for Astronauts, SPECTRA (2018). Various SPECTRA experiments on Moon/Mars analogue missions have expanded these protocols, for example the confined/isolated Lunar Station analogue mission simulations [Lunares 3 Crew], with transmission of LiDAR imaging and the choreographer's moves for an artist-astronaut's interpretation on the analogue Crater. Pell has demonstrated that interactions with SPECTRA systems have a direct impact on the artist-astronaut's range of spatial awareness, orientation, geographic familiarization, and remote and in-situ operational training for amplifying performance capabilities on EVA. The significance of these new approaches is the widening of the definition of both technical and cultural activities in astronautics through play and performance. Other research, from cinematic robotics and mixed realities (including virtual reality, LiDAR projects, and big-data immersive visualisation platforms) to an astronaut dance, concerns designing systems for improved performance and cultural engagement: exploring the critical pathways, discourse, and cultural practice surrounding space as inspiration for new works of art, and new ways of working with art and space, during a unique mission simulation. These opportunities also support safe forums for reflexive analysis of our human ambitions, and indeed our assumptions, about a human return to the Moon and a future extra-terrestrial culture. SPECTRA tools translate visions for architecting a new era of spaceflight. Outcomes also signal new research and impact pathways for the artist, astronaut, and avatar in space exploration and discovery.
Voice interaction is increasingly common in digital games, but it remains a notoriously difficult modality to design a satisfying experience for. This is partly due to limitations of speech recognition technology, and partly due to the inherent awkwardness we feel when performing some voice actions. We present a pattern language for voice interaction elements in games, to help game makers explore and describe common approaches to this design challenge. We define 25 design patterns, based on a survey of 449 videogames and 22 audiogames that use the player's voice as an input to affect the game state. The patterns express how games frame and structure voice input, and how voice input is used for selection, navigation, control and performance actions. Finally, we argue that academic research has been overly concentrated on a single one of these design patterns, due to an instrumental research focus and a lack of interest in the fictive dimension of videogames.
The intersection of the physically active human body and the technology that supports it is in the limelight in HCI. Prior work mostly supports exertion by offering sensed digital information about the exertion activity. We focus on supporting exertion during the activity through sensing and actuation, facilitating the exerting body and the bike acting on and reacting to each other in what we call 'integrated exertion'. We draw on our experiences of designing and studying "Ava, the eBike", an augmented eBike that uses the exerting user's bodily posture to drive actuation. As a result, we offer four design themes for designers to analyze integrated exertion experiences: Interacting with Ava, Experiencing Ava, Reduced Body Control Over Ava, and Ava's Technology. We also offer seven practical tactics to support designers in exploring integrated exertion. Our work on integrated exertion contributes to engaging in new ways with the physically active human body.
Trading card games challenge players to select a card from their personal deck to compete against cards from an opponent's deck, with the outcome determined by rules specific to the game. Players desire that the cards in their decks offer meaningful choice relative to those held by the opponent, since one player dominating removes all challenge from the competition. The issue of determining the existence and extent of meaningful strategies during competitive selection processes is common to a range of other contexts, including picking units for combat in real-time strategy games such as StarCraft II. The approach described here models game outcomes as a skew-symmetric matrix and presents an algorithm for excluding dominated and dominating units, which then further ranks the remaining meaningful choice options. A metric, band size, quantifies the degree to which subsets of units can still contribute to meaningful game play. This process is applied to a single-unit combat scenario using the StarCraft II rules to identify and rank a core set of 39 combat units that only offer meaningful choice within a limited neighbourhood of 12 units around each unit.
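To make the exclusion step concrete, the following is a minimal sketch of dominance pruning over a skew-symmetric outcome matrix. The weak-dominance criterion and function names here are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def prune_dominated(payoff: np.ndarray) -> list:
    """Iteratively drop units that another unit weakly dominates.

    payoff[i, j] > 0 means unit i beats unit j; skew-symmetry implies
    payoff[j, i] == -payoff[i, j].
    """
    alive = set(range(payoff.shape[0]))
    changed = True
    while changed:
        changed = False
        for i in sorted(alive):
            for k in alive - {i}:
                others = alive - {i, k}
                # k dominates i if it does at least as well against
                # every other surviving unit.
                if others and all(payoff[k, j] >= payoff[i, j] for j in others):
                    alive.discard(i)
                    changed = True
                    break
            if changed:
                break
    return sorted(alive)
```

The surviving indices are the units that still offer meaningful choice; the band-size metric would then be computed over this reduced set.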
Game designers working with Head-Mounted Displays (HMDs) are usually advised to avoid causing disorientation in players. However, we argue that disorientation is a key element of what makes "vertigo play" (such as spinning in circles until dizzy, balancing on high beams, or riding theme park rides) engaging. We therefore propose that designers can take advantage of the disorientation afforded by HMDs to create novel vertigo play experiences. To demonstrate this idea, we created a two-player game called "AR Fighter", in which two HMD-wearing players attempt to affect each other's balance. A study of AR Fighter (N=21) revealed three recurring vertigo-experience themes for researchers to analyse and associated design tactics for practitioners to create digital vertigo play experiences. With this work we aim to guide others in using disorientation as an intriguing game element to create novel digital vertigo play experiences, broadening the range of games we play.
eSports matches offer fast-paced entertainment for millions of viewers worldwide, but little is known about how to support a positive viewer experience. One of the key challenges related to popular real-time eSports games (e.g., multiplayer online battle arena games or first-person shooters) is empowering viewers to effectively follow rapid gameplay. In our paper, we address this challenge through the design of information dashboards to improve spectator insight and experience in League of Legends and Counter Strike: Global Offensive. Based on surveys that received a total of 788 responses, we designed information dashboards that we evaluated with 18 experienced eSports viewers. Our results show that dashboards contribute to spectator insight and experience, but that careful consideration is necessary to adequately manage in-game complexity and viewers' cognitive load, and to establish spectator trust in information dashboards through transparent design. Based on these findings, our paper formulates design goals for spectator dashboards and outlines key opportunities for future work.
Controller-based interaction is popular due to the increasing prevalence of console and couch-based games, but it is known to be slower and less accurate than aiming with a mouse. In this study we evaluated the performance of five interaction techniques for games to answer the question of whether gaze interaction can improve the performance of controller interaction. We compared mouse-only, controller-only, and gaze-only input with two commonly used gaze-and-controller hybrid interactions: gaze teleportation and gaze panning. We implemented a targeting game that resembled a Fitts' Law test to evaluate performance, effort, and preference. Our findings show that the mouse was the fastest technique and gaze was both the slowest and most error-prone. For the controller-based techniques, players preferred gaze teleportation over the other techniques; however, it only improved performance over the controller for targets that were small and far away.
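Because the targeting game resembles a Fitts' Law test, its data supports the standard Fitts analysis. A minimal sketch under our own naming assumptions (D is target distance, W target width, MT movement time):

```python
import numpy as np

def index_of_difficulty(D, W):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return np.log2(np.asarray(D, dtype=float) / np.asarray(W, dtype=float) + 1)

def fit_fitts(D, W, movement_time_s):
    """Fit MT = a + b * ID by least squares; returns (a, b)."""
    ID = index_of_difficulty(D, W)
    X = np.column_stack([np.ones_like(ID), ID])
    (a, b), *_ = np.linalg.lstsq(X, np.asarray(movement_time_s, dtype=float),
                                 rcond=None)
    return a, b
```

Comparing the fitted slope b across the five techniques is one way to quantify, for instance, why a technique helps mainly for small, distant targets.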
Players are increasingly viewing games as a social medium to form and enact friendships; however, we currently have little empirically informed understanding of how to design games that satisfy the social needs of players. We investigate how in-game friendships develop, and how they affect well-being. We deployed an online survey (N = 234) measuring the properties of games and social capital that participants experience within their gaming community, alongside indicators of the social aspects of their psychological well-being (loneliness, need satisfaction of relatedness). First, our findings highlight two strong predictors of in-game social capital: interdependence and toxicity, whereas cooperation appears to be less crucial than common wisdom suggests. Second, we demonstrate how in-game social capital is associated with reduced feelings of loneliness and increased satisfaction of relatedness. Our findings suggest that social capital in games is strongly and positively related to players' psychological well-being. The present study informs both the design of social games and our theoretical understanding of in-game relationships.
Virtual environments have been proven to be effective in evoking emotions. Earlier research has found that physiological data is a valid measurement of the emotional state of the user, and that being able to see one's physiological feedback in a virtual environment makes the application more enjoyable. In this paper, we investigate the effects of manipulating the heart rate feedback provided to participants in a single-user immersive virtual environment. Our results show that providing slightly faster or slower real-time heart rate feedback can alter participants' emotions more than providing unmodified feedback. However, altering the feedback does not alter real physiological signals.
Biofeedback holds great potential for augmenting game play, but research to date has focussed predominantly on single player games. This paper proposes an interactional approach, which emphasises how multiple players engage with biofeedback and one another to make sense of the feedback and to incorporate it into their game play. To explore this approach in the context of the dice game Mia, we designed AMIA (Augmented Mia), a prototype system that gives feedback on heart rate, skin conductance, and skin temperature on a player's hat or armband. A study with 21 participants showed that biofeedback was ambiguous, but nevertheless participants harnessed it as a hint about their opponents' strategies, as a means of distraction, as a handicap when players could not see their own feedback as it was presented on their hat, and as a point of connection with other players. We discuss the mechanisms underlying these interactions and present design opportunities along spatial, temporal, and compositional dimensions of biofeedback that encourage and heighten social interaction.
This paper reports on a study of the interaction skills of forty-two children, between the ages of eighteen and forty-two months, in using touch devices. A majority of the children had prior experience with touch devices. Continuous swiping, discrete touching, and directional swiping were found to be the easiest actions to complete. The drag interaction was more difficult, but most children could complete it. The pinch, stretch, and rotate interactions were the most difficult for the children to make successfully. Common errors included unintended movement during interactions, pressing too hard, and lack of precision due in part to the target size. This study expands the domain knowledge about toddlers' ability to interact with touch devices, allowing better creation and selection of interfaces for them to use.
Digital games can offer rich social experiences and exciting narratives through the integration of interesting, believable companion characters. However, if companions fall short of players' expectations, they may instead spoil the whole experience, especially if they continuously accompany the player throughout the game. In this paper, we analyze the design space of companion characters in games. We discuss the influence of companions on players' experiences and expectations. Furthermore, we present the results of an online survey (N = 237) providing insights into players' opinions and perceptions regarding game companions. According to the survey results, players attach great importance to a companion's personality and its integration into the story of the game, and expect companion behavior to be context-sensitive, autonomous, and proactive. Altogether, our work aims at supporting the development of companions by highlighting the design decisions that have to be made and by suggesting ways to improve companions' believability.
Augmented sandboxes have been used as playful and educative tools to create, explore, and understand complex models. However, current solutions lack interactive capabilities, missing more immersive experiences such as exploring the sand landscape from a first-person perspective. We extend the interaction space of augmented sandboxes into virtual reality (VR) to offer a VR environment containing a landscape that the user designs by interacting with real sand while wearing a virtual reality head-mounted display (HMD). In this paper, we present our current VR-sandbox system, consisting of a box with sand, triple Kinect depth sensing, a virtual reality HMD, and hand tracking, as well as an interactive world-simulation use case for exploration and evaluation. Our work explores the important and timely topics of how to integrate rich haptic interaction with natural materials into VR and how to track and present real physical materials in VR. In a qualitative evaluation with nine experts from computer graphics, game design, and didactics, we identified potentials, limitations, and future application scenarios.
Research has shown that dynamic difficulty adjustment (DDA) can benefit player experience in digital games. However, in some cases it can be difficult to assess when adjustments are necessary. In this paper, we propose an emotion-based DDA approach that uses self-reported emotions to inform when an adaptation is necessary. In contrast to earlier affect-based DDA techniques, we use parameterized difficulty to define difficulty levels and select the suitable level based on players' frustration and boredom. We conducted a user study with 66 participants investigating the performance of this approach and its effects on player experience and perceived competence. The study further explored how self-reports of emotional state can be integrated into dialogs with non-player characters to reduce interruption. The results show that our emotion-based DDA approach works as intended and yields a better player experience than constant or increasing difficulty. While the dialog-based self-reports did not positively affect player experience, they yielded high accuracy. Together, these findings suggest our approach represents a useful tool for game developers to implement reliable DDA easily.
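As a concrete illustration of how self-reports could drive level selection, here is a hedged sketch; the thresholds, scales, and function names are our assumptions rather than the study's implementation:

```python
def adjust_difficulty(level: int, frustration: float, boredom: float,
                      n_levels: int = 5) -> int:
    """Pick the next parameterized difficulty level (0 = easiest).

    frustration and boredom are self-reported ratings normalized to [0, 1].
    """
    if frustration > 0.6:              # struggling: step difficulty down
        return max(0, level - 1)
    if boredom > 0.6:                  # under-challenged: step difficulty up
        return min(n_levels - 1, level + 1)
    return level                       # emotions in the target band: hold
```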
During game play, a variety of undesired emotions can arise, impeding players' positive experiences. Adapting game features based on players' emotions can help to address this problem, but necessitates a way to detect the current emotional state. We investigate using input parameters on a graphics tablet in combination with in-game performance to unobtrusively detect players' current emotional state. We conducted a user study with 48 participants to collect self-reported emotions, input data from the tablet, and in-game performance in a serious game teaching players to write Japanese hiragana characters. We synchronized the data, extracted 46 features, trained machine learning models, and evaluated their performance in predicting levels of valence, arousal, and dominance, modeled as a seven-class problem. The analysis shows that random forests achieve good accuracies, with F1 scores of .567 to .577 and AUC of .738 to .740, while using input features or in-game performance alone leads to markedly lower performance. Finally, we propose a game architecture that reacts to undesired emotion levels through adaptive content generation in combination with emotion recognition.
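The modeling step can be pictured as follows; the feature files and hyperparameters are illustrative assumptions, with only the 46-feature input and seven-class labels taken from the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.load("features.npy")   # hypothetical file: (n_samples, 46) features
y = np.load("valence.npy")    # hypothetical file: labels in {0, ..., 6}

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1_macro")
print(f"macro-F1: {scores.mean():.3f}")
```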
Livestreamed audience participation games (APGs) allow stream viewers to participate meaningfully in a streamer's gameplay. However, streaming interfaces are not designed to meet the needs of audience participants. To explore the design space of APGs, we provided three game development teams with an audience participation interface development toolkit. Teams iteratively developed and tested APGs over the course of ten months, and then reflected on common design challenges across the three games. Six challenges were identified: latency, screen sharing, attention management, player agency, audience-streamer relationships, and shifting schedules. The impact of these challenges on players was then explored through external playtests. We conclude with implications for the future of APG design.
Empowerment of movement through superhuman strength and flexibility is a staple of action video game design. However, relatively little work has examined such empowerment in the context of Virtual Reality and exergames, especially beyond the most obvious parameters such as jumping height and locomotion speed. We contribute a controlled experiment (N=30) on exaggerating avatar flexibility in a martial arts kicking task. We compared different settings for a nonlinear mapping from real to virtual hip rotations, with the aim of increasing the avatar's range of movement and kicking height. Our results show that users prefer medium exaggeration over realistic or grossly exaggerated flexibility. Medium exaggeration also yields significantly higher kicking performance as well as perceived competence and naturalness. The results are similar in both first- and third-person views. To the best of our knowledge, this is the first study of exaggerated flexibility in VR, and the results suggest that the approach offers many benefits to VR and exergame design.
To navigate beyond the confines of often limited positional tracking space, virtual reality (VR) users need to switch from natural walking input to a controller-based locomotion technique, such as teleportation or full locomotion. Overloading the hands with navigation functionality has been considered detrimental to performance, given that in many VR experiences, such as games, controllers are already used for tasks such as shooting or interacting with objects. Existing studies have only evaluated virtual locomotion techniques using a single navigation task. This paper reports on the performance, cognitive load demands, usability, presence, and VR sickness occurrence of two hands-busy (full locomotion/teleportation) and two hands-free (tilt/walking-in-place) locomotion methods while participants (n=20) performed a bimanual shooting task combined with navigation. Though hands-free methods offer higher presence, they do not outperform hands-busy locomotion methods in terms of performance.
Game and player analysis would be much easier if user interactions were electronically logged and shared with game researchers. Understandably, though, sniffing software is perceived as invasive and a risk to privacy. To collect player analytics from large populations, we look to the millions of users who already publicly share video of their game playing. We found that someone with experience of playing a specific game can watch a screencast of someone else playing and, though the process is labor-intensive, infer approximately what buttons and controls the player pressed, and when. We seek to automatically convert video into such game-play transcripts, or logs.
We approach the task of inferring user interaction logs from video as a machine learning challenge. Specifically, we propose a supervised learning framework in which a neural network is first trained on videos for which real sniffer/instrumented software collected ground-truth logs. Once our DeepLogger network is trained, it can infer log activities for each new input video featuring gameplay of that game. These user-interaction logs can serve as sensor data for gaming analytics, or as supervision for training game-playing AIs. We evaluate the DeepLogger system for generating logs from two 2D games, Tetris [23] and Mega Man X [6], chosen to represent distinct game genres. Our system performs as well as human experts for the task of video-to-log transcription, and could allow game researchers to easily scale their data collection and analysis up to massive populations.
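A minimal per-frame classifier in the spirit of DeepLogger might look like the sketch below; the architecture, sizes, and names are our assumptions, not the authors' exact network:

```python
import torch
import torch.nn as nn

class FrameToButtons(nn.Module):
    """Predicts, per video frame, which of n_buttons are pressed."""
    def __init__(self, n_buttons: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_buttons)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) -> per-button press probabilities
        z = self.features(frames).flatten(1)
        return torch.sigmoid(self.head(z))

# Training would minimize binary cross-entropy against the ground-truth
# logs captured by the instrumented software, as described above.
```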
Challenge plays a critical role in enabling an enjoyable and successful player experience, but not all dimensions of challenge are well understood. A more nuanced understanding of challenge and its role in the player experience is possible through assessing player psychophysiology. The psychophysiology of challenge (i.e. what occurs physiologically during experiences of video game challenge) has been the focus of some player experience research, but consensus as to the physiological markers of challenge has not been reached. To further explore the psychophysiological impact of challenge, three video game conditions -- varying by degree of challenge -- were developed and deployed within a large-scale psychophysiological study (n = 90). Results show decreased electrodermal activity (EDA) in the low-challenge (Boredom) video game condition compared to the medium- (Balance) and high-challenge (Overload) conditions, with a statistically non-significant but consistent pattern found between the medium- and high-challenge conditions. Overall, these results suggest electrodermal response increases with challenge. Despite the intuitiveness of some of these conclusions, the results do not align with extant literature. Possible explanations for the incongruence with the literature are discussed. Ultimately, with this work we hope to both enable a more complete understanding of challenge in the player experience, and contribute to a more granular understanding of the psychophysiological experience of play.
Virtual reality games are often centered around our feeling of 'being there'. That presence can be significantly enhanced by supporting physical walking. Although modern virtual reality systems enable room-scale motion, the size of our living rooms is not enough to explore vast virtual environments. Developers bypass that limitation by adding virtual navigation such as teleportation. Although such techniques are intended to extend rather than replace natural walking, what we often observe are non-moving players beaming to a location that is one real step ahead. Our navigation metaphor emphasizes physical walking by turning players into giants on demand to cover large distances. In contrast to flying, our technique proportionally increases the modeled eye distance, preventing cybersickness and creating the feeling of being in a miniature world. Our evaluations show significantly increased presence and walking distance compared to the teleportation approach. Finally, we derive a set of game design implications related to the integration of our technique.
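The core of the metaphor can be stated in a few lines; the parameter names below are ours, not the paper's API:

```python
def giant_camera_params(eye_height_m: float, ipd_m: float, scale: float):
    """Scale viewpoint height and inter-pupillary distance together.

    Scaling the modeled eye distance proportionally with the viewpoint is
    what makes the world read as a miniature rather than as flying.
    """
    return eye_height_m * scale, ipd_m * scale

# Example: a 1.7 m viewpoint with a 0.064 m IPD scaled 10x perceives the
# scene as a 1:10 miniature.
print(giant_camera_params(1.7, 0.064, 10.0))  # (17.0, 0.64)
```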
Despite lacking a formal peer-reviewed publication, the Game Experience Questionnaire (GEQ) is widely applied in games research, which might risk the proliferation of erroneous study implications. This concern motivated us to conduct a systematic literature review of 73 publications, analysing how and why the GEQ and its variants have been employed in current research. Besides inconsistent reporting of psychometric properties, we found that misleading citation practices with regards to the source, rationale and number of items reported were prevalent, which in part seem to stem from confusion over the "manuscript in preparation" status. Additionally, we present the results of a validation study (N = 633), which found no evidence for the originally postulated 7-factor structure of the GEQ. Based on these findings, we discuss the challenges inherent to the "manuscript in preparation" status and provide recommendations for authors, researchers, educators, and reviewers on how to improve reporting, citation and publication practices.
Ingestible sensors, such as capsule endoscopy and medication monitoring pills, are becoming increasingly popular in the medical domain, yet few studies have considered what experiences may be designed around ingestible sensors. We believe such sensors may create novel bodily experiences for players when it comes to digital games. To explore the potential of ingestible sensors for game designers, we designed a two-player game - the "Guts Game" - where the players play against each other by completing a variety of tasks. Each task requires the players to change their own body temperature measured by an ingestible sensor. Through a study of the Guts Game (N=14) that interviewed players about their experience, we derived four design themes: 1) Bodily Awareness, 2) Human-Computer Integration, 3) Agency, and 4) Uncomfortableness. We used the four themes to articulate a set of design strategies that designers can consider when aiming to develop engaging ingestible games.
Studies have shown that local latency -- delays between an input action and the resulting change to the display -- can negatively affect gameplay. However, these studies report several different thresholds (from 50 to 500 ms) at which local latency causes problems, and there is still little understanding of the relationship between the temporal requirements of a game and the effects of local latency. To help designers determine how lag will affect their games, we designed two studies that focus on specific atoms of interaction in simple games, and characterize both gameplay performance and experience under increasing local latency. We used the data from the first study to develop a simple predictive model of performance based on the amount of lag and the speed of the game. We then used the model to predict performance in the second study, and our predictions were accurate, particularly for faster games and higher levels of lag. Our work provides a new analysis of how local latency affects games, which explains why some game atoms are sensitive to latency, and which allows predictive modeling of when playability will suffer due to lag, even without extensive playtesting.
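As an illustration of the kind of model this enables, the sketch below fits performance against lag, game speed, and their interaction; the linear form and variable names are our assumptions, not the paper's fitted model:

```python
import numpy as np

def fit_performance_model(lag_ms, speed, error_rate):
    """Fit error_rate ~ b0 + b1*lag + b2*speed + b3*lag*speed by least squares."""
    lag_ms = np.asarray(lag_ms, dtype=float)
    speed = np.asarray(speed, dtype=float)
    X = np.column_stack([np.ones_like(lag_ms), lag_ms, speed, lag_ms * speed])
    coef, *_ = np.linalg.lstsq(X, np.asarray(error_rate, dtype=float), rcond=None)
    return coef

def predict_error(coef, lag_ms, speed):
    return coef[0] + coef[1] * lag_ms + coef[2] * speed + coef[3] * lag_ms * speed
```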
There is an increasing trend in HCI toward studying human-food interaction; however, most work so far focuses on what happens to food before and during eating, i.e., the preparation and consumption stages. In contrast, there is limited understanding and exploration of using interactive technology to support the embodied plate-to-mouth movement of food during consumption, which we explore through a playful design in a social eating context. We present Arm-A-Dine, an augmented social eating system that uses wearable robotic arms attached to diners' bodies for eating and feeding food. Extending the work to a social setting, Arm-A-Dine is networked so that a person's third arm is controlled by the affective responses of their dining partner. From a study of Arm-A-Dine with 12 players, we articulate three design themes: reduce bodily control during eating; encourage savoring by drawing attention to sensory aspects during eating; and encourage crossmodal sharing during eating, to assist game designers and food practitioners in creating playful social eating experiences. We hope that our work inspires further explorations around food and play that consider all eating stages, ultimately contributing to our understanding of playful human-food interaction.
Reflection is a core design outcome for HCI, and recent work has suggested that games are well suited for prompting and supporting reflection on a variety of matters. However, research about what sorts of reflection, if any, players experience, or what benefits they might derive from it, is scarce. We report on an interview study that explored when instances of reflection occurred, at what level players reflected on their gaming experience, as well as their reactions. Our findings revealed that many players considered reflection to be a worthwhile activity in itself, highlighting its significance for the player experience beyond moment-to-moment gameplay. However, while players engaged in reflective description and dialogic reflection, we observed little to no instances of higher-level transformative and critical reflection. We conclude with a discussion of the value and challenges inherent to evaluating reflection on games.
Exergames help senior players get physically active by promoting fun and enjoyment while exercising. However, most exergames are not designed to produce recommended levels of exercise that elicit adequate physical responses for optimal training in the aged population. In this project, we developed physiological computing technologies to overcome this issue by making real-time adaptations in a custom exergame based on recommendations for target heart rate (HR) levels. This biocybernetic adaptation was evaluated against conventional cardiorespiratory training in a group of active senior adults using a floor-projected exergame and a smartwatch to record HR data. Results showed that the physiologically augmented exergame led players to spend around 40% more time in the recommended HR levels compared to conventional training, avoiding over-exercising while maintaining good enjoyment levels. Finally, we have made our biocybernetic adaptation software tool available to enable the creation of physiologically adaptive videogames, permitting the replication of our study.
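The biocybernetic loop can be sketched as a simple zone controller; the zone bounds, step size, and names are our assumptions, not the released tool's API:

```python
def update_intensity(intensity: float, hr_bpm: float,
                     hr_low: float, hr_high: float,
                     step: float = 0.05) -> float:
    """Nudge exergame intensity (0..1) so HR stays in the recommended zone."""
    if hr_bpm < hr_low:        # under-exercising: make the game more demanding
        return min(1.0, intensity + step)
    if hr_bpm > hr_high:       # over-exercising: ease off
        return max(0.0, intensity - step)
    return intensity           # inside the zone: hold steady
```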
In this paper we present the design and evaluation of a first-person walker digital game called WORLD4. Walkers are a sub-genre of 3D games that typically include minimal player interaction, slow-paced game play, and ambiguous goals. Walking, rather than 'skill-based' mechanics, is the primary means of interaction in walker games. However, the design of these game environments is not well understood and challenges many accepted game design conventions. We designed WORLD4, a multi-dimensional first-person exploration game, to explore how ambiguity might support exploratory game play experiences in virtual environments. Fourteen participants playtested WORLD4, and analysis of the data identified three descriptive themes specific to the walker game player experience: 1) designing partial inscrutability; 2) shifting meaning; and 3) facilitating subversion of expectations. We use these themes to describe a set of prescriptive design strategies that may assist designers in designing for ambiguity in exploratory game environments.
The human-computer interaction (HCI) field includes a long-standing community interested in designing systems to enable user reflection. In this work, we present our findings on how interactive narratives and roleplaying can effectively support reflection. To pursue this line of inquiry, we conducted an exploratory, cross-sectional study evaluating an interactive narrative we created, Chimeria:Grayscale. To address issues present in prior HCI studies on the topic of reflection, we grounded our system design methodology and evaluations in theories drawn from clinical psychology and education. The results of our study indicate that Chimeria:Grayscale, the interactive narrative we created by operationalizing our system design methodology, enabled our study participants to critically self-reflect.
A lack of racial-ethnic diversity in game characters and limited customization options render in-game self-representation by players of colour fraught. We present a mixed-methods study of what players from different race-ethnicities require to feel digitally represented by in-game characters. Although skin tone emerged as a predominant feature among players from all racial-ethnic groupings, there were significant group differences for more nuanced aspects of representation, including hair texture, style, and colour, facial physiognomy, body shape, personality, and eye colour and dimension. Situated within theories of how race is conveyed, we discuss how developers can support players of colour to feel represented by in-game characters while avoiding stereotyping, tokenism, prototypicality, and high-tech blackface. Our results reinforce player needs for self-representation and suggest that customization options must be more than skin deep.
Due to a steady increase in popularity, player demands for (online) video game content are growing to an extent at which consistency and novelty in challenges are hard to attain, and challenges in balancing and error-coping accumulate. We introduce the concept of deep player behavior models, applying machine learning techniques to individual, atomic decision-making strategies. We discuss their potential application in personalized challenges, autonomous game testing, human agent substitution, and online crime detection. Results from a pilot study carried out with the massively multiplayer online role-playing game Lineage II provide a benchmark comparing hidden Markov models, decision trees, and deep learning. Data analysis and individual reports indicate that deep learning can be employed to provide adequate models of individual player behavior, with high accuracy for predicting skill use and a high correlation in recreating strategies from previously recorded data.
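A benchmark of this kind could be skeletonized as below; the models, features, and API use are illustrative assumptions (a hidden Markov model baseline, e.g., via hmmlearn, is omitted for brevity):

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

def benchmark(X, y):
    """X: per-decision feature vectors; y: the skill/action each player chose."""
    models = {
        "decision_tree": DecisionTreeClassifier(max_depth=10),
        "deep_mlp": MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=500),
    }
    return {name: cross_val_score(model, X, y, cv=5).mean()
            for name, model in models.items()}
```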
Despite rewards being seemingly ubiquitous in video games, there has been limited research into their impact on the player experience. Informed by extant literature, we built a casual video game to test the impact of reward types, both individually (i.e., rewards of access, facility, sustenance, glory, and praise) and by variety of rewards (i.e., no rewards, individual rewards, all rewards). No evidence was found for differing reward types impacting the player experience differently. However, evidence was found for a greater variety of rewards having a positive impact on interest and enjoyment. Regardless of the impact of variety of rewards, the individual characteristic of reward responsiveness was found to predict sense of presence as well as interest and enjoyment. This paper contributes to the application of reward types and the general understanding of the impact of rewards on the player experience, and discusses the importance of trait reward responsiveness in player experience evaluation.
Virtual reality games have grown rapidly in popularity since the first consumer VR head-mounted displays were released in 2016; however, comparatively little research has explored how this new medium impacts the experience of players. In this paper, we present a study exploring how user experience changes when playing Minecraft on the desktop and in immersive virtual reality. Fourteen players completed six 45-minute sessions: three played on the desktop and three in VR. The Gaming Experience Questionnaire, the i-Group presence questionnaire, and the Simulator Sickness Questionnaire were administered after each session, and players were interviewed at the end of the experiment. Participants strongly preferred playing Minecraft in VR, despite frustrations with using teleportation as a travel technique and feelings of simulator sickness. Players enjoyed using motion controls, but still continued to use indirect input under certain circumstances. This did not appear to negatively impact feelings of presence. We conclude with four lessons for game developers interested in porting their games to virtual reality.
This work focuses on studying players' behaviour in interactive narratives with the aim of simulating their choices. Besides sub-optimal player behaviour due to limited knowledge about the environment, the differences in each player's style and preferences represent a challenge when trying to make an intelligent system mimic players' actions. Based on observations of players' interactions with an extract from the interactive fiction Anchorhead, we created a player profile to guide the behaviour of a generic player model based on the BDI (Belief-Desire-Intention) model of agency. We evaluated our approach using qualitative and quantitative methods and found that the player profile can improve the performance of the BDI player model. However, we found that players' self-assessments did not yield accurate data for populating their player profiles under our current approach.
We present an exploratory study of analyzing and visualizing player facial expressions from video with deep neural networks. We contribute a novel data processing and visualization technique we call Affect Gradients, which provides descriptive statistics of the expressive responses to game events, such as player death or collecting a power-up. As an additional contribution, we show that although there has been tremendous recent progress in deep neural networks and computer vision, interpreting the results as direct read-outs of experiential states is not advised. According to our data, getting killed appears to make players happy, and much more so than killing enemies, although one might expect the exact opposite. A visual inspection of the data reveals that our classifier works as intended, and our results illustrate the limitations of making inferences based on facial images and discrete emotion labels. For example, players may laugh off the death, in which case the closest label for the facial expression is "happy", but the true emotional state is complex and ambiguous. On the other hand, players may frown in concentration while killing enemies or escaping a tight spot, which can easily be interpreted as an "angry" expression.
Smartphones support gaming, social networking, real-time communication, and individualized experiences. Children and parents often take part in digital experiences with distant friends while isolating themselves from co-present family members. We present MeteorQuest, a mobile social game system that aims to bring the family together for location-specific game experiences through physical play. The system supports group navigation by mapping screen brightness to the proximity of various in-game targets. Mini-game stages were designed together with interaction designers to encourage physical and social interaction between the players through group puzzles, physical challenges of dexterity, and proxemics play. We conducted an exploratory study with three families to gain insights into how families respond to mobile social game features. We studied their socio-spatial arrangements during play and navigation using the lens of proxemics play, and provide implications for the design of proxemic interactions and play experiences with families.
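The group-navigation cue reduces to a simple mapping; the range and floor values here are our assumptions, not the deployed system's constants:

```python
def brightness_for_distance(distance_m: float, max_range_m: float = 100.0,
                            min_brightness: float = 0.1) -> float:
    """Screen brightness in [min_brightness, 1]: brighter nearer a target."""
    closeness = max(0.0, 1.0 - distance_m / max_range_m)
    return min_brightness + (1.0 - min_brightness) * closeness
```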
Social play, and the role of technology in it, is a topic of central concern to the CHI PLAY and HCI communities. In this paper we provide an overview of philosophical, psychological, and sociological concepts and theories of social play and use these as a lens to conduct a literature review of research on interactive technologies in play contexts. Our chosen scope includes technologies that afford free play in groups of children within the same physical space. We identify how assumptions and stances about play influence which kinds of technologies are designed, which social elements are supported, and how success is defined and assessed. Finally, we propose a novel perspective on designing playthings that conceptualises them as boundary objects. We argue that such a perspective is particularly valuable when designing for heterogeneous groups of children and thus also has the potential to contribute towards designing effective roles for technologies in social inclusion.
The characteristics of virtual faces can be important factors for avatars and characters in video games. Previous work has investigated how users create their own avatars and determined the generally preferred characteristics of virtual faces. However, it is currently unknown how the preferred characteristics of avatar faces depend on players' age and gender, or whether these demographics can be predicted from the data provided by an avatar creation system. In this paper, we investigate the effects of gender and age on the facial characteristics of 4,215 virtual faces created by 1,475 participants (994 male, 481 female), mainly from Central Europe, using a web-based avatar creation system and the Caucasian average face. Our results show that with increasing age, men and women design increasingly realistic and less stylized avatars. We also found that young persons design more androgynous avatars, while adults increase the masculinity and femininity of their avatars; older women, however, decrease the femininity and increase the masculinity of stereotypical faces. Based on the data collected during the avatar creation process, we can predict the participants' gender with an accuracy of up to 91%, which opens up new use cases for video games and avatar creation systems. We discuss potential social, biological, and cognitive explanations for our results and contribute design implications for games and future avatar customization systems.
Avatars in virtual reality (VR) increase immersion and provide an interface between the user's physical body and the virtual world. Avatars thereby enable referential gestures, which are essential for targeting, selection, locomotion, and collaboration in VR. However, players of immersive games can have a virtual appearance that deviates from human-likeness, and previous work suggests that avatars can affect the accuracy of referential gestures in VR. One of the most important referential gestures is mid-air pointing. It has been shown that mid-air pointing is affected by systematic errors, which can be compensated for using different methods; however, it is unknown whether the avatar must be considered in corrections of the systematic error. In this paper, we investigate the effect of the avatar on pointing accuracy. We show that the systematic error in pointing is significantly affected by the virtual appearance but does not correlate with the degree to which the appearance deviates from perceived human-likeness. Moreover, we confirm that people rely only on their fingertip and not on their forearm or index finger orientation. We present compensation models and contribute design implications to increase the accuracy of pointing in VR.
Whenever someone chooses to study instead of going to a party, or forgoes dessert after dinner, that person is exercising self-control. Self-control is essential for achieving long-term goals, but it isn't easy. Games present a compelling opportunity to engage in tasks that allow a player to exercise and improve self-control, and consequently provide data about a person's cognitive capacity to exert self-control. However, exercising self-control can be effortful and depleting, which makes incorporating it into a game design that maintains engagement and quality of experience a challenge. We present the design of game mechanics for exercising and improving self-control, and an initial study demonstrating that games can be designed to engage a broad range of self-control processes without negatively affecting player engagement and experience. Our results also show that player performance is connected to trait-level self-control. We discuss how, for example, players with low trait self-control can therefore be identified, and how games intended to improve or exercise self-control can dynamically adapt to this information.
In recent years, a handful of powerful game engines, such as Unity 3D and Unreal 4, have been released to ease the production of high-quality computer games. Since these engines are free of charge for amateurs, their use has increased drastically worldwide, enabling small teams to create high-quality games rapidly. However, no such tools exist for blind people, enabling them to create games easily.
Blind Adventure closes this gap, enabling visually impaired or even blind people to create their own games. Games are strictly audio-based, and Blind Adventure works like a structured audio recorder, enabling game creators to record and lay out game levels. In this paper we introduce Blind Adventure, its gaming concepts, and its user interface.
The current trend of using applied games to engage users with mobile health (mHealth) delivery systems continues to build, yet research as to its effectiveness is still sparse. This study evaluates the effectiveness of using two different casual games to drive meaningful engagement with an mHealth app. MindMax was produced by the Australian Football League Players Association to improve the mental health and wellbeing of young Australians. Interviews (N = 42) and app usage data from organic users (N = 2679) suggest that MindMax has sustained users' interest in the app by using the casual games as rewards for engagement. In addition, there is evidence that the gamification strategy was successful in drawing users back to the wellbeing modules. Mixed experiences with the more difficult game suggest the potential usefulness of gameplay data to inform more personalized mHealth messaging. Further uses for applied games in mHealth applications are discussed.
Educational games are a creative, enjoyable way for students to learn about technical concepts. We present Entanglion, a board game that aims to introduce the fundamental concepts of quantum computing -- a highly technical domain -- to students and enthusiasts of all ages. We describe our iterative design process and feedback from evaluations we conducted with students and professionals. Our playtesters gave positive feedback on our game, indicating it was engaging while simultaneously educational. We discuss a number of lessons we learned from our experience designing and evaluating a pedagogical game for a highly technical subject.
Research shows that bespoke Virtual Reality (VR) laboratory experiences can affect users differently than traditional display experiences. With the proliferation of at-home VR headsets, these effects need to be explored in consumer media to ensure the public is adequately informed. As yet, the organizations responsible for content descriptions and age-based ratings of consumer content do not rate VR games differently from those played on TV. This could lead to experiences that are more intense or subconsciously affecting than desired. To test whether VR and non-VR games are differently affecting, and thus whether game ratings are appropriate, our research examined how participants' (n=16) experience differed when playing the violent horror video game "Resident Evil 7", viewed from a first-person perspective in PlayStation VR and on a 40" TV. The two formats led to meaningfully different experiences, suggesting that current game ratings may be unsuitable for capturing and conveying VR experiences. The public must be better informed by ratings bodies, but also protected by developers and researchers conscious of the effects their designs may have.