Usability Testing

Overview

To further the development of Through Other Eyes, the team conducted user testing with 6 participants on our mid-fidelity prototype and 11 on our high-fidelity prototype. To make the testing as effective and insightful as possible, we intentionally recruited testers from a range of occupations and ages (11 to 59).

The overarching goal of the testing was to gather the insights and raw feedback needed to keep iterating and refining the prototype, ensuring that Through Other Eyes would be accessible to its target audience and, ultimately, a game people would want to share. We employed two moderated user testing techniques: Concurrent Think Aloud, which let us observe how users experienced the prototype as they played through the story, and Retrospective Probing, which gave us in-depth insight into how users felt about the experience after playing.

COVID-19 greatly limited our ability to test with a larger audience, restricting us to family members or remote sessions. Under normal circumstances, we believe the team could have tested the prototypes with a larger and more diverse sample of our target audience.

Approach & Activities

During testing sessions, testers played through the entire game. Some were assigned a specific character, while others were free to choose. Crucially, testers were then asked to play through again, making different choices to force a different outcome. Because the story is built around adapting to player choice, we deliberately left that element of control in the testers' hands. After playing through the story, testers were also asked to explore the menus of Through Other Eyes.


Example Moderated Mid-Fidelity Testing Session

Key insights from user testing

The user interface contained a few issues that undermined the streamlined experience we intended to deliver. These were one-time stumbling blocks for users, but the team understands that first impressions are often the most important. Issues raised by testers included menu buttons that were too hard to press, unclear context screens, and confusing screen-to-screen navigation.

Functionality fell short of testers' expectations, which were shaped by their experience with similar products. Issues raised included confusing minigame screens, no way to go back in the story, the lack of a save system, and a jarring jump to the main screen when the story ends.

The narrative contained a few issues, both in how it was presented to users and in the content itself. Testers noted a number of broken screens, continuity errors, and grammar and spelling mistakes. Testing also uncovered choices that ended the game immediately, providing no real experience.

How did it influence the next stage?

Our mid-fidelity testing provided critical feedback from our target audience that greatly shaped the design and development of the high-fidelity prototype. For example, the mid-fidelity prototype presented each section of the story on a single screen, with minimal visual difference between character textboxes; the feedback made it clear we needed more than one way to differentiate the characters and context boxes. This directly led to several iterations of a new card interface, which radically redesigned each screen to feature a single dialogue or context box rather than the 3 to 20 found in the mid-fidelity version.

The key insights from user testing also led to a redesign of the core context and dialogue boxes. We iterated on the coloured-box design, crucially adding character illustrations and character names at the top of every dialogue box. When we tested this new design, users responded positively, noting that the characters and backgrounds looked visually appealing.

Finally, a handful of users noted that the interface didn't make it clear how to progress through the story. In response, we added a distinctive arrow to every dialogue and context box, which solved the issue. Remaining minor issues, such as broken links, poor grammar, misspellings, and continuity errors, were fixed quickly and immediately updated in the prototype.
