
OVERVIEW

The University of Michigan Multidisciplinary Program and Vectorform funded us to design a prototype for an indoor navigation tool for visually impaired individuals. The project is named Beyster Bluepath because it was prototyped and tested at the Bob & Betty Beyster Building on the University of Michigan campus.


DURATION

6 months

CLIENT

  • Vectorform

  • University of Michigan Multidisciplinary Program

MY ROLE

  • Research

  • UI/UX Concept Development

  • Journey mapping

  • Storyboarding

  • Wireframing

TEAM

  • User Experience: Aaron Tang, Asha Shenoy-Kudupi

  • Software: David Cao, Jonathan Hamermesh, Minh Tran

  • Hardware: Moran Guo, Sanika Kharkar


THE CHALLENGE

Buildings, cities, and maps are designed with sighted users in mind, which makes navigation challenging for the visually impaired. People with visual impairments typically memorize paths within campus buildings, but this keeps them from accessing unfamiliar parts of campus. Even in a familiar space, moving obstructions like improperly placed furniture take away their ability to navigate independently.

CLIENT GOALS

  • Increased accessibility to the Bob and Betty Beyster Building (BBB)

  • Easier and stress-free navigation of the BBB for the visually impaired

  • Scalable system that can be deployed in buildings across campus and beyond


THE OUTCOME

Beyster Bluepath is a mobile application that helps visually impaired users navigate independently inside a building. The app identifies a user's location within the building, takes destination requests in natural language, and provides turn-by-turn audio navigation instructions. It also alerts the user when there is an obstruction ahead of them, helping them walk with peace of mind.

Beyster Bluepath Concept Video


PROCESS

USER RESEARCH

We conducted 5 in-person interviews with individuals whose visual impairments ranged from low vision to total blindness. We supplemented our interviews with observational research, walking alongside individuals as they navigated university buildings and taking notes as they interacted with their environments.

Our user interviews focused on these four questions:

  1. What does a day look like in the life of a visually impaired individual?

  2. How do they currently navigate, and what obstacles prevent them from navigating independently?

  3. How do current systems address their needs? How are they lacking?

  4. What role does technology play in their lives?

To gather a broader perspective, we visited a low vision support group at the UM Kellogg Eye Center and conducted brief 15-minute interviews with 7 individuals after their weekly meeting. This helped us learn about their current tools and the obstacles they face in navigating independently. We also had the opportunity to observe a mobility instructor help a student familiarize themselves with a new building.

The major pain points identified through research were:

Feeling lost inside familiar buildings: Most of our interviewees recalled the sense of panic they felt when they thought they might be lost inside a building. This was particularly painful during non-peak hours, when they had to wait for someone to walk by.

Walking into furniture or bumping into hanging obstacles: Multiple users recalled bumping into something because they couldn't sense it with their cane, most often hanging obstacles and sometimes low-placed furniture. One of our interviewees was deeply frustrated by students who leave unattended boxes and equipment on the floor of the computer lab.

Walking longer than others: Interviewees often walked longer distances, either because they knew the location of only one restroom or water fountain on a floor, or because they had to stop to ask for directions. They expressed frustration at the extra time spent standing and walking as a result.

Having to depend on others

Navigating crowded environments

The insights gathered from user research were:

Heightened sense of hearing: A common theme among our interviewees was that they relied on sounds and changes in their environment, like the direction of wind flow, for navigation. We observed one interviewee walk seamlessly next to her mother by paying attention to the sound of the keys in her mother's hand. Upon inquiry, they mentioned it was a strategic choice, as she could filter out the sound of the keys even in noisy environments. From interviews and observations, we found that all our users had a heightened sense of hearing and were responsive to the smallest changes in the environment around them.

Do not wish to replace the cane: All our interviewees agreed that nothing should replace their cane. Regardless of how advanced the technology offered might be, they did not feel comfortable giving up the sense of control the cane gave them; technology can fail or glitch, but the cane has been a reliable friend. Any tool we developed would have to complement the cane, not replace it. One interviewee, an advanced echolocation user, mentioned trying a sophisticated piece of technology that he found slower than his echolocation paired with a white cane.

Limited Access and Dependency on Mobility Instructors: Most of our interviewees worked with mobility instructors on campus and could not visit a building in which they had not received training. We identified this as an accessibility issue: it prevents students and staff from visiting a large portion of the campus and keeps them from living campus life independently.

Aesthetic Concerns: We learned that our interviewees were concerned about drawing too much attention to themselves. They wanted a solution that was seamless and did not call attention to itself.


PRIMARY PERSONA

Our initial interviews helped us identify a broad set of needs of visually impaired individuals. We met with people who fell at different points on the visual impairment spectrum (low vision, color blindness, and severe visual impairment). Together with our mentors, we discussed the insights we had gathered and decided to focus on a primary persona who is completely blind. The reasoning was that if the product works well for this persona, it will most likely work well for others with less severe forms of visual impairment. The persona of Emily Smith then became the guiding light for our team.


USER JOURNEY / FLOW

As a UX designer, I was interested in the experience of walking through a space guided by someone else's instructions. So, I created a user journey map to discover opportunities and challenges. This led to the following design decisions:

1. Our system does not replace the cane; it complements it.

Our users place a lot of trust in their cane to help them move around a space. It helps them judge the surfaces they walk on, detect obstacles, and sense turns in a hallway, and it serves as a cue for others to give way. So we are not looking to replace the current way of navigating. What our users needed were navigation instructions: a map. They usually had to depend on others to tell them where to go, or needed someone to walk with them. Our system serves as the friend that gives them directions as they walk through a space.

2. Simple and understandable language for navigation instructions.

We do not want to overwhelm the user with complex instructions, so the system uses three simple phrases: "Turn Left/Right", "Approaching Left Turn/Right Turn", and "Walk Straight/Continue Straight". The rationale behind "Approaching Left Turn/Right Turn" is that many users did not have a reliable perception of distance; they found more value in knowing a turn was ahead, and could then look out for the edge of the wall with their cane.
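To make this concrete, here is a minimal sketch of how these phrases could be generated from a planned path. The waypoints, radii, and function names are illustrative assumptions, not our production code.

```python
# Sketch: choosing one of the three simple phrases from the user's position.
# Waypoints are (x, y) floor coordinates; the radii are placeholder values.

def turn_direction(prev_pt, corner, next_pt):
    """Sign of the 2D cross product distinguishes a left turn from a right."""
    ax, ay = corner[0] - prev_pt[0], corner[1] - prev_pt[1]
    bx, by = next_pt[0] - corner[0], next_pt[1] - corner[1]
    cross = ax * by - ay * bx
    if cross > 0:
        return "Left"
    if cross < 0:
        return "Right"
    return None  # collinear segments: no turn here

def next_instruction(user_pos, prev_pt, corner, next_pt,
                     approach_radius=3.0, turn_radius=1.0):
    """Pick a phrase based on how close the user is to the upcoming corner."""
    dist = ((user_pos[0] - corner[0]) ** 2 + (user_pos[1] - corner[1]) ** 2) ** 0.5
    side = turn_direction(prev_pt, corner, next_pt)
    if side is None:
        return "Continue Straight"
    if dist <= turn_radius:
        return f"Turn {side}"
    if dist <= approach_radius:
        return f"Approaching {side} Turn"
    return "Walk Straight"

# Example: a user 2 m before a corner that bends left.
print(next_instruction((0, 8), (0, 0), (0, 10), (-5, 10)))  # Approaching Left Turn
```

The approach radius stands in for the "look out for the edge of the wall" cue: the phrase fires early enough for the user to find the turn with their cane.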

3. Clear feedback when the system does not understand user input.

Since the system takes input through voice, recognition errors are expected. The system repeats the destination back before it starts navigating, and asks the user to retry if it cannot parse a query well.
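A rough sketch of that confirm-and-retry loop, with stubbed speech I/O; the room list, `parse_destination` logic, and retry count are hypothetical.

```python
# Sketch: repeat the destination back, and ask the user to retry on a bad parse.

KNOWN_ROOMS = {"1670", "1690", "learning center", "south restroom"}  # placeholder

def parse_destination(utterance):
    """Return a destination if the query matches a known room, else None."""
    query = utterance.strip().lower().replace("room ", "")
    return query if query in KNOWN_ROOMS else None

def request_destination(listen, speak, max_attempts=3):
    for _ in range(max_attempts):
        destination = parse_destination(listen())
        if destination is None:
            speak("Sorry, I didn't catch that. Please say the destination again.")
            continue
        speak(f"Navigating to {destination}.")  # confirm before setting out
        return destination
    speak("I couldn't understand the destination.")
    return None

# Example with stubbed speech input and text-to-speech:
replies = iter(["the moon", "Room 1690"])
request_destination(lambda: next(replies), print)
```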

4. Account for missed or incorrect turns.

The system continuously localizes the user as they walk through a space, so it knows when they are off track. In that case it reroutes the user along the shortest path, in a way that avoids confusion.
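As an illustration, here is a sketch of off-track detection and rerouting over a graph of hallway nodes, using plain Dijkstra for the shortest path. The graph, node names, and distances are invented for the example.

```python
# Sketch: if localization places the user off the planned path, replan from there.
import heapq

GRAPH = {  # node -> {neighbor: distance in metres} (illustrative floor graph)
    "lobby": {"hall_a": 10.0, "hall_b": 12.0},
    "hall_a": {"lobby": 10.0, "room_1670": 6.0},
    "hall_b": {"lobby": 12.0, "room_1670": 9.0},
    "room_1670": {"hall_a": 6.0, "hall_b": 9.0},
}

def shortest_path(start, goal):
    """Plain Dijkstra; returns the node sequence from start to goal."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist in GRAPH[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + dist, nxt, path + [nxt]))
    return None

def check_and_reroute(user_node, planned_path, goal):
    """Keep the plan while the user is on it; otherwise replan from their position."""
    if user_node in planned_path:
        return planned_path
    return shortest_path(user_node, goal)

# A user who drifted into hall_b gets a fresh shortest path from there.
print(check_and_reroute("hall_b", ["lobby", "hall_a", "room_1670"], "room_1670"))
```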

5. Detect obstacles.

Hanging obstacles are a major cause of injury for visually impaired users because a cane cannot detect them. Our system uses depth tracking to sense obstacles within a threshold distance and alerts the user. To avoid overwhelming the user, this feedback is haptic (vibration) instead of audio.
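A simplified sketch of that alert logic, assuming the depth sensor delivers a batch of distance readings for the corridor ahead; the threshold value and the `vibrate` callback are placeholders.

```python
# Sketch: fire a haptic pulse when any depth reading falls below a threshold.

ALERT_DISTANCE_M = 1.2  # hypothetical "obstruction ahead" threshold, in metres

def obstacle_ahead(depth_frame, threshold=ALERT_DISTANCE_M):
    """depth_frame: iterable of distances (metres) sampled in the walking path."""
    return any(0.0 < d < threshold for d in depth_frame)

def on_new_frame(depth_frame, vibrate):
    if obstacle_ahead(depth_frame):
        vibrate()  # haptic, not audio, so voice guidance is never drowned out

# Example: a hanging sign 0.9 m ahead triggers the alert.
on_new_frame([2.4, 2.3, 0.9, 2.5], lambda: print("bzzt"))
```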


BRAINSTORMING

Initial Mapping

[Initial mapping sketches]

Technology Evaluation

[Technology evaluation matrix]

Along with the engineering team, we compared the different technology choices against our user requirements: the product had to be highly accurate and fast enough for real-time use, affordable for the user, and scalable across the university. Given these dimensions, we created a matrix to determine the right technology for this set of constraints.

With a clear set of expectations for the system, our team experimented with technologies like Bluetooth beacons, RFID detectors, and depth sensors, setting them up in our homes to evaluate what would suit the end user. We also visited the Detroit Institute of Arts to try their augmented reality mobile tour and get hands-on experience with Google Tango.

We then compared the options on four criteria: accuracy, response rate, cost, and security. Weighing their pros and cons against user preferences, Google Tango emerged as the clear winner: it lets the user carry a single device without depending on external infrastructure (which would require institutions to install tags and beacons throughout a building). We decided to build a mobile application that elicits user input via natural language, locates the user inside a building, and guides them to their destination through voice commands.
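As a sketch of that kind of weighted matrix, with placeholder scores and weights rather than the team's actual ratings:

```python
# Sketch: weighted decision matrix over the four criteria. All numbers invented.

WEIGHTS = {"accuracy": 0.35, "response_rate": 0.30, "cost": 0.20, "security": 0.15}

CANDIDATES = {  # 1-5 score per criterion (illustrative)
    "Bluetooth beacons": {"accuracy": 2, "response_rate": 3, "cost": 3, "security": 3},
    "RFID tags":         {"accuracy": 3, "response_rate": 2, "cost": 2, "security": 3},
    "Google Tango":      {"accuracy": 5, "response_rate": 4, "cost": 4, "security": 4},
}

def score(option):
    return sum(WEIGHTS[c] * CANDIDATES[option][c] for c in WEIGHTS)

for name in sorted(CANDIDATES, key=score, reverse=True):
    print(f"{name}: {score(name):.2f}")
```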


SYSTEM DESIGN

[System component diagram]

Having narrowed down our choice of technology, we mapped the system components and their interactions. This helped the engineering team clearly understand the different modules of the system, how they interact, and how they would integrate into a single system.
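Expressed as code, those module boundaries might look roughly like the interfaces below; the names and method signatures are assumptions for illustration, not the actual implementation.

```python
# Sketch: module boundaries from the system diagram as typed interfaces (Python 3.9+).
from typing import Protocol

class Localizer(Protocol):
    def current_position(self) -> tuple[float, float]: ...

class SpeechInput(Protocol):
    def listen_for_destination(self) -> str: ...

class Router(Protocol):
    def plan(self, start: tuple[float, float], destination: str) -> list[str]: ...

class Guidance(Protocol):
    def speak(self, instruction: str) -> None: ...
    def vibrate(self) -> None: ...

def guide(loc: Localizer, ears: SpeechInput, router: Router, out: Guidance) -> None:
    """Glue logic: one end-to-end pass through the modules."""
    destination = ears.listen_for_destination()
    for instruction in router.plan(loc.current_position(), destination):
        out.speak(instruction)
```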


DIGITAL PROTOTYPE

Our primary user group will not use a visual interface; they will interact with the application through audio instructions. We designed a simple UI for the convenience of users with low vision or mild visual impairments.


THE TEAM

(From left to right) Asha Shenoy, David Cao, Minh Tran, Jonathan Hamermesh, Moran Guo, Aaron Tang