Design: Low & high fidelity wireframing, voice prototyping, usability testing
Research: Semi-structured interviews, affinity mapping, competitive analysis, expert evaluation
August 2021 - December 2021
Figma, Adobe XD, Balsamiq, Miro, Qualtrics
Boya Ren, Jessie Chiu, Srijan Jhanwar
The most popular dating apps today are inherently visual, relying heavily on images and other visual elements, which creates a barrier to access for blind users.
Combined with the slew of accessibility issues present on these platforms, this leaves blind users at best mildly disoriented and at worst unable to use core functions of these applications without the aid of sighted individuals.
The goal of this project is to devise a universal design solution that ensures equal access to the dating pool for blind users and enriches the experience for blind and sighted users alike.
Most popular dating apps have rolled out photo verification processes in the past few years, many of which are mandatory onboarding tasks. This typically involves the user taking a photo of themselves mimicking a pose shown in an example image. Without a textual description of the pose, blind users cannot complete this step independently, which robs them of their autonomy and privacy.
This concept offers an improved verification experience by providing:
Most dating apps are hyper-visual, relying on photos to convey a person's personality, interests, and lifestyle, which largely excludes the blind community. Voice-rich profiles afford both sighted and non-sighted individuals an outlet to express themselves more fully and learn more about potential matches.
This concept allows users to:
We uncovered that blind users have difficulty selecting and uploading photos for their profiles, often consulting friends and family, with varied results.
To reduce the dependency on, and ambiguity of, recommendations from sighted individuals, we incorporated a recommendation engine that builds upon the facial recognition already present in mobile photo albums.
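As a rough illustration of how such an engine might rank candidate photos, here is a minimal Swift sketch built on Apple's Vision face detection; the heuristic, its thresholds, and the profilePhotoScore helper are illustrative assumptions of ours, not a specified implementation.

```swift
import Foundation
import CoreGraphics
import Vision

// Hypothetical scoring heuristic for candidate profile photos, building on
// the face detection already available to mobile photo libraries.
func profilePhotoScore(for image: CGImage) -> Double? {
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    do { try handler.perform([request]) } catch { return nil }

    guard let faces = request.results, let face = faces.first else {
        return nil // no detectable face: unlikely to suit a profile
    }
    // Penalize group shots; the concept favors clear solo photos.
    guard faces.count == 1 else { return 0.1 }

    let box = face.boundingBox // normalized (0...1) image coordinates
    let faceArea = Double(box.width * box.height)
    let centerOffset = hypot(Double(box.midX) - 0.5, Double(box.midY) - 0.5)
    // Larger, better-centered faces rank higher in the recommendations.
    return faceArea * (1.0 - centerOffset)
}
```

Scores like these could then be sorted to surface the strongest candidates, leaving the final say with the user rather than a sighted helper.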
Through this concept, users:
Cognizant of our limited perspective as sighted individuals, we employed several research methods to learn how the blind community currently navigates the online dating space, identify accessibility and usability issues in existing products, and use those findings to inform our final design solutions. I took the lead on semi-structured interviews, low-fi wireframing, feedback sessions, and hi-fi prototype design.
We began with secondary research, performing an accessibility audit and competitive analysis of the most popular dating apps. We then sent an accessible survey to blind users of online dating platforms to learn about their experiences and recruit participants for further interviews. Interviews were then carried out with accessibility experts and a subset of survey respondents to gain deeper insight into the challenges blind users face on online dating platforms and to explore the rationale behind survey responses.
Examining popular dating platforms such as Tinder, Bumble, and OkCupid, we found a common thread: these apps don't work well with screen readers. This is mainly due to poorly labelled buttons and the lack of useful (or, in some cases, any) alt-text for images. The result is confusing interactions for blind users, pushing them to rely on sighted individuals to use these apps effectively. The table below contains our analysis of each of these platforms.
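To make these failure modes concrete, the hedged SwiftUI sketch below shows the two fixes our audit points to: giving an icon-only button a proper label and attaching meaningful alt-text to a profile photo. The view names and strings are hypothetical, not drawn from any of the audited apps.

```swift
import SwiftUI

// An icon-only control like this is announced by VoiceOver as just "button"
// unless it is given an explicit label.
struct LikeButton: View {
    let matchName: String
    var body: some View {
        Button(action: { /* send the like */ }) {
            Image(systemName: "heart.fill")
        }
        // The fix: a label telling screen reader users what the button does.
        .accessibilityLabel("Like \(matchName)'s profile")
    }
}

// Profile photos need descriptive alt-text, not a file name or nothing at all.
struct ProfilePhoto: View {
    let altText: String // e.g. "Alex smiling on a hiking trail at sunset"
    var body: some View {
        Image("profile-photo")
            .resizable()
            .accessibilityLabel(altText)
    }
}
```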
Our survey was distributed across blind communities on Reddit and Facebook. It asked questions regarding platform preferences and experiences, context of use, and demographics. Semi-structured follow-up interviews were conducted to open up the dialogue with users and explore tangents that might produce valuable insights, while maintaining a script to keep on track. These interviews included questions about which dating apps interviewees use most, their criteria when swiping on a potential match, whether they required help from family or friends while using their preferred app, and their thoughts on disclosing their blindness on their profile.
We synthesized interview data into an affinity map in order to highlight common themes, draw insights, and guide design decisions:
After analyzing all of the data from interviews and surveys, we generated the following insights:
We learned from our interviews that voice is a rich source of information for blind users, allowing them to pick up on details about potential matches that are typically communicated to sighted users through profile photos.
This barrier prevents users from completing key tasks, causing frustration for blind users in an environment where they already feel at a disadvantage.
Our interviews uncovered that some users would prefer to use dating apps independently but cannot do so without the help of friends and family.
Initial research showed that security is a major concern for blind users, some of whom reported being catfished or deceived by bot accounts.
We distilled the insights from across our research methods into three personas depicting the ethos and characteristics of our users, helping us build empathy with the people we're designing for. We also created empathy maps to visualize our personas' attitudes and behaviors and to develop a shared understanding of our users' needs.
With the design implications uncovered by our analysis, we conducted a brainstorming session and ultimately generated three design concepts to move forward with. These concepts tackle the issues most pertinent to blind users of dating apps. We then produced low fidelity sketches of each concept to align on our ideas and communicate them to sighted stakeholders.
With our design ideas starting to take shape, we held a preliminary feedback session with users to gather initial thoughts and validation: whether they found these concepts useful, any points of confusion or potential accessibility issues, and anything else that stood out.
Ultimately, we found that for the photo verification process, users wanted the system to describe how to properly retake the verification photo when verification fails. Participants also advocated for additional assistance during photo capture, pointing us to the stock accessibility features of the iOS camera app, which provide real-time feedback on where the user's face sits in relation to the frame. For voice-rich profiles, we identified the need for time limits on recorded audio, and for the profile photo recommendation system, we noted that participants were confused about how to remove unwanted photos from the recommendations.
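As a rough sketch of how that kind of real-time framing feedback could carry over to our verification concept, the snippet below pairs Apple's Vision face observations with VoiceOver announcements; the thresholds and hint phrasing are illustrative assumptions, not the iOS camera app's actual logic.

```swift
import UIKit
import Vision

// Hypothetical per-frame hint for the verification camera: given the latest
// face observation, announce a corrective instruction through VoiceOver.
func announceFramingHint(for face: VNFaceObservation?) {
    let hint: String
    switch face {
    case nil:
        hint = "No face detected. Hold the phone at arm's length."
    case let face? where face.boundingBox.width * face.boundingBox.height < 0.1:
        hint = "Your face is small in the frame. Bring the phone closer."
    case let face? where face.boundingBox.midX < 0.35:
        hint = "Your face is near the edge. Move the phone slightly left."
    case let face? where face.boundingBox.midX > 0.65:
        hint = "Your face is near the edge. Move the phone slightly right."
    default:
        hint = "Your face is centered. Hold still."
    }
    // Spoken by VoiceOver without moving accessibility focus.
    UIAccessibility.post(notification: .announcement, argument: hint)
}
```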
Based on the responses from our initial feedback session, we refined our designs to incorporate improvements and address the concerns raised by participants. The concepts were also fleshed out further at this stage to include screens for playback and error states.
The updated design features real-time photo capture feedback, curated retake instructions for photo verification, and time limits on audio verification.
This iteration of the concept features time limits on audio recording, as well as additional screens showcasing audio playback with the option to redo the recording.
New to this iteration are an updated gallery view, an action button to remove unwanted photos, and a more consistent share button that uses text rather than an icon.
After refining our concepts and producing low fidelity wireframes, we conducted another round of feedback with users through think-aloud tests with voice interactions. We took the feedback received from these sessions and annotated our wireframes with our notes, identifying what was received positively as well as areas for improvement to iterate upon for our prototype.
Some of the key takeaways from these sessions were the desire to listen to a sample recording of voice prompts, an option to upload photos from miscellaneous sources, and some callouts regarding how certain elements should be communicated by screen readers.
After getting feedback on our low fidelity wireframes, we iterated upon our design in high fidelity using both Adobe XD and Figma. Adobe XD was used to conduct our evaluations as it offered the functionality to incorporate voice interactions. This was vital to making our prototype closely resemble a real use context by emulating the behavior of a screen reader.
Account verification in a more flexible, accessible, and fault-tolerant manner.
Tell the story behind profile photos.
Communicate more on a profile by answering voice prompts.
Suggest up to 10 photos from the camera roll which can be shared with friends and family.
With our final prototype in hand, we worked with accessibility experts and blind users to assess our designs. Because current prototyping tools do not work natively or with full compatibility with screen readers, and short of developing and launching a live application from the ground up, these tests had to be conducted over screen and audio share.
In evaluating our design, we engaged accessibility experts and users alike to perform cognitive walkthroughs, asking participants to complete a series of core tasks related to each concept while keeping in mind:
Before and after the cognitive walkthroughs, we asked users to complete a usability questionnaire comparing their experience on their most-used dating app with that of our prototype. This allowed us to assess participants' feelings toward our design ideas and whether they viewed them as a quantifiable improvement over current systems.
Overall, our testing found that users saw a marked improvement in the usability of these concepts over their counterparts in their preferred dating apps. Meanwhile, accessibility experts prompted us to review the full gamut of our designs to ensure that back/cancel navigation is always available, even for required processes, and that images carry alt-text even when already paired with a textual description, as in our inclusive verification concept. Experts also highlighted a cognitive-load constraint: audio verification phrases should be limited to 2 seconds for the sake of memorability.
Based on initial feedback, we feel the project is headed in the right direction; however, this should be further verified through testing with a larger group of users.
As this project came to a close, we made sure to incorporate these improvements into the final version of the prototype showcased in this case study; annotations featuring this feedback have been included on earlier versions of the wireframes as a visual aid:
From this project I learned a lot about designing for accessibility, designing for those with impairments, and working with protected populations. I got to try my hand at designing for voice and working with Adobe XD and Balsamiq.
Working with the blind community, specifically on a topic as personal and sensitive as private dating lives, was a challenging and rewarding journey that deepened my empathy for their experience. It taught me the value of actively seeking opportunities to design side by side with your end users at every step of the process, not just with them in mind.