Portfolio Category: Interaction Design

CoC Website Redesign

Website Redesign For The College Of Computing At Georgia Tech

Domain: UX Design, UI Design
Duration: 3 Months (2019)
Tools: Figma, Adobe XD, Google Analytics
Team: Prabodh Sakhardande, Harshali Wadge, Jordan Hill, Santiago Arconada, Shihui Ruan
Role: User Experience Designer

Summary

Georgia Tech's College of Computing (CoC) is ranked number 8 in the nation for computer science. However, the current website and its content lack a mobile-friendly layout and a consistent content hierarchy, which makes the site increasingly inaccessible to both prospective and current students.

The goal of this project was to consolidate and streamline content in order to improve navigation of the website and to reduce the friction associated with finding desired content. We focused primarily on two user groups: current and prospective students (both undergraduate and graduate level).

This project took us through a path of rigorous research and design that helped us understand the problems of an existing system used by thousands of visitors every day. Toward this end, we conducted a comprehensive survey, a card sorting task, an analysis of Google Analytics data, and a thorough heuristic evaluation. We consolidated the research findings and gleaned insights into the actual wants and needs of our users. Based on these insights, we designed a basic prototype which we tested with actual users. From the findings of this testing, we refined the prototype into a high-fidelity mockup and performed usability testing on it. We presented our research findings and proposed website design to the CoC team.

We consolidated the client requirements along with our study of the existing system to generate a comprehensive plan for the project.

 

Research and Design Process

Gathering Data

We started with stakeholder interviews to understand what was expected. We then conducted a heuristic evaluation of the current site and looked at user flows and website analytics to see how users were navigating the website. We also explored other university sites, including websites for other colleges within Georgia Tech, as a competitive analysis. From this, we formulated a comprehensive research plan to inform the design.

User Research

We set out to understand the unique needs of each user group, as well as issues relating to accessibility and mobile usage. We started with a survey, then conducted a card sorting task to assess our users' current mental models and understand how users expect the information on the website to be organized. In parallel, we analysed the data collected by Google Analytics and did a thorough heuristic evaluation of the current system.

Design

The results of the research activities informed our IA and design decisions. We did a preliminary Global Navigation redesign with the main topics in mind (i.e. programs, news, faculty, students, etc). This started with redesigning the information architecture and wireframing.

Iterating over Prototype

Based on the design map and our wireframes, we created a low fidelity prototype of the website. This was tested and then we developed a responsive prototype. After testing our prototype, we developed a high fidelity prototype for our final design.

Evaluation

To evaluate our design and responsive prototype for the website, we conducted expert analysis and A/B testing to assess its ease of use, time on task, and validate the new information architecture. We designed the tasks in these studies around the needs and use cases identified in our user research.

 


 

 

 

 

Phase 1 - Initial Research

At this stage, we did a comprehensive study of the existing system along with understanding the client requirements.

Existing System

Website Design

Below are some screenshots of the current site and our first impressions. In some instances, the user must click through three different pages to find their specific program. There is no clear path or flow delineated for the user, and images do not scale appropriately to smaller device sizes.


 

Issues with Accessibility 

We ran a free version of a website analysis tool to get a quick glance at how much the website conforms to accessibility standards. We found that 78% of the pages scanned had accessibility problems. Overall the website got an A rating (the worst), which stands for “Pages with level A issues are unusable for some people”.

Comparing Websites

For comparison, we looked at the website of the College of Engineering at Georgia Tech.

 

The initial impression of this site is that it is more organized, and the content is more scannable and easier to navigate overall. The touch targets on smaller device sizes are larger, and icons clearly signify functionality - items expand down the page to reveal additional content.

 

Client Requirements 

Based on our preliminary research and the meeting with the CoC Communications Team, we uncovered the following requirements:

Reworking the website's global navigation structure and information architecture. The client discussed how content continues to be added in pieces onto the website. There is a need for a content strategy and information architecture that can scale to accommodate additional content.

Updating content. There are many broken links and outdated information that is still accessible on the site.

Redesigning within CoC’s branding and style guidelines. The branding and style guide for the website may be found at https://comm.gatech.edu/brand. We are required to redesign the website based on findings and pre-established styles - while leaving the global header and footer as is. However, the research, findings, and design done for the CoC might be applicable to the schools as well.

Accessibility. We must pay special attention to accessibility when going through the redesign. We found that the current website is not compatible with screen readers, nor does it meet WCAG 2.1's success criteria at the AAA level.

SEO Optimization. Ideally, we should take the website's SEO rankings into consideration and redesign the website with SEO in mind. This includes recommendations for developing the new website with a push for SEO optimization.

Phase 2 - Understanding User Needs and Design Implications

 

In this part of the project we focused on the data collection and how that shaped our design implications. Our initial research gave insights on the research methods we would use to gain an in-depth understanding of the wants and needs of our user group. The stakeholder interviews aligned our process with what was expected. We started our research with a survey sent out across all students of the college of computing. While the survey was out, we started analysing the data collected from Google Analytics. At this point we also began a heuristic evaluation of the current system. We analysed the results of the survey and this helped us define a card sort task. Finally we combined all the data gathered from these research activities and formed a holistic understanding of the wants and needs of our users. These insights were then converted into design implications.

As we collected more data and identified key issues of the existing system, we expanded the problem space to also include a way to facilitate site visitors to find the content they are looking for and thus improve the overall experience for all site users, including people who use assistive technologies  to access the website.

 

Stakeholder Interviews

The stakeholders we interviewed had varying responsibilities within and outside the College of Computing, so our interviews allowed us to hear from other parts of the administration and university.

Our goals for these interviews were as follows:

  • Highlight issues with the CoC website that stakeholders within the CoC struggle with on a regular basis.
  • Identify problematic or missing site content that stakeholders would like to see addressed.
  • Assign a priority to the issues identified by the CoC team.
  • Gain a basic understanding of accessibility tools and how they can be used in website design.

Method Details

We connected with different stakeholders through the CoC team. For each stakeholder, we met for about an hour with some topics prepared beforehand, but for the majority of our time with them the interactions were conversational: we wanted to hear what the stakeholders had personally experienced, their needs, and the needs of the students they interacted with.

Analysis

To analyze our interview data, we started by transcribing our notes onto cards, separating the notes based on the different themes addressed. We thought this would help us find commonalities between the three interviews, but because the three stakeholders had such varying interests and job roles, it was difficult to identify patterns across the three data sets. So instead, we applied the data we found to our larger affinity map, which had themes from the other research activities we conducted. Through our affinity map, we were able to categorize our notes under the larger themes of accessibility, responsiveness, content, and navigation.

 

Heuristic Evaluation

Conducting a competitive analysis and heuristic evaluation served as a starting point for our user research and provided a good benchmark for both competitors and for the current state of the website.

Method Details

We extrapolated evaluative criteria from a larger set of widely used common principles and best practices, or heuristics, and applied them to a competitive analysis as well as a heuristic evaluation in order to diagnose the College of Computing website’s current state. The heuristics we used in both our competitive analyses and CoC website audit  were picked from the Information Architecture Heuristics by Abby Covert (Abby IA), Nielsen’s Usability Heuristics, and W3C Web Content Accessibility Guidelines.

We grouped our criteria into four main categories based on their overarching themes:

 

  • Navigation and Site Organization
  • Design & Branding
  • Content
  • Accessibility

 

Analysis

The heuristic evaluation uncovered multiple aspects of the current design and how it could be improved.

Survey

For our survey, we wanted to identify the areas of interest and usability challenges that students in CoC face when they use the website. Specifically, we were curious about the following themes:

  • Content: What content are students most interested in? Why do students come to the CoC website?
  • Navigation: How discoverable is content on the website? How are students accessing the content they need? What challenges do they face in finding the information they want?
  • Responsiveness: What information do students access while visiting the site on their mobile devices?
  • Accessibility: What assistive technologies or tools are users employing when using the CoC website? What are the unique issues they face when interacting with the website through an accessibility tool?

Method Details

We designed a 24-question survey in Qualtrics, which was divided into 4 blocks:

  1. Introduction and demographics: to identify participants' degree program and tenure at Georgia Tech for that program. Because the CoC is interested in the perspective of prospective students, we used first-year students (across all degree programs), who had completed the application process most recently, as a proxy.
  2. Website Experience: to ask various questions on the usability of the website, identify areas of interest, devices students use to access parts of the website, and challenges they face. First year students saw slightly different versions of these questions (e.g. “When you were applying to Georgia Tech,...”).
  3. Accessibility: to identify students who use assistive technology to navigate the website, and gather data on that experience.
  4. Volunteering for Further Research: a final block that linked to a separate survey where students could provide their contact information if they wanted to volunteer for our future research activities. The two surveys were separated to keep the responses of this survey completely anonymous.

 

Analysis

In order to do a fair comparison between the different student populations, we normalized each data category to the number of responses we got from that demographic (per degree level: undergraduate, masters, or PhD; or per current/prospective).
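To make these counts comparable, a minimal sketch of the normalization step is shown below; the group names, counts, and variable names are illustrative placeholders, not our actual survey data.

```python
# Illustrative sketch: normalize raw survey counts by each demographic's total responses.
responses_per_group = {"undergraduate": 120, "masters": 85, "phd": 30}  # placeholder totals

# Placeholder counts of respondents per group who reported a given behavior
mobile_usage_counts = {"undergraduate": 48, "masters": 51, "phd": 6}

def normalize(counts, totals):
    """Express each group's count as a share of that group's total responses."""
    return {group: counts[group] / totals[group] for group in counts}

print(normalize(mobile_usage_counts, responses_per_group))
# {'undergraduate': 0.4, 'masters': 0.6, 'phd': 0.2}
```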

 

 

 

 

 

 

Card Sorting

With our hybrid card sorting task, we wanted to identify patterns in how visitors expect the website content to be organized. We chose to do a card sorting task for this project as a precursor to our next step: redoing the website's information architecture. Our goals for the card sorting task can be summarized as follows:

  • Identify large categories of content that students expect to see on the CoC website
  • Identify subcategories to populate each of the larger categories identified above
  • Pinpoint extraneous or ambiguous content
  • Surface missing topics or content that students want to easily find on the website

Method Details

Card sorting tasks are often employed when (re)designing the information architecture of a website. We chose a mostly closed (hybrid) card sort because there are existing categories on the website's navigation bar that we felt would simply re-emerge if we had done a fully open card sort. In our task, we allowed participants to write in any elements or categories that they would want to see, or to remove any that felt irrelevant. With the exception of one or two, the categories remained largely the same, confirming our idea that providing some pre-existing categories would not compromise the validity of the task.

Analysis

For each participant, we took a photo of their card sort to use in our analysis. We looked at how frequently each card showed up in a specific category across participants, and whether or not some of the larger categories needed to be changed, removed, or added. We also gathered qualitative data from participants and derived insights by looking at trends from the card sort along with the qualitative data.
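A minimal sketch of how card-to-category placement frequencies can be tallied across participants is shown below; the cards, categories, and sorts are hypothetical examples, not our study data.

```python
from collections import Counter, defaultdict

# Each hypothetical participant's sort maps a card to the category they placed it in.
sorts = [
    {"Degree Programs": "Academics", "Program Advising": "Academics", "News": "About"},
    {"Degree Programs": "Academics", "Program Advising": "Student Resources", "News": "About"},
    {"Degree Programs": "Prospective Students", "Program Advising": "Academics", "News": "About"},
]

placements = defaultdict(Counter)
for sort in sorts:
    for card, category in sort.items():
        placements[card][category] += 1

# How often each card landed in each category, as a fraction of participants
for card, counts in placements.items():
    total = sum(counts.values())
    print(card, {cat: round(n / total, 2) for cat, n in counts.items()})
```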

Google Analytics

The College of Computing website has upwards of 50,000 users every month. With Google Analytics integrated into the website, it is possible to access information on how these users click through the website. The site’s analytics provided insight on usage trends and user behaviors.

Methodology

We narrowed down the categories to study based on an initial overview of all available fields and on our survey results. This allowed us to focus only on what was relevant. We collectively surveyed the analytics and parsed through the Audience, Behavior, and Acquisition sections in order to determine which metrics we wished to explore further, and which would be useful to supplement the other research methods we were employing.

 

 

Phase 3 - Design Ideas and Feedback

 

During this phase, we focused first on breadth as we redesigned the global navigation, or main menu, of the website. From there, we took a deep dive into a specific section of the site, currently labeled "Academics". The CoC communications team, our survey results, website analytics, and card sorting results all made it apparent that the content in this section was not easy to find, and that navigating to it was arduous for both types of students.

Our Approach to UI Design

Drawing from Brad Frost's Atomic Design, we utilized a component-based, or modular, design approach for the College of Computing website redesign. This approach enabled us to devise a system of user interface groupings - or components - contextually linked to types of content that cater to our primary user groups: current and prospective students. Each page on the website is constructed by stacking these components one after the other, creating page layouts of concise and consistently designed content across the site.

 

 

Global Navigation

The card sorting task shed light on how the global navigation could be reorganized according to students' information needs and existing mental models of higher education websites. Using these insights, we created a high-level diagram of our redesigned global navigation.

Within the new 'Prospective Students' and 'Current Students' tabs, we focused on four pages and two navigational pathways for our feedback sessions.

Prototype Designs

For our initial feedback session, we developed a low-fidelity prototype consisting of four canvases and paper cutouts of UI components. These components were first tested through pilot testing. After the pilot test, we amended the prototype design in order to better test what we set out to measure. By removing UI from the prototype and replacing it with blank cards, we were able to better test a participant's mental models surrounding content placement, and how they organized the content in a hierarchy from the top of the canvas or page to the bottom.

 

Wireframe of the ‘Degree Programs’ page

Feedback Sessions

In our first study, we wanted to understand how to organize content within pages. We designed our own research method for this activity, tailored to our specific scenario and derived from our other research activities. In this method, study participants organize distinct in-page components based on their understanding of the content, how they feel it should be organized, and where it belongs on a given page. This is similar to a card sort, but instead of sorting a higher-level organization of content - as in a global navigation - participants are asked to organize content components within a page.

The participants were presented with a blank canvas for a single page and the individual components associated with that page. The participants were asked to arrange the individual components on the canvas in a way that best organizes the information on the page. While they were completing this task, the researchers asked the participants to think out loud and reflect on why they were organizing the page in a certain way.

 

 

Summary of Findings

 

  • Users want to see Calls to Action (CTAs) and most relevant information at the top of the page. Participants commented that they would like to see links immediately if a page had an obvious CTA.
  • Platforms influenced the desired content hierarchy. Participants commented on how, on mobile, they would prioritize CTAs even further by making them the first item on a page. On the desktop format, participants placed the CTAs close to the top of the page, but often to the right side.
  • Overview pages should include concise but informative descriptions about broader topics. These pages link to more detailed, content heavy pages, so overview pages themselves tend to have more visual elements like images and banners.
  • Content titles need to be validated. One of our tasks was focused on finding the ‘Program Advising’ page, but our participants were confused as to what that meant.
  • Because the website is rich in content, the user might get lost in the multiple layers of the information architecture. The website contains a lot of information about a large variety of topics. Users require wayfinding mechanisms on each page they visit to understand where they are and how they got there.
  • A Components-Based Design approach can be leveraged to apply both a consistent design system as well as accessibility principles across the website. By first designing page components, we can scope our work and templatize the website redesign process because we will have the building blocks for each page.
  • Users prefer simple designs. Participants commented that the minimalistic elements helped declutter the pages and let them focus on what they were looking for.

 

Prototype Improvements

Based on our feedback session, we modified our initial prototype. From the 'Degree Programs' page, users can click on the 'Masters Degrees' component to reach the 'Masters Degrees' page. Similarly, from the 'Academic Resources' page, the user can click on 'Program Advising' to reach that page. This paved the way for our final design.

Phase 4 - Final Design, Evaluation and Validation

 

Final Prototype

Our final prototypes consisted of hi-fidelity designs linked together in InVision and Adobe XD, and enabled users to interact with the redesigned College of Computing website on both desktop and mobile breakpoints. Our prototypes were exportable, and had public URLs to share and to test with.

Before we designed and prototyped the website's pages, we created a component library. The library, or component documentation, consisted of medium-fidelity images of each component in desktop and mobile breakpoints and the component's feature and function specifications - such as the UI elements used, the grid system used, and aspect ratios for all images - to hand to the CoC Communications team after our final presentation.

 

We then designed hi-fidelity website pages for the prototype using our components and Georgia Tech’s institution-wide style guide.

 

 

 

Evaluation

 

Expert Based Method: Heuristic Evaluation

For this research, we asked four experts to evaluate our prototypes. The four experts were selected based on their expertise and availability. Two of the experts were students in the MS-HCI program; one had a background in design and writing, while the other had a background in psychology and prototyping. The other two experts were professors in the MS-HCI program, and brought overall expertise and knowledge of design, especially in the accessibility and web development space.

The heuristics we used for this evaluation were the same as the ones we used in the earlier heuristic evaluation of the current website. The heuristics were selected from several existing sets that are commonly used in industry. These sets included:

  • Abby IA
  • Nielsen’s Usability Heuristics
  • Shneiderman’s Eight Golden Rules
  • W3C Web Content Accessibility Guidelines

 

Each expert provided a score for each of the heuristics on the given list. Scores were given on a scale of 1 to 5. We then took an average of the scores across the 4 experts for each heuristic.

 

 

User Based Method: A/B Testing

We used UserTesting.com for this purpose. Users on UserTesting.com were shown a link to either the mobile or desktop prototype. They were provided a scenario and given tasks to complete in the prototype.

Scenario: “You are a student at a higher education institution and are checking the homepage of your college.”

Task 1: “You are thinking of doing an on-campus Masters of Science in Computer Science (MSCS) and want to know the cost associated with it. Find this information on the website.”
Task 2: “You are thinking of doing an on-campus Masters of Science in Computer Science (MSCS) with a Thesis option, find out the requirements that this option entails.”
Task 3: “You are a Masters student in Computer Science looking for program advising. Find out who your advisor is.”
Task 4: “You are an international student looking to study at Georgia Tech and would like to find information for international students. Navigate to the page where you would expect to find this information.”

After each task, participants were asked if they completed the task successfully. The answers allowed were: “yes” or a variety of no answers with reasoning for why the task wasn’t accomplished. Participants were also asked to rate the difficulty of the task with a Likert Scale going from Very Easy to Very Difficult.

Additional metrics we collected were time-to-task-completion, as well as qualitative data on where users looked for information.
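A minimal sketch of how these per-task metrics could be aggregated across sessions is shown below; the session records, field names, and rating scale are hypothetical, not data from our study.

```python
from statistics import mean, median

# Hypothetical session records: one entry per participant per task.
# "difficulty" is a 1-5 Likert rating (1 = Very Easy, 5 = Very Difficult).
sessions = [
    {"task": 1, "completed": True,  "seconds": 42,  "difficulty": 2},
    {"task": 1, "completed": True,  "seconds": 65,  "difficulty": 3},
    {"task": 1, "completed": False, "seconds": 120, "difficulty": 5},
]

def summarize(records, task_id):
    """Success rate, median time on task, and mean difficulty for one task."""
    task = [r for r in records if r["task"] == task_id]
    return {
        "success_rate": sum(r["completed"] for r in task) / len(task),
        "median_time_s": median(r["seconds"] for r in task),
        "mean_difficulty": mean(r["difficulty"] for r in task),
    }

print(summarize(sessions, 1))
```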

 

Recommendations For The CoC Communications Team

 

As a final deliverable, we presented our consolidated research and redesign to the CoC communications team along with a set of recommendations. These came from the culmination of all the design and research activities we had performed over the course of this project.

 

 

 


 

 

 

ImagineAR

ScholAR - ImagineAR

Field: Augmented Reality, Interaction Design
Duration: 4 Months (2019)
Skills: Rapid Prototyping, Usability Testing, Wireframing
Team: Prabodh Sakhardande, Pratiti Sarkar, Utsav Oza
Role: Unity Developer, Interaction Designer

The process of learning involves multiple ways of interacting with information and data. It depends on the medium of the data as well as the technique of learning. One such technique is learning by solving open-ended problems, which requires a certain amount of creativity and divergent thinking. In the context of K-12 education, it can lead to applying concepts learned in class to real-life scenarios. Working together in a collaborative environment, students can share experiences, enhance knowledge, and converge towards a collaborative potential far greater than that of an individual. It also sets the path for being able to work effectively in a group, which is important for all aspects of life. Augmented Reality (AR) provides a unique affordance: being able to partake in experiences that would otherwise not be realistically possible.

This research project brings together all these elements and studies how augmented reality can be used in a classroom environment towards collaboratively solving open-ended problems. In terms of AR, it creates experiences beyond the screen and brings interactions into the real world through tangible artifacts. My role in this project was the conceptualization of using augmented reality with open-ended problem solving for collaborative classroom education, the design of the study protocol, and the creation of the AR apps that were used in user testing.

This research is currently in press and due to be presented at the 27th International Conference on Computers in Education in Taiwan; an updated version will be posted shortly after it is published.

 

 

 

Shopping Without Numbers

Shopping Without Numbers

Domain: UX Research, UX Design, UI Design
Duration: 3 Months (2019)
Tools: Figma, Adobe XD
Team: Prabodh Sakhardande, Harshali Wadge, Jordan Hill, Santiago Arconada, Shihui Ruan
Role: User Experience Designer

Introduction

Understanding The Problem

Dyscalculia has a prevalence of 3 – 6% but it is poorly understood. Individuals with dyscalculia often show difficulties in driving or gauging speed, budgeting, time management and completing simple math. Dyscalculia often occurs together with dyslexia, which will be important to keep in consideration for the design phase of this project. As with dyslexia, it is not clear whether these children have two independent conditions, or if their difficulties in mathematics are caused by their difficulty in maintaining sustained attention.

The lack of an agreed definition of dyscalculia makes it hard to diagnose individuals. The complexity in defining people with dyscalculia creates substantial variation in methods and results throughout studies. All the definitions of dyscalculia share three aspects:

1) showing difficulties in tackling math

2) having difficulties in certain areas instead of all academic realms

3) it is assumed to be caused by brain dysfunction in some way

Dyscalculia is significantly different from the other conditions in the sense that it does not impair mental ability; only the understanding of mathematical concepts is affected, to varying degrees. Following this, we chose to design within the context of processing, interpreting, and manipulating numerical data as it relates to financial management and budgeting.

Scenario of The Problem 

Consider a scenario where a neurotypical individual is shopping at a superstore. Even though the shopping styles may be different, there are a few key elements in the customer-store interface that are shared across people. First of these is knowing what to buy; this could be a shopping list or a mental list of items that are needed. After entering the store, the next step is to navigate to the location/aisle of where the required item is stacked. Normally in a supermarket, there is a huge variety of items of the same type. These could differ by brand, quantity (weight/volume), cost, specialty and so forth. In order to reach an informed decision, an individual has to compare these quantities as well as the qualities (specialty) of the product. This situation involves using several counting and comparison capabilities, all of which are problematic to individuals with dyscalculia.

Task Analysis

We explored two different hierarchical task analyses for individuals with dyscalculia calculating the items they wish to purchase against the amount budgeted for those items.

 

Analysis of Existing Systems

Most of the existing systems that address dyscalculia target mathematical skills and number sense in early development stages (children between 6-11 years old). The majority of these systems are software based. In previous research, adaptive game software-based solutions have been successful in the remediation of dyslexia. Software for dyscalculia is based on the understanding that dyscalculia comes from a deficit in number sense, or in the connection between number sense and the symbols for numbers.

SocioTechnical Systems and Context

Individuals with dyscalculia often have "math anxiety", or feelings of tension and fear associated with activities involving mathematical operations, equations, and the like. This is the root of our problem space and what we would like to study and design for in context. Our discovery work revealed a lack of dyscalculic-friendly tools and systems that target adult individuals with the disability. Through surveying personal blog posts and online support communities, we found that many dyscalculics currently rely on their friends and families to help navigate these financial situations.

We were wary of drawing conclusions this early in our design process. That said, findings online and in our preliminary research were encouraging and supported continued research of dyscalculic individuals' activities relating to budgeting and financial management. We wished to further explore the relationship between dyscalculia and difficulties interpreting numerical data, as well as design solutions to mitigate these difficulties.

 

Design Alternatives

At this stage, we narrowed our focus to the shopping experience in a brick-and-mortar store. We chose to target the process of browsing through different products and comparing their prices. This is inherently a mathematical operation and, due to the way prices are depicted, it is also a common challenge voiced by individuals with dyscalculia. Our design aimed to alleviate dyscalculics' pain points by comparing products to a fixed quantity (i.e. common denominations of currency such as $5, $10, and $20) and to similar products. In our designs, we leveraged past intervention techniques in order to create multimodal solutions that facilitate numeric comparisons in alternative forms.

There are two main comparisons that occur in this process: against a fixed currency amount, or between goods.

  • Comparing numbers to a fixed quantity. This involves any scenario where the individual feels that $7 is a high number, but they are not quite sure how high/or close to $10 (known paper bill) it is.
  • Comparing numbers relative to one another. This involves a scenario where the user wants to compare two items between each other and needs to “experience” a clear difference in monetary cost between them (i.e. $8 > $7)

Scenario

Our user decides to make a sandwich and goes shopping for artisanal bread at her local Jrader Toe’s. At the store, she finds several options in the bread aisle that meet her qualitative specifications (puffy, looks good, everything you’d want for sandwich bread). Then, she proceeds to look at the price tags to do a quantitative comparison but she cannot make a decision because she struggles to efficiently comprehend and compare the different prices.

Design Process

We interviewed experts on dyscalculia and individuals who have the disorder through internet forums, social media groups, and over the phone. From these methods, we gleaned insights that informed an ideation session in which we identified categories of challenges dyscalculics experience. Each group member then worked to identify specific problems within the categories. Finally, we discussed and voted to decide which problem space we wanted to solve for in this project. This helped us identify flaws, opportunities, and key design implications in various avenues, thus shaping our final design.

Informed Brainstorming

To devise possible design implications, we brainstormed through open discussion backed by our research. We created a board for our brainstorming using Miro, and grouped pain points in context per user (yellow post-its). After this, we then started to think through possible design implications (green post-its) that we believed would aid or amend the problem areas in different situations. This provided us with a holistic view of the problems and possible problem spaces.

We found patterns in users' pain points as well as in their efforts to mitigate them. Users generally avoided situations involving mental calculations, and preferred to write things down if they needed to do math throughout their day-to-day activities. They seemed to have difficulty with short-term memory involving numerical values, such as prices, times, and dates. Time as a concept also seemed to be a pain point: a large number of our users had difficulty comprehending or comparing the time needed to do everyday things, as well as perceiving time as a general measurement.

 

Interviews

We spoke with five individuals with dyscalculia directly, which gave us first-hand insight into the day-to-day challenges they face. Another of our interviewees was an expert who had conducted over 100 interviews with individuals with dyscalculia. Her expertise helped us narrow the scope of our project and focus on a more general problem space.

Our expert interviewee highlighted three overarching themes among most of the people she had studied. First, they have a hard time ordering things - not just numbers but also the alphabet - keeping a sequence, and estimating the distance between elements. Second, they are slow at reading a number, as if they had never seen a number before; they never become fast. Third, they always count, even for a basic operation like 2 + 3. Some people have orientation problems or working memory difficulties, but not all do.

 

We started to sketch the potential design ideas, and grouped them by the required technology stack. We dived into each solution, walking other members of the group through attempting to solve pain points in the scenario with the solution on hand. This exercise helped us identify opportunities in various avenues that ultimately shaped our three alternative designs.

 

 First Design Alternative

Research showed that individuals with dyscalculia find it hard to recognize the price tag values commonly used in supermarkets, which makes it equally difficult to make comparisons or estimate a price. One design-relevant piece of information we found during an interview was that dyscalculic individuals find it much easier to understand "whole quantities" such as 10, 20, and 30. We built upon this insight in our first solution and devised a way to compare the prices of items to the next common denomination of a banknote. This provides a visual representation of the cost along with the ability to estimate the price relative to a familiar and relevant banknote.

Price tags at stores would have a pictographic bank note of the next largest denomination attached. For example, if the price of a product is $3, a $5 bill will be visible next to the price tag. If the cost of a product is $7, the bill presented next to the tag will be $10.
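The denomination logic behind the tag can be illustrated with a small sketch; the US bill values and function names below are assumptions for illustration, and the fill fraction corresponds to the progressive highlight described below.

```python
# Common US banknote denominations, smallest to largest (assumed for illustration).
DENOMINATIONS = [1, 5, 10, 20, 50, 100]

def next_denomination(price):
    """Return the smallest banknote value that is greater than or equal to the price."""
    for note in DENOMINATIONS:
        if price <= note:
            return note
    return DENOMINATIONS[-1]

def tag_highlight(price):
    """Which banknote to picture on the tag, and what fraction of it to highlight."""
    note = next_denomination(price)
    return note, price / note

print(tag_highlight(3))  # (5, 0.6)  -> picture a $5 bill, 60% highlighted
print(tag_highlight(7))  # (10, 0.7) -> picture a $10 bill, 70% highlighted
```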

Another prototype of the same idea, with subtle changes to make the numeric text more accessible.

 

The three primary components in this design are 1) the physical tag, which shows the information alongside a traditional product tag, 2) the banknote design, indicating the next banknote denomination, and 3) the progressive highlight of the banknote, representing the price of the product as compared to the value of the banknote. We chose banknotes as a common reference because, through our research, we found that even dyscalculic individuals are familiar with them, mainly because of the frequency of their use in daily life. Even though most transactions are card based these days, physical currency still remains a familiar representation of money.

The choice of making this a physical tag comes from ease of use. We did not want to require any user to own a particular device or app for this solution. Even though there is added overhead on the side of the supermarket, it is a relatively easy change to integrate; supermarkets already print out tags for their products. This design was an attempt to make the solution truly inclusive, and not dependent on the user's access to technology.

Second Design Alternative

 

Out of many factors that determine shopping decisions, we identified two major parameters:

  • Product Affinity:  “is the product suitable?” or “how much does the user want the product?”
  • Product Price: “what is its price?” or “is the user willing to pay this price?”

Both of these are largely dependent on comparison. Information about how a product's price compares to other similar products is essential in making an informed decision. Our second design alternative tackles the problem space from this direction. It allows for quick and easy comparisons across items of the user's choice. This is done through a visual, non-numeric representation: a bar graph.

 

The core purpose of this design alternative is to enable users to compare prices between products relative to each other and to absolute denominations. The user scans the barcode from the product price tag  and loads the price into the app. The barcode pulls in all relevant product information, including the product’s image, from the company’s online server. This allows for easy visual comparison and identification.

In the main UI of the app, there is a bar graph, where each bar represents the cost of each item that the user has scanned. As the user scans more items, the graph axis readjusts to display the next highest banknote denomination (if $7 is the cost of the most expensive item scanned, the y-axis of the bar graph goes up to $10). Every bar on the y axis also scales accordingly.

Each of the bars in the graph will have a picture of the item scanned right below for quick recollection. A maximum of 4 items will be compared simultaneously on the UI to account for cognitive load. Every newly scanned item after the 4th item replaces the earliest scanned item (first in first out).
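A minimal sketch of the scan list and axis behavior described above; the item names, prices, and denominations are placeholders, and a fixed-length queue stands in for the first-in-first-out replacement.

```python
from collections import deque

DENOMINATIONS = [1, 5, 10, 20, 50, 100]  # assumed banknote values
MAX_ITEMS = 4  # only four items are compared at once to limit cognitive load

scanned = deque(maxlen=MAX_ITEMS)  # oldest scan is dropped automatically (FIFO)

def axis_maximum(items):
    """The y-axis tops out at the next banknote above the most expensive item."""
    top_price = max(price for _, price in items)
    return next(note for note in DENOMINATIONS if note >= top_price)

def bar_heights(items):
    """Each bar's height as a fraction of the current axis maximum."""
    axis = axis_maximum(items)
    return {name: price / axis for name, price in items}

for item in [("bread", 3.0), ("milk", 2.5), ("cheese", 7.0)]:
    scanned.append(item)

print(axis_maximum(scanned))  # 10, since the most expensive item scanned is $7
print(bar_heights(scanned))   # every bar rescaled against the $10 axis
```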

Upon tapping on the image/bar of an item, a secondary screen appears with a magnified picture of the item and both the numerical and non-numerical description of the cost. There will be the option for audio output if the user clicks on the speaker icon and the user can hear the price spoken to them. Furthermore, the bar graphs corresponding to each  item scanned will be highlighted in a high contrast color on the UI. This increases scannability across the various items. The Open Dyslexic font is used throughout the app for all the text, further enabling the user to scan the information on their device.

 

For individuals with dyscalculia, comparing shapes is typically easier than comparing quantities. In fact, learning through shapes has been a predominant technique for teaching math to children with dyscalculia. This app idea enables users to compare available products through the display of prices as proportional shapes.

Third Design Alternative

When shopping, a user usually has an estimate of how much they are willing to pay for a particular product. This is often used as a first filter for buying cheap, normal, or premium products. This design alternative gives the user the ability to have such quick overviews. Using recent developments in spatial tracking, depth mapping, and augmented reality, it employs multiple techniques of multimodal sensory stimulation to help the user get "a glimpse" of the available products without having to spend too much time. For this solution, a user would need a phone capable of supporting augmented reality apps. Further, their dyscalculic symptoms should not be so severe that they cannot decide the price range they are interested in.

 

The user inputs a target price range and scans the items on store shelves through a mobile AR app. As the user scans the shelf, the app produces an AR overlay, vibration pattern, and sound to indicate which items are within the given price range. This multimodal feedback occurs in a redundant manner. It is designed such that users use vibration and sound to make the first decision on whether the chosen product warrants their further attention (whether it is in the price range).

If the user scans single items, he or she receives auditory and haptic feedback for each item. A bell chime is played for products within the price range, a high-pitched error tone for prices higher than the price range, and a low-pitched error tone for prices that are lower. Users also receive haptic feedback consisting of a double vibration along with the error tone for every product that is out of the price range. The app is designed so that users can rely on the tone and haptic feedback to get a "first glance" of the price without even needing to look at the screen. If they want, they can look further into the price comparison.
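A minimal sketch of this feedback mapping; the cue names, overlay labels, and price-range values are illustrative assumptions, and a real prototype would trigger the sounds, vibrations, and AR overlays through the platform's mobile and AR APIs.

```python
def price_feedback(price, low, high):
    """Map a scanned price against the target range to multimodal cues."""
    if low <= price <= high:
        return {"sound": "bell_chime", "vibration": None, "overlay": "in_range"}
    if price > high:
        return {"sound": "high_pitch_error", "vibration": "double", "overlay": "too_high"}
    return {"sound": "low_pitch_error", "vibration": "double", "overlay": "too_low"}

# A user looking for items between $3 and $6:
print(price_feedback(4.5, 3, 6))  # in range -> bell chime, no vibration
print(price_feedback(8.0, 3, 6))  # too high -> high-pitched tone + double vibration
print(price_feedback(2.0, 3, 6))  # too low  -> low-pitched tone + double vibration
```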


This design facilitates the ability to look at product prices ‘at a glance’. The overview feature allows users to quickly identify products within their price range. The design first provides the user with non-visual cues on whether a product fits in their price range. Only then do they need to look at their screens to see where a product’s price falls in their budget. The color coded mechanism chosen is intuitive, widely used, and conveys information very quickly.

An important ability of augmented reality is that it can map virtual elements to objects in the surrounding physical space. The overlays, once placed over an object, stay there irrespective of how the user moves around. This also allows users to stand back, review, and return to all the products they have chosen.

The use of augmented reality also opens up the opportunity to display any other information the user might require over the product. A color overlay was chosen for this design because it succinctly conveys all the necessary information as quickly as possible.

 

System Prototype

Description of the System Prototype

Before entering the store

This solution is meant to be utilized in the period when the user is setting a budget and deciding upon a shopping list for their next grocery run. During this phase, the user has a conceptual idea of how much they would like to spend, and they input this in the app. The idea is to do all the planning in advance, so that the user does not need to do addition or subtraction in the store, where they may also need to concentrate on their shopping list and on finding items. Once the overall budget is set, individual budgets for each item on the shopping list are set using a graphical range representing product price in relation to the overall budget.

In-Store

Our tool is meant to be a mobile app, but has the potential to be integrated with a smartwatch with reduced functionality. In the store, the mobile app shows possible products the user can buy, in each category specified on their shopping list, that fit into their budget. By including the app on their smartwatch, users are able to check their shopping list "on the go", or in transit during their shopping experience. This is ideal in case the user forgets an item and must check the list to see what to search for next. If the app is on a smartwatch, even with reduced functionality, the user may (1) see the list of items they had previously created prior to shopping, and (2) by utilizing the store's online inventory, the watch may display product visuals for the user to compare color and branded packaging with the products they are viewing on the shelves.

Back-end

The database of the application is connected to the current store’s website/application using an API to request product information (i.e. product names, prices, descriptions). By doing so, users can see the updated product price on the application in real-time. When the user sets the budget for the individual categories, the app calculates the actual dollar amount and cross checks across all items that fit in the selected budget.
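A minimal sketch of this back-end step, assuming a hypothetical store endpoint that returns product names and prices as JSON; the URL, parameters, and field names are illustrative, not an actual store API.

```python
import requests  # assumes the requests library is available

STORE_API = "https://example-store.test/api/products"  # hypothetical endpoint

def products_within_budget(category, category_budget):
    """Fetch live product data for a category and keep only items within the budget."""
    response = requests.get(STORE_API, params={"category": category}, timeout=5)
    response.raise_for_status()
    products = response.json()  # e.g. [{"name": "Whole Wheat", "price": 3.49}, ...]
    return [p for p in products if p["price"] <= category_budget]

# e.g. products_within_budget("bread", 5.00) would return only breads at or under $5
```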

By having a visual representation of how much the price of the item is with respect to the category budget, we can a) remove numbers from the representation of its price and b) allow users to visually compare multiple items with each other and see if they fit within their budget. The app also displays an empty bar (representing the overall budget) at the top of the page, so users can view an estimation of how much of the budget remains. If the user wants to purchase different items within the same category, they can click on the check mark and, if allowed (depending on item cost and budget), increase the quantity. When the user checks one of the items, the budget bar fills up partially (or completely, if the item takes the entire budget) and the individual price bars scale to reflect the cost of the item with respect to how much money is left.

Upon opening the app, the user is presented with a main screen where they can set the budget for their next grocery trip. To set the budget, they operate a sliding interface: the farther the slider extends to the right, the higher the budget, and as it extends to the left, the budget decreases. This is beneficial for individuals with dyscalculia, as it specifically does not involve any numeric input. In this two-step process, the first step is meant to help users identify the largest bill they would need for their shopping trip. This is important because many adults with dyscalculia are familiar with conventional currency amounts and are able to more easily choose a value from these options. The second step allows users to fine-tune their budget relative to a predetermined value they understand well.
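The two-step mapping from slider positions to a budget could work roughly as sketched below; the bill values, slider ranges, and the exact fine-tuning rule are assumptions for illustration, not the app's final specification.

```python
BILLS = [10, 20, 50, 100]  # familiar denominations offered in the coarse step (assumed)

def coarse_budget(slider_position):
    """Map a 0..1 slider position onto the nearest familiar bill."""
    index = min(int(slider_position * len(BILLS)), len(BILLS) - 1)
    return BILLS[index]

def fine_tune(largest_bill, slider_position):
    """Second slider sets the final budget as a fraction of the chosen bill."""
    return round(largest_bill * slider_position, 2)

bill = coarse_budget(0.7)      # -> 50, the largest bill the user expects to need
budget = fine_tune(bill, 0.8)  # -> 40.0, matching the $40 budget in the scenario below
print(bill, budget)
```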

In this scenario, the user selects $40 as the final budget for the next grocery trip. Once this is done, they are presented with different categories (e.g. bread, dairy, and meat). They then set individual budgets for those categories as well. A design intervention we applied here is scaling the budget for individual categories as a factor of the main overall budget. In this manner, the user has a visual, non-numeric representation of how their shopping choices may affect the total budget they have allocated. Further, they can also set and visually verify how these shopping choices affect their individual category goals.

Once the budget for the individual categories is set, the main screen shows a list of the different categories set by the user. Upon clicking on a category, a drop-down appears with images of the different items the store has available that fit in the set budget. At the bottom, each of these items has a horizontal bar, color coded to symbolize the share of the budget that this item will take. In this manner, we simplify the decision-making process by presenting users with only the products that fit their wants and needs along with their budget restrictions.

Once at the store, the user can select a category (say, bread) and then go through the different options that fit in the budget, i.e. whole wheat, grain, sourdough, etc. The user can look at the bread shelves and locate the ones listed on the app. After surveying texture, visual appearance, and so on, the user can select the one they like the most from all within-budget options. Going through the store, the user can gauge which items fit in the budget and which ones don't, and select accordingly. The intention of the app is not to restrict users to buying only the items that are within their budget, but rather to provide assistance in making an informed choice.

 

At any point during the process, the user can tap on the budget bar and see how much of the bread budget they have already used.

 

 

 

Evaluation

We simulated the effects of dyscalculia by obfuscating all numerals in the places where numbers are usually located in the shopping process. This was done using a script that is difficult to read. Through pilot testing, we determined that this technique allows the participant to understand the number, but the numeral recognition time is significantly increased compared to a normal scenario. In this manner, we simulated a major symptom of dyscalculia.

Think-alouds

We presented our participants with an interactive prototype app and gave them high-level tasks that took them through a user workflow. For each task, we asked the participants to talk through what they were thinking when they interacted with the prototype’s UI. Through the think-alouds, we aimed to gather qualitative feedback on elements within the UI such as colors, buttons, CTAs, and  interaction paradigms.

Testing The Immersive Shopping Experience

To simulate an in-store experience, we set up an open area with tables ("shelves") and items on them. Behind each item was an image of what that item represented (e.g. a specific brand of bread or type of milk), along with a price tag written in Roman numerals. At the beginning of their "shopping trip", participants were given a small basket to place items into. This was to test the interaction with a mobile app while holding and picking up other objects. The whole in-store part of the experience didn't take more than 6 minutes per participant.
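As an illustration of the obfuscation, below is a sketch of rendering whole-dollar prices as Roman numerals, as used on the simulated price tags; the helper is an illustrative stand-in, not the exact script from our study materials.

```python
# Roman numeral pairs sufficient for small whole-dollar prices.
ROMAN = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(value):
    """Render a small positive integer price as Roman numerals."""
    out = []
    for amount, symbol in ROMAN:
        while value >= amount:
            out.append(symbol)
            value -= amount
    return "".join(out)

# Example simulated tags: $3 -> III, $7 -> VII, $9 -> IX
print(to_roman(3), to_roman(7), to_roman(9))
```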

Participants who had completed the think-aloud previously were able to use the app to select specific items before shopping. The items listed in our prototype were a subset of the items in our shopping experience. The tasks the participants completed in this evaluation phase mirrored the tasks in the think-aloud:

  1. Pick as many breads as they would like, staying within a $10 budget, and place the items in the shopping basket.
  2. Pick as many milks as they would like, staying within a $5 budget, and place the items in the shopping basket.
  3. Return the basket to the experimenter and complete the NASA TLX form.

Evaluation Results

NASA TLX Data

To evaluate the shopping experience with and without the app, we had participants fill out a NASA TLX form as described in D3. The idea is to measure task load on different fronts (i.e. mental, physical, temporal, etc.) to determine whether the app is an actual improvement over reading complicated price tags. From this we learned that people who used the app experienced higher mental demand and perceived their performance to be worse. However, on average they thought the task was less physically and temporally demanding, and they rated lower levels of effort and frustration. Among all these categories, mental demand was by far the one where users agreed most on the difference between app and no app, with a 5-point difference in rating, followed by physical demand with a ~4-point difference.
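A minimal sketch of the comparison step; the ratings below are placeholders chosen only to mirror the direction of the differences described above, not our collected TLX data.

```python
from statistics import mean

# Placeholder raw TLX ratings per participant (not study data).
tlx_app =     {"mental": [14, 15, 16], "physical": [4, 5, 6],  "frustration": [5, 6, 4]}
tlx_control = {"mental": [10, 9, 11],  "physical": [9, 10, 8], "frustration": [8, 9, 7]}

def mean_differences(group_a, group_b):
    """Per-dimension difference of mean ratings (group_a minus group_b)."""
    return {dim: round(mean(group_a[dim]) - mean(group_b[dim]), 1) for dim in group_a}

# Positive values mean the app group rated that dimension as more demanding.
print(mean_differences(tlx_app, tlx_control))
# {'mental': 5.0, 'physical': -4.0, 'frustration': -3.0}
```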

Qualitative Findings

We consolidated our research findings and ranked them by the number of occurrences, i.e. the number of times users brought up a topic or issue.

Participants found the bars and the accompanying graphs confusing. In three instances, participants found that the price display through the bars felt inconsistent. They were more concerned about the price of the product and felt that the bars conveyed less information than was required to make an informed decision.

There was stronger negative pushback against the color bar display with the arrow.

According to our design, whenever an item was selected, the arrow would scale to represent how much the price of the product was in relation to the remaining budget. But in user testing, we found that participants attached various other meanings to this bar and the arrow. There was also feedback that our approach of removing numbers from the shopping experience, to some extent, took control away from the users.

In our design, when the user selects items from subcategories (after they have set the budgets), they are provided with a drop-down menu for each category. To the left of each drop-down menu, the number of items in that category was displayed. Two of the three participants who used the app did not understand what this value meant.

 

Discussion and Synthesis of Evaluation Results

In this section, we look at our user testing holistically by combining the data we collected from the think-alouds and the immersive shopping experience evaluation, and by considering the NASA TLX results alongside the qualitative findings. We also perform a deeper analysis of the primary issues found in our evaluation and consider how future designs could address them.

Temporal Demand

One of our highlighted quantitative results is that users perceived the task to take less time (close to half) when using the app than without. The additional time taken to shop is one of the handicaps we are trying to solve for people with dyscalculia and our app appears to address it well.

Frustration Level

Another quantitative result of interest is frustration level. Users in our control group were more frustrated (18% more) during the experience. On the other hand, participants in the test group experienced less frustration because the app performed the necessary operations to show them only products that are within their budget. This is also a pain point commonly shared in our online interviews and forum data gathering.

Performance 

Almost all test users agreed that they performed worse than the control group (25% less), which is interesting given that most of them said they felt more confident about not going over budget when they used the app. Since the task was to get as many items as possible within the budget, we would have assumed that people with the app would rate their performance at 100%, but that wasn't the case. Perhaps this was due to participants not being sure they had operated the app properly, which could be addressed by implementing some of the feedback we received regarding explanation screens, onboarding, etc.

Individual Item Bar Graph

There were multiple points of confusion in this interaction. The first of the two primary issues was that, in users' mental models, the color-coded nature of the bar conveyed diverse meanings which were not related to the price of the item, as we had intended it to be. Our data shows that this scale does not directly translate to the attributes that are associated with the price of a product.

The arrow in our design scaled proportionally to show a relation between the price of a particular item and the budget that was remaining. People think of prices as absolute parameters, and rightly so. Unless it is explicitly clear what the moving arrow means, it is difficult for users to understand. Relational price is something that rarely comes to mind, mainly because price is supposed to be fixed and few systems use relational values when dealing with prices. Our participants eagerly adopted this system once they understood what it meant, which indicates that, as this is unfamiliar territory, users will need some training before they become used to it.

Potential Prototype Improvements

Overall Budget Graph Design

As participants added products, the “overall budget used” graph increased along with each category budget's graph. We may amend the design to add the copy “add products to meet your overall budget”, giving the user a clearer action for adjusting the graph's values.

We also amended the overall budget graph so it remains “sticky”, or omnipresent, at the top of the screen. When users scroll down to add products and view all of their food categories, they can now see how their actions affect the overall budget.

 

Category Budget Graph Design

The category budget selection process was something we did not explicitly define for participants in our prototype. We believe this contributed to confusion when they selected their category budgets. In our amended design, we may show the numerical value selected by the user for the category budget adjacent to the category graph.

Product Pricing & Product Selection

We may include the product's price very subtly, with decreased font size and opacity, placed underneath the product's name and intended for reference only. Although participants in our control group exhibited more frustration than participants with the app, we would still like to amend the in-app design to further reduce frustration during product selection.

 

Disabled Products Over Budget

We have realised that disabling a product, or prohibiting the user from adding it to their shopping list for any reason, could be seen as revoking control from the user. In our amended design we depart from this “prohibitory” functionality and instead warn the user while still allowing them to select the product. A product that is over budget will show a warning icon on its product graph, along with warning text naming the over-budget category (e.g. “OVER BREAD”).

 

 

Non Visual Graphs

How Can Visually Impaired Users Perceive Visual Graphs Like Bar and Pie Charts

Field: Accessibility, Sonification, Interaction Design

Duration: 9 Months (2019)

Skills: Usability Testing, Rapid Prototyping, Sound Technology, Python, Processing (IDE)

Team: Prabodh Sakhardande (Lead), Prof. Anirudha Joshi (PI)

Role: Designer, Researcher, Developer

Summary

Visual graphs and charts are an intrinsic part of our everyday lives, but their visual-only nature makes them inaccessible to visually impaired users. This project focused on designing a solution that makes visual graphs and charts accessible to visually impaired users. It followed a design process of research, exploration, ideation, and divergence, before converging on the solution that would make the most impact. This solution was then prototyped and tested with 20 sighted and 20 visually impaired users, leading to a solid, research-backed design framework for the creation of auditory graphs.
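As a rough illustration of the sonification idea (not the study prototype, whose mapping and tooling differed), the sketch below maps bar values onto a pitch range and renders one tone per bar using only the Python standard library; the data values, frequency range, and timing are arbitrary choices for the example.

```python
# Illustrative sketch: rendering a bar chart as a sequence of tones, where a
# taller bar maps to a higher pitch. Standard library only.
import math
import struct
import wave

SAMPLE_RATE = 44100

def value_to_frequency(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Linearly map a data value onto a pitch range (A3 to A5 here)."""
    if vmax == vmin:
        return fmin
    return fmin + (value - vmin) / (vmax - vmin) * (fmax - fmin)

def tone(frequency, duration=0.4, amplitude=0.5):
    """Generate one sine tone as a list of 16-bit samples."""
    n = int(SAMPLE_RATE * duration)
    return [int(amplitude * 32767 * math.sin(2 * math.pi * frequency * i / SAMPLE_RATE))
            for i in range(n)]

def sonify_bars(values, path="bars.wav"):
    """Write one tone per bar, left to right, to a mono WAV file."""
    samples = []
    for v in values:
        samples += tone(value_to_frequency(v, min(values), max(values)))
        samples += [0] * int(SAMPLE_RATE * 0.1)  # short silent gap between bars
    with wave.open(path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(struct.pack("<" + "h" * len(samples), *samples))

sonify_bars([3, 7, 5, 9, 2])  # five bars rendered as five tones of varying pitch
```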

 

FlytLog

FlytLog

Field: Interaction Design, Software Development

Duration: 3 Months (2017)

Skills: User Research, User Interface Design

Team: Prabodh Sakhardande, Karan Uderani, Pradeep Gidhani, Sarvashish Das

Role: Designer, Developer

Summary

Drone pilots typically make multiple drone flights in one go and often need recorded data from these flights for debugging and for making hardware tweaks before future flights. The drone controller is the device in charge of the drone flight; it houses all the required sensors and computes the flight parameters from input data. The log data from the drone controller is of utmost importance when evaluating flights in case of unexpected issues.

With this in mind, FlytOS (a proprietary drone operating system developed by FlytBase Inc.) needed a robust implementation for retrieving log data from the onboard autopilot onto the FlytOS dashboard, where it would be accessible for users to view. It also needed a way for these logs to be uploaded and displayed on the cloud-based user console.

In this project I was involved in designing how logs are presented to users and the workflow they go through for log download, upload, and retrieval. I was also responsible for retrieving the log data from the drone controller onto the FlytOS system so the user could view it remotely.

Through user input we identified users' key wants, and we supplemented this with additional surveys to determine their needs. Our research showed that users needed a seamless and robust solution: in most cases they needed access only to the most recent logs, and they needed it fast. The user interface was therefore designed to be as seamless as possible, with special attention given to ensuring that users would not require any additional onboarding to use the system.

The log retrieval system was designed with this focus. On selecting the option, users were presented with a list of all available logs, from which they could view or download only the ones they required. Interactions were designed around text-based icons that testing showed to be easily identifiable, so that no special documentation was needed. The interaction touchpoints required to access recent logs were kept to a minimum, and log upload to the cloud was automated in the background to reduce user load. After internal user testing the feature was deployed, and user feedback was gathered and incorporated at multiple points.
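Purely as an illustration of this client-side flow, the sketch below lists the available logs and downloads a selected one; the base address, endpoint paths, and field names are hypothetical placeholders, not the actual FlytOS API.

```python
# Hypothetical sketch of the log-retrieval flow; endpoint paths and field
# names are illustrative placeholders only, not the real FlytOS API.
import requests

BASE = "http://flyt-device.local/api"  # placeholder address

def list_logs():
    """Fetch the list of available flight logs, most recent first."""
    logs = requests.get(f"{BASE}/logs").json()
    return sorted(logs, key=lambda log: log["timestamp"], reverse=True)

def download_log(log_id, dest):
    """Stream a single log file to disk so large logs don't sit in memory."""
    with requests.get(f"{BASE}/logs/{log_id}", stream=True) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)

# Typical use: show the most recent logs, let the user pick, then download only that one.
for log in list_logs()[:5]:
    print(log["id"], log["timestamp"])
```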

 

Blowing Messages

Blowing Messages (Like Kisses)

Humans have a tendency to build mental constructs around themselves and adhere to standard practices for everyday actions, like “switch on the lights” or “press a button to send a message”. Over time we have become so used to these constructs that anything beyond our perception seems like magic.

Blowing Messages explores how constructing an experience tangential to these constructs can create a magical experience for the user. The first of a series of projects undertaken under the mentorship of Prof. Rishikesh Joshi, Blowing Messages is an interactive exhibition project that lets anyone in the vicinity type a message on their phone and then blow it towards a projector screen, where it is displayed.

Cognitive Lights

Cognitive Lights

Humans form bonds with everything around them. It may not always be a strong emotional bond, but we do have a tendency to get attached even to non-living objects. Cognitive Lights explores how interactive lighting can bring about a warm feeling of presence. It aims to study the point at which a human-machine relationship can form an emotional link, and how much cognition the machine should exhibit for this to happen.

Lights have been an integral part of human society for years. Apart from technological advancements, they have remained largely the same, built for one purpose: illumination. At the same time, changes in light are among the things we perceive most readily. It is for this reason that interactive lights were chosen for this project.

 

 

The current behaviours exhibited by Cognitive Lights include detecting presence in the room and popping on (with a slight animation) when a person walks in. The lights fade out over a period of 2 minutes if you don't give them attention. They can be spoken to and will change colour and brightness based on the way you talk, responding in a corresponding fashion the moment you stop talking, similar to a conversation with a person. At times they also exhibit random “breathing” to catch your attention.
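For a sense of how the “breathing” behaviour works, here is an illustrative sketch (not the installation code) of a slow sinusoidal brightness curve; the period, brightness range, and the lamp-control call are assumptions made for the example.

```python
# Illustrative sketch of the "breathing" brightness curve: brightness gently
# oscillates like slow breathing to draw attention to the light.
import math

def breathing_brightness(t, period=4.0, low=0.2, high=0.8):
    """Return a brightness in [low, high] following a slow sine 'breath' at time t (seconds)."""
    phase = (math.sin(2 * math.pi * t / period) + 1) / 2  # normalised 0..1
    return low + phase * (high - low)

# Driving a lamp would look roughly like this (set_lamp_brightness is a
# hypothetical hardware call, so it is left commented out):
# for step in range(80):
#     set_lamp_brightness(breathing_brightness(step * 0.1))
```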

Further work is being done on extending the concept beyond lights, adding audio (not speech), and fine-tuning the aggregation of sensor readings.

 

KeyTouch

KeyTouch

Many of the systems we use are optimised for efficiency but not always for intuitiveness. An ongoing project, KeyTouch addresses one such area of human-computer interaction: the use (or rather the hindrance) of a mouse on notebooks. The touchpad has been around for decades, and over the years we have adapted ourselves to make the most of it. This project asks: what would be the most natural way to interact with a computer?

A significant point of irritation when typing is having to take our eyes and hands away from the keyboard to use the mouse; it breaks our line of thought and is a deterrent. The mouse is critical to computer systems, and this project in no way tries to replace it. What KeyTouch aims to do is naturalise the menial tasks that do not necessarily need our attention to be taken away from the task at hand (literally).

On computers, we have a substantial array of keys in front of us. Can this be used like a big touchpad, albeit with lower resolution? Can simple tasks like changing windows, brightness, volume, etc. be done without the need to remember shortcuts, complex menus, and specific keys? KeyTouch uses gestures that can be “drawn” by dragging a finger across the keyboard (think of pressing all the keys across a piano with one finger).

As a proof of concept, the upper row of numeric keys is used to control volume. The user can press any key in that row and then drag left or right to decrease or increase the volume. This method was found to be much more intuitive and natural than searching for the specific function key and pressing it.
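A minimal sketch of that proof-of-concept logic is shown below; capturing key events and setting the system volume are platform-specific, so they are left out, and the step size per key is an assumed value rather than the one used in the project.

```python
# Minimal sketch of the drag-to-volume mapping. Key-event capture and the
# system volume call are platform-specific and therefore omitted here.

NUMBER_ROW = list("1234567890")  # left-to-right order of the top numeric row
STEP_PERCENT = 5                 # volume change per key travelled (assumed value)

def volume_delta(pressed_keys):
    """Translate a drag across the number row into a volume change.

    pressed_keys is the sequence of keys touched during one drag, in order.
    Dragging right (e.g. '3' -> '7') raises the volume; dragging left lowers it.
    """
    keys = [k for k in pressed_keys if k in NUMBER_ROW]
    if len(keys) < 2:
        return 0  # a single key press is not a gesture
    travel = NUMBER_ROW.index(keys[-1]) - NUMBER_ROW.index(keys[0])
    return travel * STEP_PERCENT

print(volume_delta(["3", "4", "5", "6", "7"]))  # +20: drag right, volume up
print(volume_delta(["8", "6", "5"]))            # -15: drag left, volume down
```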

Current work includes the extension of this concept to the whole keyboard and development of a framework to accurately detect gestures.

 

Persuasive Alarm

Persuasive Alarm

An alarm clock that does not stop until you get up and stand in front of it. It also turns the lights on and off for those who need less persuasion, and automatically plays the radio station of your choice once you are awake to get the day started. Based on an open source project by Andrew J. Pierce (pi_alarm). Contributions to the original source include presence detection, the addition of a radio and its controls, and the display of random quotes from the fortune library.
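As a rough sketch of the “stand in front of it” behaviour (not the pi_alarm code itself), the loop below keeps the alarm ringing until a presence check succeeds for a continuous interval; the sensor check, light control, radio playback, and the required standing time are stubbed placeholders.

```python
# Rough sketch of the persuasive-alarm loop; hardware-specific pieces are
# passed in or stubbed, since they depend on the actual setup.
import time

STAND_SECONDS = 10  # how long the user must stay detected before the alarm stops (assumed)

def person_detected():
    """Placeholder for a PIR-sensor or camera presence check."""
    return False

def run_alarm(ring, lights_on, play_radio):
    """Keep ringing until someone stands in front of the clock long enough, then play the radio."""
    lights_on()
    detected_since = None
    while True:
        ring()
        if person_detected():
            detected_since = detected_since or time.time()
            if time.time() - detected_since >= STAND_SECONDS:
                break  # user is up and present: stop the alarm
        else:
            detected_since = None  # they walked away, start the count again
        time.sleep(1)
    play_radio()
```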

 

Exploring Touch

Exploring Touch

Touch is a fundamental human sense that goes far beyond a mere function.
These projects explore how touch can go beyond the norm: from everyday devices that respond to touch, to actually transmitting data through human touch. A torch that turns on through the touch of a single person, or through a chain of people holding hands. Research into exchanging electronic information through physical touch. The next time you meet someone new, just a handshake could exchange your business cards.