Project Mission

Leverage RFID technology to improve and scale library self-checkout

Deliverables

iOS/Android checkout experience


Product Designer

Denise Macalino

Company

BiblioCommons (bibliocommons.com)


Product Manager

Francis Kim

Duration

3 sprints (over 6 weeks)

Engineers

Eugene Kim
Navpreet Kaur


Overview

BiblioCommons is a software company that offers SaaS solutions to public libraries across North America. As the Product Designer for their Apps team (BiblioApps), I worked on improving the self-checkout experience using RFID technology instead of traditional barcode self-checkout.

As the BiblioApps Product Designer, my role was to: 

  • Conduct research to understand our libraries’ experiences with our existing barcode self-checkout 

  • Explore how analogous experiences use RFID to leverage mobile technology 

  • Identify the user journeys for our library patrons and ensure our self-checkout experience is simple and optimized

  • Design an experience that considers both mobile and IRL (in-person) limitations

 

So… what is RFID technology?

RFID (Radio Frequency Identification) is, essentially, a way for small devices to transfer data wirelessly over radio frequency.

Think: tapping your condo key fob against a locked door to unlock it.

Most libraries already use RFID technology to organize library items, so implementing this form of self-checkout is low effort and high impact. In other words, a no-brainer.
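For a sense of what this looks like on the mobile side, here is a minimal, hypothetical sketch of detecting an item's RFID tag on iOS using Core NFC. It assumes the library uses ISO 15693 (HF) tags, which Core NFC can read; it is not BiblioCommons' actual implementation.

```swift
import Foundation
import CoreNFC

// Hypothetical sketch: detecting a library item's RFID tag on iOS with Core NFC.
// Assumes ISO 15693 (HF) tags, which many library RFID systems use.
final class ItemTagReader: NSObject, NFCTagReaderSessionDelegate {
    private var session: NFCTagReaderSession?

    func beginScan() {
        // Beginning a session automatically presents the system scan sheet.
        session = NFCTagReaderSession(pollingOption: .iso15693, delegate: self, queue: nil)
        session?.begin()
    }

    func tagReaderSessionDidBecomeActive(_ session: NFCTagReaderSession) {}

    func tagReaderSession(_ session: NFCTagReaderSession, didInvalidateWithError error: Error) {
        // Called when the scan is cancelled, times out, or finishes.
    }

    func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
        guard let first = tags.first, case let .iso15693(tag) = first else {
            session.invalidate(errorMessage: "We couldn't read this tag. Please try again.")
            return
        }
        session.connect(to: first) { _ in
            // In a real app, this identifier would be used to look up
            // the item in the library's catalogue (ILS) before checkout.
            let id = tag.identifier.map { String(format: "%02x", $0) }.joined()
            print("Detected tag: \(id)")
            session.invalidate()
        }
    }
}
```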

 

Project Outcomes

Project Accomplishments in a nutshell

 

Project Context

Barcode self-checkout requires manual effort for libraries to implement. Most libraries, however, already use RFID technology to organize their collections. By implementing RFID self-checkout, we can accommodate hundreds more libraries and thereby increase sales to $400K USD a year.

 

Problem Statement

RFID is a technology libraries understand, but one that users (library patrons) might not. Checking out with RFID may be confusing to users because the interaction is not especially intuitive.

How Might We make RFID self-checkout simple for users to understand?

How Might We ensure that users of different levels of ability and digital literacy can use self-checkout?

 

Project Goals

Overall project goals


 

Process


Understanding the current barcode self-checkout

BiblioCommons had already implemented barcode self-checkout with a few libraries. I mapped out the existing flow to understand what pain points might exist for RFID:

Barcode self-checkout flow

To go one step further, I also visited the Toronto Public Library in person to walk through their self-checkout process and see RFID self-checkout in action!

In-person self-checkout experience


Considerations:

Barcode self-checkout is intuitive because of its match between the system and the real world: users are already familiar with scanning items with a camera or handheld scanning gun at grocery stores.

The RFID checkout experience I needed to design had to match an existing experience that users were already familiar with.

 

Interviewing our libraries

We spoke to three BiblioApps libraries that offer self-checkout. Our goal was to understand:

  • How popular (or unpopular) the feature is among users 

  • What challenges (if any) libraries were having with the current self-checkout

Below I’ve listed the main insights from these conversations:

Library Interview Insights

Creating a system to real world match:

There are two pieces that make this problem particularly challenging: 

  1. Scanning items: 

    Users are likely unfamiliar with scanning RFID tags, and they need to scan each item twice: once to identify the item, and a second time to deactivate its security (see the sketch after this list).

  2. Self-checkout in the real world: 

    Users will be performing self-checkout on their phones, but unlike an e-commerce checkout, they will be checking out a physical item in person.
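To make the two-scan requirement concrete, below is a minimal, hypothetical Swift sketch of that flow as a simple state machine. The identifyItem and deactivateSecurity helpers are placeholders I've invented for illustration, not BiblioCommons' actual API.

```swift
// Hypothetical sketch of the two-scan checkout flow described above.
// identifyItem and deactivateSecurity are invented placeholders, not real APIs.
enum CheckoutStep {
    case scanOneIdentify     // first tap: read the tag and look up the item
    case scanTwoDeactivate   // second tap: deactivate the item's security
    case complete
}

struct CheckoutSession {
    private(set) var step: CheckoutStep = .scanOneIdentify
    private(set) var itemTagID: String?

    mutating func handleScan(tagID: String) {
        switch step {
        case .scanOneIdentify:
            itemTagID = identifyItem(tagID: tagID)   // placeholder: catalogue (ILS) lookup
            step = .scanTwoDeactivate
        case .scanTwoDeactivate:
            // Guard against a different item being scanned the second time.
            guard tagID == itemTagID else { return }
            deactivateSecurity(tagID: tagID)         // placeholder: security deactivation
            step = .complete
        case .complete:
            break                                    // nothing left to do
        }
    }
}

// Stub implementations so the sketch compiles on its own.
func identifyItem(tagID: String) -> String { tagID }
func deactivateSecurity(tagID: String) {}
```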

 

Finding a familiar experience

There was no directly comparable experience, but one came close enough to give us insight into the areas we were most concerned about:

The smartphone wallet!

  • Using your phone to pay involves scanning something with your device (NFC technology) 

  • The experience happens end-to-end on your mobile device 

  • It involves an IRL element - the debit machine

Apple Pay and Google Pay 

We decided to look at both Apple Pay and Google Pay to understand how we could create a more familiar experience.

Google and Apple Pay Flows

Presto - Card reloading 

Another experience that involves both a real-world element and your mobile device is the Presto transit card used by the Toronto Transit Commission. You can reload the card from your phone by tapping it against the physical card.

Presto Reloading Flow

Competitive Analysis Insights

  1. Scanning twice | Apple Pay, much like BiblioCommons’ RFID checkout experience, requires users to scan twice. Apple Pay keeps this simple by including guiding dialogue.

  2. Minimum screens | Google Pay is the simplest experience of the three, keeping the total number of screens and actions users need to perform to a minimum.

  3. Clear steps | Presto guides users by showing them which step they are on at each stage of the reloading process.

 

Mapping Out Our RFID Self-Checkout

Before jumping into our solution, I wanted to map out exactly what the RFID checkout flow would look like, to catch any pain points we hadn’t thought of and make sure the general flow made sense.

RFID self-checkout user journey

Above is the final iteration of the map. We identified three areas where users might run into errors: 

  1. Not finding the RFID tag on the library item

  2. Scanning once, then scanning a different item at the second scan

  3. The scan not working 

So what did we discover would help guide users to understand this new checkout experience?

  1. Clear sequenced steps 

  2. Support for troubleshooting (see the sketch below)
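As a rough illustration (not the shipped copy), here is how those error cases might map to user-facing troubleshooting guidance in Swift. The case names and messages are hypothetical.

```swift
// Hypothetical sketch: mapping the error cases above to troubleshooting guidance.
// Case names and copy are illustrative only, not the shipped strings.
enum CheckoutError {
    case tagNotFound           // 1. user can't find the RFID tag on the item
    case mismatchedSecondScan  // 2. a different item was scanned the second time
    case scanFailed            // 3. the scan didn't complete

    var guidance: String {
        switch self {
        case .tagNotFound:
            return "Can't find the tag? It's usually on the inside cover of the item."
        case .mismatchedSecondScan:
            return "That doesn't match the item from your first scan. Please scan the same item again."
        case .scanFailed:
            return "That scan didn't work. Hold your phone still against the tag and try again."
        }
    }
}
```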

 

Version 1

Version 1: Mid-Fi Mocks

Numbered step process

The first version we explored tackled the problem by making it clear to users what was happening at each step of the checkout process. The numbered approach follows the familiar pattern of the food ordering and delivery apps we referenced.


1. Using progress bars

My approach to our “HMW make RFID self-checkout simple for users to understand?” was to include a progress bar at the top of the screen. 

I referred to food delivery services like UberEats and DoorDash, which show users what is happening with their order at each step (see the sketch after point 3 below).

2. Help/Support 

Because we anticipated that performing RFID self-checkout for the first time would be confusing, I included plenty of instructional text as well as a help screen.

3. Scanning twice

The simple solution for indicating to users that they need to scan twice would have been changing the modal text to “Scan Again”. However, this copy is not editable.
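For illustration only, here is a minimal SwiftUI sketch of the kind of numbered step indicator described in point 1 above. The step labels and styling are hypothetical, not the production design.

```swift
import SwiftUI

// Hypothetical sketch of a numbered step indicator for the checkout flow.
// Step labels and styling are illustrative, not the production design.
struct CheckoutProgressBar: View {
    let steps = ["Scan item", "Scan again", "Done"]
    let currentStep: Int   // index of the step currently in progress

    var body: some View {
        HStack(spacing: 8) {
            ForEach(steps.indices, id: \.self) { index in
                VStack(spacing: 4) {
                    Circle()
                        .fill(index <= currentStep ? Color.accentColor : Color.gray.opacity(0.3))
                        .frame(width: 24, height: 24)
                        .overlay(
                            Text("\(index + 1)")
                                .font(.caption)
                                .foregroundColor(.white)
                        )
                    Text(steps[index])
                        .font(.caption2)
                }
                // Connector line between steps.
                if index < steps.count - 1 {
                    Rectangle()
                        .fill(Color.gray.opacity(0.3))
                        .frame(height: 2)
                }
            }
        }
        .padding()
    }
}

// Usage: CheckoutProgressBar(currentStep: 0) while the first scan is in progress.
```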

 

Version 2

Version 2: Mid-Fi Mocks

Both Scans in background

Another approach I explored with the designer on the Core team was allowing both scans to be performed in the background. This would avoid any confusion that may come with the double scan process.


1. One progress bar

This approach requires less cognitive work from users: they can simply hold their phone on the item they are checking out and wait for the scan to complete.

2. One modal

In this version the second scan still happens, but since the user doesn’t need to know that and can simply wait for both scans to complete, the second modal can be removed.

 

Simplicity isn’t always the answer

Although the approach with both scans occurring in the background seemed the simplest, we ran into several complications that made the original approach the more viable one.

  • iOS Scan Modal: after working through this with the developers, we realized that the iOS scan modal would appear whether we wanted it to or not. This meant users would see a prompt to scan again regardless of whether they knew they needed to scan twice.

  • Error Handling: there are also scenarios in which the RFID tag is faulty or the scan fails. Handling these errors with both scans happening in the background could create a more frustrating experience for users.

In the end, it made more sense to build our initial solution, which uses a numbered progress bar and two clear scans.

 

But how do you test RFID self-checkout?

The problem now was: how the heck do I test this? A Figma prototype would have too many limitations, since the progress bar needed to animate while the user was scanning. So I decided to learn Facebook’s Origami prototyping tool. I could probably write an entire case study just about learning to use Origami, but that’s another story!

Before running live tests, we did a few dry runs with BiblioCommons staff. The main finding from these dry runs was that users did not understand that they needed to perform two scans, which is what we had hypothesized. So, before running live tests, we decided to add onboarding tutorial screens for the first time a user checks out with RFID self-checkout.

After many headaches (and admittedly some tears), I finally put together an Origami prototype to test with library patrons. Because Origami files are enormous, here is a Figma version of what it looked like:

Mid-Fidelity Prototype


Testing

With the help of our amazing VP of Customer Success, we were able to secure three real-life library patrons (a true miracle for anyone who knows how hard it is to recruit relevant testers).

User Testing Demographics


Test Insights

What did we find in our testing?

  • Origami prototype testing is a nightmare. Users needed to download the Origami app to their phone, I then had to email them the prototype (not a link to it, but an Origami file… yes, a whole FILE), and then they had to open the app and share their Zoom screen while going through it.

Other, more helpful testing insights: 

Key Post-Testing Changes

  1. Tutorial screens | Even with the tutorial screens, some users still didn’t know that they needed to scan an item twice, so we decided to remove these extra steps and keep things simple for our MVP.

  2. Concise modal instructions | What we did find is that people followed instructions when they were given direction at each step; it was much less cognitive overload and recall work for users. We indicated the number of scans by simply stating “Scan 1 of 2” and “Scan 2 of 2”.

  3. Unclear checkout completion | The original checkout flow prioritized letting users choose between two CTAs: (1) see the list of items they had just checked out, and (2) check out another item. The issue was that users did not understand they had completed checkout and that simply exiting the screen wouldn’t abandon it. I decided to prioritize a “Finish Checking Out” CTA instead, and the list of checked-out items was moved underneath the library item info.

Most important change for our MVP

Testing revealed that, rather than explaining to users how to perform the scan before the checkout process begins, the simpler solution was to guide them at each step. Instead of asking users to remember the steps up front, each step has clear, concise instructions.

Final Version

This version of the feature: 

  1. Does not have the tutorial screens anymore: users didn’t find enough value in them (plus they felt a bit condescending, a big no-no) 

  2. Uses checkmarks instead of a numbered progress bar to reduce cognitive overload while still making it clear to users how far along they are in their checkout 

  3. Has a minimal number of screens! Self-checkout should be faster, so why bog users down with more screens?

 
 

Reflection

Constraints

  • IRL testing | Due to both COVID and the time/cost of testing in person with live users, we were unable to test with real people and real library items. This proved to be a bigger problem when I did end up conducting remote tests (which were still better than nothing). Sometimes it felt as though the remote tests were testing the prototype more than the checkout experience, and looking back it might have been worth spending more time testing and really pushing for live in-person tests. Once the beta version of the feature has rolled out, it is something we’re considering.

  • No control over RFID | In a perfect world, where only the user experience mattered, we would have scrapped the whole two-scan RFID process: it was inefficient, and having users scan twice was confusing. But even if a single scan were technically possible, this functionality could not be compromised for security reasons; it would have severely impacted loss prevention numbers at our libraries. If it had been possible to deactivate the security and check the item out of the ILS in one scan, that would have been the ideal scenario.

What went well

  • Testing | I am really glad that I pushed for testing, because had we released the feature and then found out that users did not understand that they needed to scan twice, it would have meant going back to the drawing board after beta. Although testing pushed the project out another sprint (two weeks), it ended up saving us months.

  • Communication | Working closely with the PM and engineers on this project allowed us to brainstorm together. This helped me account for the technical constraints, from the customizability of the scan modals to the animation of the progress bar and the exact timing of the scans.

What could have been done differently

  • Showing vs. Telling | Tutorial screens weren’t the best approach to the problem. I was so deep in the problem that I couldn’t see it with clear eyes, and tutorial screens weren’t what users needed. Users don’t want information spelled out to them; they want to not have to think too much during the process.

    Now I think: “How Might We limit cognitive overload when introducing new user experiences?” This is actually something that came up in a current project I am working on: Introducing Multiple Accounts. There has been a push to use tutorial screens to show users how to switch between accounts. But after this project, I learned that, although tutorial screens are a seemingly obvious solution, in reality: 

    • It actually just adds cognitive overload 

    • Great user experiences don’t need to be explained, they just make sense

Learning

  • Less is more | I think we underestimated our users and overestimated the problem. As demonstrated by Apple Pay, users are familiar with scanning twice when using a similar technology. 

  • Task completion needs a definite conclusion | Although on the back end the checkout is technically complete as soon as the second scan is done, users didn’t feel that the task was finished. The fact that they thought they were abandoning checkout when tapping exit told us we needed to add a CTA to finish the task.

  • Users can learn new functionality | RFID scanning for checkout is a new and unfamiliar experience, but with the right guidance, you can create a match between the system and the real world that helps users take advantage of a new technology.

Where do we go from here?

  • The project is far from done, with devs working on it over the next couple of quarters. Because the functionality is complex, involving scanning a physical object with a mobile device, it won’t ship until early Q4 2022. The next phase will be to run in-person tests and verify whether our findings from the remote tests hold up.