Independent blind supermarket shopping is difficult at best. This paper presents ShopTalk, a wearable system that enables a visually impaired shopper to successfully retrieve specific products on small-scale shopping trips. ShopTalk uses exclusively commercial off-the-shelf components and requires no instrumentation of the store. The system relies on the navigation abilities of independent blind navigators and on the inherent structure of supermarkets.
Blindness and low vision, independent shopping, wearable computing
When shopping in a supermarket, a visually impaired person often needs a sighted guide for assistance. In recent years, assistive navigation aids that guide visually impaired people through indoor environments have begun to be developed. Applied to supermarket settings, these technologies promise to let a visually impaired person walk into a supermarket alone and shop independently, without assistance from a sighted friend, family member, or store employee. Independent shopping in a supermarket is a multi-faceted problem. It requires two different types of tasks: macro-navigation in the locomotor space, and searching for a target product in the near-locomotor and haptic spaces. During macro-navigation phases, a shopper must navigate through large, potentially unknown areas of the store - aisles, cashier lanes, open areas - and find the general area of a target product. Once the shopper is in what he or she believes is the general area of the desired product, also known as the target space (3), he or she must search for the specific location of the product.
ShopTalk is a system for small-scale independent supermarket shopping by the blind. It is a wearable system consisting of a computational device, a barcode reader, and a numeric keypad for user data entry (see Figure 1). The output of the system is verbal route and product search directions generated from a topological map. No instrumentation of the store environment is required. The system takes advantage of the fact that many supermarkets place barcodes on the front of the shelf directly beneath each product. In ShopTalk, each shelf barcode becomes a topological position, making every product in the store locatable through verbal directions. A topological map connecting the store entrance, aisle entrances, open areas, and cashier lanes is stored on the computational device. Since the shopper is assumed to have independent O&M skills, ShopTalk acts only as a provider of route and search directions. The basic assumption is that, for small-scale blind grocery shopping, verbal route instructions are sufficient (2).
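To make the route-direction mechanism concrete, the following is a minimal sketch of a topological map with verbal instructions attached to edges and a breadth-first route lookup. The node names, instruction phrasings, and function are illustrative assumptions, not ShopTalk's actual implementation:

```python
from collections import deque

# Illustrative topological map: each directed edge carries the verbal
# instruction for traversing it (node names are assumed, not ShopTalk's).
STORE_MAP = {
    "entrance":  {"open_area": "Walk forward past the cashier lanes."},
    "open_area": {"aisle_9": "Turn left; aisle 9 is the third aisle on your right.",
                  "cashier_3": "Cashier lane 3 is immediately to your right."},
    "aisle_9":   {"open_area": "Walk back to the front of the aisle."},
    "cashier_3": {},
}

def verbal_route(start: str, goal: str) -> list[str]:
    """Breadth-first search over the topological map, returning the
    verbal instruction for each edge along the shortest route."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, directions = queue.popleft()
        if node == goal:
            return directions
        for nxt, instruction in STORE_MAP.get(node, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, directions + [instruction]))
    return []

for step in verbal_route("entrance", "aisle_9"):
    print(step)
```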
Trinetra (4) is another shopping aid, developed at CMU. After the user scans a barcode, Trinetra retrieves the product's name to aid in identifying the object. The system provides no navigation features, leaving it to the shopper to find the product's target space. Even within the target space, the shopper has no way to perform an efficient search for the specific product location. Given that the average supermarket carries 45,000 products (1), finding a specific product without any route or search directions may not be possible.
In ShopTalk, every product in an aisle is located through the following hierarchical chain of information. First, a product is in a specific aisle. Next, it is on either the left or the right side of the aisle. The next level is the shelf section, a 4-foot-wide section of shelving. Given a shelf section, the next level is the specific shelf. The final level is the product's relative position on the shelf. This position is not a 2D coordinate in distance units, but a relative position based on how many products are on the same shelf. To build the barcode map, every barcode on the shelf system of one aisle in a local supermarket was scanned, and each product's aisle, aisle side, shelf section, shelf, and position were recorded along with its barcode. A total of 1,655 individual barcodes were scanned and recorded. Of these, 297 (roughly 18%) had their product names recorded as well. The topological map of the store environment is a graph connecting points of interest such as the store entrance, cashier lanes, and aisle entrances. The two maps (topological and barcode) are connected through the aisle information available in each. No modification or extra instrumentation of the environment was made.
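A barcode-map entry can be pictured as one record per barcode under the five-level hierarchy just described. The following Python sketch is illustrative only; the field names and sample entries are assumptions, not ShopTalk's actual data structures:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ShelfLocation:
    aisle: int     # which aisle the product is in
    side: str      # "left" or "right" when facing into the aisle
    section: int   # 4-foot shelf section, counted from the aisle entrance
    shelf: int     # shelf number, counted from the top shelf down
    position: int  # k-th product on that shelf, not a metric coordinate

# The barcode map pairs each shelf barcode with its location; product
# names were recorded for only a subset of entries (None otherwise).
barcode_map: dict[str, tuple[ShelfLocation, str | None]] = {
    "0123456789012": (ShelfLocation(9, "left", 3, 2, 5), "example product"),
    "0123456789029": (ShelfLocation(9, "left", 3, 2, 6), None),
}
```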
Three hypotheses were tested in a single-participant pilot study. First, a blind shopper with independent O&M skills can successfully navigate the supermarket using only verbal directions. Second, verbal instructions based on runtime barcode scans are sufficient for target product localization. Third, as the shopper repeatedly performs the shopping task, the total traveled distance approaches an asymptote. To test these hypotheses, an aisle in a local supermarket was scanned as described in the previous section, and seven product sets were generated from the data (one possible generation procedure is sketched below). A product set is a set of three randomly chosen products in the aisle. Each product set had one item randomly chosen from the aisle's front, middle, and back. Three product sets contained items only from the aisle's left side, three contained items only from the right side, and one contained two items from the left side and one from the right. To make the shopping task realistic, each product set contained one product from the top shelf, one from the bottom shelf, and one from a middle shelf.
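One plausible way to encode these product-set constraints is sketched below. The pre-computed "zone" and "band" annotations and the field names are assumptions for illustration, not the generator actually used in the study:

```python
import random

def pick(products, side, zone, band):
    """Pick one product matching an aisle side, a depth zone
    ('front'/'middle'/'back' third of the aisle), and a shelf band
    ('top'/'middle'/'bottom')."""
    candidates = [p for p in products
                  if p["side"] == side and p["zone"] == zone and p["band"] == band]
    return random.choice(candidates)

def product_set(products, sides):
    """Build one set of three products: one each from the front, middle,
    and back of the aisle, covering top, middle, and bottom shelves."""
    zones = ["front", "middle", "back"]
    bands = random.sample(["top", "middle", "bottom"], 3)  # random pairing
    return [pick(products, side, zone, band)
            for side, zone, band in zip(sides, zones, bands)]

# e.g. a left-side-only set:
# left_set = product_set(aisle_products, sides=["left", "left", "left"])
```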
Table 1. Completed runs per product set.

| Product Set | Product Location | Completed Runs |
|---|---|---|
| 0 | Left Side | 2 |
| 1 | Left Side | 3 |
| 2 | Left Side | 2 |
| 3 | Right Side | 1 |
| 4 | Right Side | 2 |
| 5 | Right Side | 3 |
| 6 | Both Sides | 3 |
The participant was an independent blind guide dog handler (light perception only) in his mid-twenties. In a 10-minute training session before the first run, the basic concepts underlying ShopTalk were explained to him to his satisfaction. A run consisted of the participant starting at the entrance of the store, traveling to the target aisle, locating the three products in the current product set, and, after retrieving the last product in the set, traveling to a designated cashier. Sixteen runs, with at least one run for each product set, were completed in five one-hour sessions in the supermarket (see Table 1).
All three of our hypotheses appear to be reasonable for this participant. First, the participant was able to navigate to the target aisle and each target space using ShopTalk's verbal route directions. Second, using only ShopTalk's search instructions based on the barcode map and runtime barcode scans made by the participant, he was able to find all products for all 16 runs. These were both accomplished using only the wearable ShopTalk system.
Figures 2 and 3 both show the downward trend in distance. Figure 2 also shows the downward trend in time. The first run took the longest, 843 seconds, and covered the largest distance, 376 feet. After the second run, all times were under 460 seconds and all distances under 325 feet. The two exceptions in terms of distance were runs 7 and 13. In both of these runs, the participant initially entered an incorrect aisle. After scanning a product in the incorrect aisle, the participant was told he was in the wrong aisle and given route directions to the correct aisle. Although the distance increased dramatically in these runs, the time did not. The suspected reason is that by this point the user had gained enough confidence and spatial knowledge to walk and search faster than during the initial two runs.
Product set 5 involved the longest walking distance of all the product sets. When the same route was walked by a sighted person, the distance was 298 feet. The shortest run for product set 5 was 313 feet, about 5% longer. So once the user is familiar with the environment, it appears possible to achieve walking distances that are slightly longer than, but comparable to, those of a sighted person.
Although the user was twice able to find a product on the first scan, on average it took 4.2 barcode scans to find the target product. Figure 4 shows an example of the search the user performed for a product.
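The kind of search the system supports can be illustrated by comparing a runtime scan against the target's barcode-map entry. The instruction wording and the helper below are illustrative assumptions rather than ShopTalk's actual output; the location record from the earlier sketch is redefined here so the example stands alone:

```python
from collections import namedtuple

# Five-level location record: aisle, side, shelf section, shelf
# (counted from the top down), relative position on the shelf.
ShelfLocation = namedtuple("ShelfLocation", "aisle side section shelf position")

def search_instruction(scanned: ShelfLocation, target: ShelfLocation) -> str:
    """Compare a runtime scan with the target's barcode-map entry and
    return a verbal instruction for the next search step."""
    if scanned.aisle != target.aisle:
        return f"Wrong aisle. Go to aisle {target.aisle}."
    if scanned.side != target.side:
        return f"The product is on the {target.side} side of the aisle."
    if scanned.section != target.section:
        n = target.section - scanned.section
        way = "toward the back" if n > 0 else "toward the front"
        return f"Move {abs(n)} shelf section(s) {way} of the aisle."
    if scanned.shelf != target.shelf:
        n = target.shelf - scanned.shelf
        return f"Move {'down' if n > 0 else 'up'} {abs(n)} shelf(s)."
    n = target.position - scanned.position
    if n == 0:
        return "This is the target product."
    return f"The product is {abs(n)} item(s) to the {'right' if n > 0 else 'left'}."

print(search_instruction(ShelfLocation(9, "left", 3, 2, 5),
                         ShelfLocation(9, "left", 3, 2, 8)))
```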
Future work includes increasing the number of aisles in the map and executing runs with a larger number of participants in order to test error recovery and collect statistically meaningful data. A dynamic route planner is being added so that users are guided to products in the most efficient order, as sketched below. A product verification module will also be considered.
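As a rough illustration of what such a dynamic route planner might do, the sketch below orders products with a greedy nearest-neighbor heuristic over a crude (aisle, shelf section) distance model. This is a speculative sketch of future work under assumed names and costs, not an implemented ShopTalk feature:

```python
def aisle_distance(a, b):
    """Crude walking-cost proxy between two (aisle, section) positions:
    changing aisles dominates; section offset within an aisle is minor."""
    return abs(a[0] - b[0]) * 100 + abs(a[1] - b[1])

def order_products(start, positions):
    """Greedy nearest-neighbor ordering of (aisle, section) positions,
    starting from the shopper's current position."""
    remaining = list(positions)
    route, here = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: aisle_distance(here, p))
        remaining.remove(nxt)
        route.append(nxt)
        here = nxt
    return route

# Visit the aisle-2 product first, then both aisle-9 products in order.
print(order_products((0, 0), [(9, 12), (2, 3), (9, 2)]))
```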
This pilot study shows that verbal route directions and search instructions based on barcode scans may be sufficient for independent supermarket shopping for the blind. No store instrumentation is necessary when the structures inherent in the store are used.
The study was funded by two Community University Research Initiative (CURI) grants from the State of Utah (2004-05 and 2005-06) and NSF Grant IIS-0346880. The authors would like to thank Mr. Sachin Pavithran, a visually impaired training and development specialist at the USU Center for Persons with Disabilities, for his feedback on the shopping experiments. Mr. Lee Badger, the owner of Lee's MarketPlace, a supermarket in Logan, UT, is gratefully acknowledged for permitting the use of his store for the blind supermarket shopping experiments.