Summary

Asia-Pacific Network Operations and Management Symposium

2020

Session Number: TS9

Number: TS9-1

Smart Self-Checkout Carts Based on Deep Learning for Shopping Activity Recognition

Hong-Chuan Chi, Muhammad Atif Sarwar, Yousef-Awwad Daraghmi, Kuan-Wen Liu, Tsì-Uí İk, Yih-Lang Li

pp.185-190

Publication Date: 2020/9/22

Online ISSN: 2188-5079

DOI: 10.34385/proc.62.TS9-1


Summary:
Fast and reliable communication plays a major role in the success of smart shopping applications. In a "Just Walk Out" shopping scenario, a video camera installed on the cart monitors shopping activities and transmits images to the cloud for processing so that items in the cart can be tracked and checked out. This paper proposes a prototype of a smart shopping cart based on image-based action recognition. First, deep learning networks, namely Faster R-CNN, YOLOv2, and YOLOv2-Tiny, are used to analyze the content of each video frame. Frames are classified into three classes: No Hand, Empty Hand, and Holding Items. The classification accuracy ranges from 90.3% to 93.0% across Faster R-CNN, YOLOv2, and YOLOv2-Tiny, and the processing speeds of the three networks reach up to 5 fps, 39 fps, and 50 fps, respectively. Second, based on the sequence of frame classes, the timeline is divided into No Hand, Empty Hand, and Holding Items intervals. The accuracy of action recognition is 96%, and the average time error is 0.119 s. Finally, events are categorized into four cases: No Change, Placing, Removing, and Swapping. Even when the correctness of item recognition is included, the accuracy of shopping event detection is 97.9%, which exceeds the minimal requirement for deploying such a system in a smart shopping environment. A demo of the system and a link to download the dataset used in the paper are available at the Smart Shopping Cart Prototype page: https://hackmd.io/abEiC83rQoqxz7zpL4Kh2w.
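The pipeline described above (per-frame classification, interval segmentation, event categorization) can be illustrated with a minimal sketch. This is not the paper's implementation: the label names, the `fps` parameter, and the transition table mapping the hand's state when entering and leaving the cart region to the four event cases are illustrative assumptions.

```python
from itertools import groupby

def to_intervals(labels, fps=30):
    """Collapse a per-frame label sequence into (label, start_s, end_s)
    intervals, mirroring the timeline-segmentation step.
    `labels` and `fps` are assumed names, not from the paper."""
    intervals, t = [], 0
    for label, run in groupby(labels):  # groups consecutive equal labels
        n = len(list(run))
        intervals.append((label, t / fps, (t + n) / fps))
        t += n
    return intervals

def classify_event(enter_state, leave_state):
    """Map the hand state on entering/leaving the cart to one of the four
    event cases. The transition table is an illustrative assumption."""
    table = {
        ("empty", "empty"): "No Change",
        ("holding", "empty"): "Placing",    # item left in the cart
        ("empty", "holding"): "Removing",   # item taken out
        ("holding", "holding"): "Swapping", # item exchanged
    }
    return table[(enter_state, leave_state)]
```

For example, a frame sequence `["no_hand", "no_hand", "holding", "holding", "empty"]` at 1 fps yields three intervals, and a hand that enters holding an item and leaves empty would be classified as a Placing event.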