Stura specializes in computer vision technologies. Our target applications are people counting, footfall analysis, queue monitoring, and access control. This section provides some background information about these applications.
Monitoring people in video streams involves three steps: detection, tracking, and re-identification.
Detection is the process of analyzing an image to determine whether it contains people. The typical output of a detection algorithm is a set of bounding boxes denoting the locations of the people in the image.
Nowadays the most effective person detectors use convolutional neural networks (CNNs). These models are trained on large datasets (e.g. COCO) and have proved effective in many situations.
Despite the power of CNNs, the detection process is never 100% accurate. Depending on the quality of the video and the complexity of the scene, some problems can occur:
We also note that detecting people via CNNs is a resource-intensive process. Even when using GPU cards, the detection stage limits the throughput of the system. At Stura we specialize in custom-trained, fine-tuned CNN models, optimized for both accuracy and detection speed.
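To make the detection stage concrete, here is a minimal sketch of the post-processing step that turns raw detector output into person boxes. The data layout, the "person" label, and the confidence threshold are illustrative assumptions, not the API of any specific library or of Stura's models.

```python
# Sketch: keep only confident person detections from raw model output.
# The raw_detections structure (box, label, score) is a common but
# hypothetical convention.

def filter_person_detections(raw_detections, min_score=0.5):
    """Return detections labeled "person" with score >= min_score.

    raw_detections: list of dicts with keys "box" (x1, y1, x2, y2),
    "label" (class name) and "score" (confidence in [0, 1]).
    """
    return [
        d for d in raw_detections
        if d["label"] == "person" and d["score"] >= min_score
    ]

# Example frame: two person candidates (one low-confidence) and a dog.
raw = [
    {"box": (10, 20, 60, 180), "label": "person", "score": 0.92},
    {"box": (200, 30, 260, 190), "label": "person", "score": 0.31},
    {"box": (300, 40, 340, 90), "label": "dog", "score": 0.88},
]
people = filter_person_detections(raw)
```

The confidence threshold is the usual knob for trading false positives against missed detections; in practice it is tuned per camera and per scene.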
Some deep learning models are also capable of performing pose estimation, which detects not only the people but also the locations of their body parts. The advantages of using pose estimation are threefold:
Tracking is the process of assigning a unique ID to each person visible in the video. The ID stays with a person for as long as they move in front of the camera. Accurate tracking is necessary for footfall analysis: for example, it provides the data to calculate the dwell time of shoppers in specific areas.
Tracking depends on the output of the detection process, which, as noted above, is not 100% accurate. Even with good detections, tracking people is a complex task, and it becomes harder in crowded scenes where people occlude each other. Typical problems with tracking are:
At Stura we have extensive experience in designing tracking algorithms. We will adopt the solution that provides the best performance for your videos.
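The core idea behind tracking can be sketched with a toy greedy matcher: each new detection is assigned to the existing track whose last box overlaps it most (by intersection-over-union), and unmatched detections start new tracks. This is a deliberately simplified illustration; production trackers also use motion prediction and appearance cues, and this is not Stura's actual algorithm.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class GreedyIouTracker:
    """Toy tracker: persistent IDs via greedy IoU matching."""

    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}   # track ID -> last known box
        self.next_id = 0

    def update(self, boxes):
        """Return the track ID assigned to each box in this frame."""
        ids = []
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in self.tracks.items():
                if tid in ids:
                    continue  # at most one detection per track per frame
                overlap = iou(box, prev)
                if overlap > best_iou:
                    best_id, best_iou = tid, overlap
            if best_id is None:        # no match: a new person enters
                best_id = self.next_id
                self.next_id += 1
            self.tracks[best_id] = box
            ids.append(best_id)
        return ids
```

Feeding consecutive frames into `update` yields stable IDs while a person moves smoothly; occlusions and missed detections are exactly where this naive scheme breaks, which is why real trackers are considerably more involved.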
Re-identification is the process of recognizing the same person moving across different cameras. For every new detection, the system must decide whether this is a new visitor or a person that has been seen before. Successful re-identification makes it possible to compute the total number of visitors in the venue and to reconstruct their path through the entire store (the shopper journey).
As of today, facial recognition is the most reliable re-identification approach. Unfortunately, it is only viable when cameras are placed at eye level, and it is not always an option for applications that must be GDPR compliant.
Luckily, there are approaches that enable re-identification without using facial images. These methods create anonymized descriptions (encoding vectors) of each person's appearance. The encoding vectors can later be used to search for a specific person, or to automatically re-identify the same person across cameras.
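The matching step can be illustrated with a short sketch: given an encoding vector for a newly detected person, compare it against the gallery of known visitors using cosine similarity, and treat the person as new if no match clears a threshold. How the encoding vectors themselves are produced is the hard (and still research-grade) part; the function names, the toy 2-D vectors, and the threshold value below are all illustrative assumptions.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def match_visitor(new_vec, gallery, threshold=0.8):
    """Return the ID of the best-matching known visitor, or None if the
    person appears to be new.

    gallery: dict mapping visitor ID -> stored encoding vector.
    """
    best_id, best_sim = None, threshold
    for vid, vec in gallery.items():
        sim = cosine_similarity(new_vec, vec)
        if sim > best_sim:
            best_id, best_sim = vid, sim
    return best_id
```

Since only abstract vectors are stored and compared, no facial images need to be retained, which is what makes this family of methods attractive for privacy-sensitive deployments.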
The technology for re-identification is still in the research stage. At Stura we can help you assess its accuracy and see how it performs in your venue.
The what, when, and where of computer vision applications.
The position and types of cameras have a big impact on the outcome of a computer vision project. New installations represent favorable cases since they offer flexibility in choosing:
It is also true that most venues are already equipped with CCTV security cameras. The resolution and placement of these cameras are often not optimal for computer vision, but it is still possible to extract valuable information from those streams. At Stura we specialize in software that is fine-tuned to work with CCTV cameras.
Some applications (e.g. surveillance) demand real-time processing; in other cases the video streams can be processed offline. For example, data in the retail market is often analyzed over long time intervals. When real-time data is not needed, video analytics can leverage offline processing, which improves both accuracy and efficiency (less computation is needed).
Video streams are information-rich media sources and analyzing them requires significant processing power.
In some applications, edge processing allows extracting data directly at the source: small embedded (IoT) devices can be paired with cameras to process the stream locally. Edge processing improves system robustness and reliability, and, since the videos never leave the device, it offers the highest level of data privacy.
Another option is to process the videos on on-site servers. This common approach is easy to integrate into existing IT infrastructures, and it offers a good level of data privacy since the videos never leave the company's network.
Finally, a third approach is to process the videos in the cloud. Cloud computing offers great benefits in terms of flexibility and scalability: new servers can be quickly added (or removed) to match the processing requirements.