Over the past few years, great progress has been made in human pose estimation using 2D and 3D sensors such as RGB cameras, LiDAR, and radar, driven by applications such as autonomous driving and VR. However, these sensors have limitations, both technical and practical. First, the cost is high: ordinary households and small businesses often cannot afford LiDAR and radar sensors. Second, these sensors consume too much power for everyday household use.
As for RGB cameras, narrow fields of view and poor lighting conditions can severely degrade camera-based methods. Occlusion is another obstacle that prevents camera-based models from producing reasonable pose predictions. Indoor scenes are particularly difficult, as furniture often blocks people from view. Moreover, privacy concerns hinder the use of these technologies in non-public places; many people are reluctant to install cameras in their homes that record their actions. Yet in the medical field, for safety and health reasons, many elderly people must be monitored in real time with the help of cameras and other sensors.
Recently, three researchers from CMU proposed in the paper "DensePose From WiFi" that, in some cases, WiFi signals can substitute for RGB images in human body perception. Lighting and occlusion have little impact on WiFi-based solutions for indoor monitoring, WiFi signals help protect personal privacy, and the required equipment is affordable. The key takeaway is that many homes already have WiFi installed, so the technology could potentially be extended to monitor the health of older adults or detect suspicious behavior in the home.
Paper address: https://arxiv.org/pdf/2301.00250.pdf
The problem the researchers want to solve is shown in the first row of Figure 1 below. Given three WiFi transmitters and three corresponding receivers, can dense human pose correspondences be detected and reconstructed in a cluttered environment with multiple people (the fourth row of Figure 1)? Note that many WiFi routers (such as the TP-Link AC1750) have three antennas, so this method needs only two such routers. Each router costs about $30, meaning the entire setup is still far cheaper than LiDAR and radar systems.
To achieve the effect shown in the fourth row of Figure 1, the researchers drew inspiration from deep learning architectures in computer vision and proposed a neural network architecture for dense pose estimation that runs on WiFi, achieving dense pose estimation from WiFi signals alone in scenes with occlusion and multiple people.
The left image below shows image-based DensePose; the right shows WiFi-based DensePose.
Source: Twitter @AiBreakfast
It is also worth mentioning that the first two authors of the paper are both Chinese. Jiaqi Geng, the first author, received a master's degree in robotics from CMU in August last year, and Dong Huang, the second author, is a senior project scientist at CMU.
## Method Introduction
Generating UV coordinates of the human body surface from WiFi requires three components. First, amplitude and phase sanitization steps clean up the raw CSI (channel state information, the ratio between the transmitted and received signal waves). Then, a dual-branch encoder-decoder network converts the processed CSI samples into 2D feature maps. Finally, the 2D feature maps are fed into an architecture called DensePose-RCNN (which mainly maps 2D images to the 3D human model) to estimate the UV maps.
The raw CSI samples are noisy (see Figure 3(b)). Moreover, most WiFi-based solutions ignore the phase of the CSI signal and focus only on its amplitude (see Figure 3(a)). However, discarding phase information hurts model performance. This study therefore applies a sanitization step to obtain stable phase values and make better use of the CSI information.
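To make the sanitization idea concrete, here is a minimal sketch of one common CSI phase-cleaning step: unwrapping the measured phase across subcarriers and removing the linear trend (which absorbs timing and constant phase offsets). This is a generic illustration of the technique, not the paper's exact procedure; the subcarrier count and signal values are made up.

```python
import numpy as np

def sanitize_phase(raw_phase):
    """Remove linear phase distortion from raw CSI phase measurements.

    raw_phase: array of shape (n_subcarriers,), wrapped to (-pi, pi].
    Returns the unwrapped phase with the fitted linear trend removed,
    a standard CSI phase-calibration step.
    """
    unwrapped = np.unwrap(raw_phase)          # undo 2*pi wrap-around jumps
    idx = np.arange(len(unwrapped))
    # Fit and subtract a straight line: the slope absorbs timing offset,
    # the intercept absorbs a constant phase offset.
    slope, intercept = np.polyfit(idx, unwrapped, 1)
    return unwrapped - (slope * idx + intercept)

# Toy example: a smooth phase pattern corrupted by a linear offset, then wrapped.
idx = np.arange(30)
true_phase = 0.3 * np.sin(idx / 4.0)
noisy = np.angle(np.exp(1j * (true_phase + 0.2 * idx + 1.0)))
cleaned = sanitize_phase(noisy)
print(cleaned.shape)  # (30,)
```

After sanitization, the residual phase no longer drifts linearly across subcarriers, which is what makes it stable enough to feed into a learning pipeline.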
To estimate UV maps in the spatial domain from one-dimensional CSI signals, the network input must first be converted from the CSI domain to the spatial domain. This is done with a Modality Translation Network (shown in Figure 4). After these operations, a 3×720×1280 scene representation in the image domain, generated from the WiFi signal, is obtained.
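The shape of this conversion can be sketched as follows. This is a heavily scaled-down stand-in, not the paper's network: the paper targets a 3×720×1280 output, but here a hypothetical 3×24×32 output and a two-layer fully connected mapping are used so the toy weight matrices stay small. All dimensions and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scaled-down dimensions (the real target is 3x720x1280).
N_SUBCARRIERS, N_TX, N_RX = 30, 3, 3
OUT_C, OUT_H, OUT_W = 3, 24, 32

def modality_translation(amplitude, phase, w1, w2):
    """Map CSI measurements to an image-domain feature tensor.

    amplitude, phase: (n_subcarriers, n_tx, n_rx) CSI tensors.
    Two fully connected layers stand in for the encoder-decoder;
    the flat output is reshaped into a C x H x W representation.
    """
    x = np.concatenate([amplitude.ravel(), phase.ravel()])  # fuse both branches
    hidden = np.maximum(w1 @ x, 0.0)                        # ReLU
    out = w2 @ hidden
    return out.reshape(OUT_C, OUT_H, OUT_W)

in_dim = 2 * N_SUBCARRIERS * N_TX * N_RX
hid_dim = 64
w1 = rng.standard_normal((hid_dim, in_dim)) * 0.1
w2 = rng.standard_normal((OUT_C * OUT_H * OUT_W, hid_dim)) * 0.1

amp = rng.standard_normal((N_SUBCARRIERS, N_TX, N_RX))
ph = rng.standard_normal((N_SUBCARRIERS, N_TX, N_RX))
feat = modality_translation(amp, ph, w1, w2)
print(feat.shape)  # (3, 24, 32)
```

The key point the sketch captures is the domain change: one-dimensional CSI readings across antennas and subcarriers come in, and a spatial, image-shaped tensor comes out.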
With the 3×720×1280 image-domain scene representation in hand, the study uses WiFi-DensePose RCNN, a network architecture similar to DensePose-RCNN, to predict human-body UV maps. Specifically, in WiFi-DensePose RCNN (Figure 5), ResNet-FPN serves as the backbone and extracts spatial features from the 3×720×1280 feature map. The output is then fed to a region proposal network. To better exploit complementary information from different sources, WiFi-DensePose RCNN also contains two branches, a DensePose head and a keypoint head, whose outputs are merged and passed to a refinement unit.
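One simple way such a two-branch merge can work is channel-wise concatenation followed by a 1×1 convolution that mixes the channels. The sketch below shows that pattern in numpy; the feature-map sizes and the choice of concatenation-plus-1×1-conv are illustrative assumptions, not the paper's exact refinement unit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-region feature maps from the two heads
# (channels x height x width); all sizes are illustrative.
densepose_feat = rng.standard_normal((64, 14, 14))
keypoint_feat = rng.standard_normal((64, 14, 14))

def fuse_branches(a, b, w):
    """Concatenate two branch outputs along the channel axis, then mix
    channels with a 1x1 convolution (an einsum over the channel dim)."""
    stacked = np.concatenate([a, b], axis=0)     # (128, 14, 14)
    return np.einsum('oc,chw->ohw', w, stacked)  # (64, 14, 14)

w = rng.standard_normal((64, 128)) * 0.1
fused = fuse_branches(densepose_feat, keypoint_feat, w)
print(fused.shape)  # (64, 14, 14)
```

Because a 1×1 convolution acts independently at each spatial position, the merge combines what the two heads learned per location without altering the spatial layout.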
However, training the Modality Translation Network and WiFi-DensePose RCNN from random initialization takes a long time (about 80 hours). To improve training efficiency, the study transfers an image-based DensePose network to the WiFi-based network (see Figure 6 for details).
Directly initializing the WiFi-based network with image-based network weights did not work, so the study first trains an image-based DensePose-RCNN model to serve as the teacher network; the student network consists of the Modality Translation Network and WiFi-DensePose RCNN. The goal is to minimize the difference between the multi-layer feature maps produced by the student and teacher models.
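The kind of objective this describes can be sketched as a sum of mean-squared differences between matching intermediate feature maps of the student and the frozen teacher. This is a generic feature-distillation loss under assumed shapes, not the paper's exact formulation.

```python
import numpy as np

def transfer_loss(student_feats, teacher_feats):
    """Sum of mean-squared differences between paired feature maps,
    pulling a student network's intermediate features toward a
    frozen teacher's at each chosen layer."""
    return sum(np.mean((s - t) ** 2)
               for s, t in zip(student_feats, teacher_feats))

rng = np.random.default_rng(2)
# Three layers of toy 8x8 feature maps; student starts near the teacher.
teacher = [rng.standard_normal((8, 8)) for _ in range(3)]
student = [t + 0.01 * rng.standard_normal((8, 8)) for t in teacher]

print(transfer_loss(student, student))  # 0.0 when features match exactly
print(transfer_loss(student, teacher) > 0.0)  # True
```

Minimizing this loss gives the WiFi-based student a strong initialization borrowed from the image-based teacher, which is what cuts down the 80-hour from-scratch training time.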
The results in Table 1 show that the WiFi-based method achieves a high AP@50 of 87.2, indicating that the model can effectively detect the approximate locations of human bounding boxes. AP@75 is relatively low at 35.6, indicating that fine details of the human body are not estimated perfectly.
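The gap between AP@50 and AP@75 is easy to see from how these metrics work: a predicted box counts as correct only if its intersection-over-union (IoU) with the ground truth exceeds the threshold (0.5 or 0.75). The sketch below uses made-up box coordinates to show a prediction that is roughly right but imprecise, so it passes the 0.5 threshold and fails the 0.75 one.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

gt = (0, 0, 100, 200)       # ground-truth person box
pred = (20, 30, 100, 200)   # roughly localized, but edges are off
score = iou(gt, pred)
print(0.5 < score < 0.75)   # True: counts for AP@50 but not AP@75
```

A model that finds people but draws loose boxes therefore scores well at AP@50 and poorly at AP@75, matching the 87.2 vs. 35.6 pattern above.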
The results in Table 2 show that dpAP·GPS@50 and dpAP·GPSm@50 are high, while dpAP·GPS@75 and dpAP·GPSm@75 are low. This indicates the model performs well at estimating the pose of the human torso but still struggles with details such as limbs.
The quantitative results in Tables 3 and 4 show that the image-based method yields much higher AP than the WiFi-based method. The gap between the AP-m and AP-l values of the WiFi-based model is relatively small. The study suggests this is because people farther from the camera occupy less space in the image, which leaves less information about them; the WiFi signal, by contrast, carries information about the entire scene regardless of a subject's location.