“We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input.”
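To make that concrete, here's a rough sketch (in PyTorch) of the shapes involved: CSI amplitude and phase windows go in, per-pixel body-part labels and UV coordinates come out. The 3x3 antenna layout, the 150-channel window, and all the layer sizes are my guesses at a minimal version, not the paper's actual architecture:

```python
# Rough sketch of the WiFi-to-DensePose idea, NOT the authors' code.
# Assumed CSI layout: 3 tx x 3 rx antennas, 30 subcarriers, a window of
# 5 time samples -> amplitude and phase tensors of shape (B, 150, 3, 3).
import torch
import torch.nn as nn

class WifiDensePoseSketch(nn.Module):
    def __init__(self, num_parts=24, feat=128):
        super().__init__()
        # Encode amplitude and phase separately, then fuse.
        self.amp_enc = nn.Sequential(nn.Flatten(), nn.Linear(150 * 9, feat), nn.ReLU())
        self.pha_enc = nn.Sequential(nn.Flatten(), nn.Linear(150 * 9, feat), nn.ReLU())
        # Decode the fused feature into an image-like spatial grid.
        self.up = nn.Sequential(
            nn.Linear(2 * feat, 64 * 6 * 6), nn.ReLU(), nn.Unflatten(1, (64, 6, 6)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 12x12
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),  # 24x24
        )
        # Heads: per-pixel body-part logits (background + 24 parts)
        # and per-part (u, v) surface coordinates.
        self.part_head = nn.Conv2d(32, 1 + num_parts, 1)
        self.uv_head = nn.Conv2d(32, 2 * num_parts, 1)

    def forward(self, amplitude, phase):
        z = torch.cat([self.amp_enc(amplitude), self.pha_enc(phase)], dim=1)
        g = self.up(z)
        return self.part_head(g), self.uv_head(g)

amp = torch.randn(1, 150, 3, 3)   # fake CSI amplitude window
pha = torch.randn(1, 150, 3, 3)   # fake (sanitized) CSI phase window
parts, uv = WifiDensePoseSketch()(amp, pha)
print(parts.shape, uv.shape)      # (1, 25, 24, 24), (1, 48, 24, 24)
```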
Duh? I don’t think anyone in the relevant field thought this wasn’t possible. It just doesn’t have good use cases.
I’m an EE, and I have serious doubts about this actually working nearly as well as they make it sound. This sort of thing is hard even with purpose-built radar systems. I work on angle estimation in multipath environments, and that shit fucks your signals up. This might work if you’ve extremely precisely characterized the target room, the walls, and a ton of the stuff around them, and then change nothing but the motion of the people. But that’s not practical.
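To show what I mean, here's a toy numpy demo: a clean two-element phase-difference AoA estimate nails the target angle, and a single reflected path wrecks it. All the numbers (spacing, angles, reflection gain) are made up for the illustration:

```python
# Toy illustration of the multipath problem, not any real system:
# a naive phase-difference AoA estimate on a 2-element array gets
# badly biased as soon as one reflected path is added.
import numpy as np

wavelength = 0.06            # ~5 GHz WiFi, in meters
d = wavelength / 2           # half-wavelength element spacing
k = 2 * np.pi / wavelength

def steering(theta_deg):
    """Phase seen at elements 0 and 1 for a plane wave from theta."""
    theta = np.radians(theta_deg)
    return np.exp(1j * k * d * np.sin(theta) * np.arange(2))

def aoa_estimate(x):
    """Naive AoA from the phase difference between the two elements."""
    dphi = np.angle(x[1] * np.conj(x[0]))
    return np.degrees(np.arcsin(dphi / (k * d)))

direct = steering(20.0)                  # true target direction: 20 deg
print(aoa_estimate(direct))              # ~20.0, works fine

# Add one wall reflection: 60% amplitude, some extra delay phase.
multipath = direct + 0.6 * np.exp(1j * 1.3) * steering(-40.0)
print(aoa_estimate(multipath))           # ~0.3 deg, nowhere near 20
```

And that's with one reflector whose geometry never changes. A furnished room gives you dozens, and moving any of them re-breaks whatever calibration you did.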
It’s Popular Mechanics, of course it doesn’t work as well as they say it does. But the theory has been around a long time.
Full-body VR tracking without sensors?
Human presence sensors based on this are already on the consumer market; we just need to dial up the sensitivity.
There are already smart light bulbs you can buy off the shelf that use radio signals to detect when somebody is in the room, and then turn the lights on automatically, without a camera or infrared sensor in the area.
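The detection side of those products is genuinely simple compared to pose estimation. A minimal sketch of the usual trick (window length and threshold are made-up values; real products tune these per room): motion perturbs the channel, so threshold the rolling variance of the received signal strength.

```python
# Sketch of RF presence detection: a person moving makes the channel
# fluctuate, so flag any window where signal strength varies too much.
import numpy as np

def presence(rssi, window=50, threshold=1.5):
    """True wherever the rolling std-dev of RSSI exceeds the threshold."""
    out = np.zeros(len(rssi), dtype=bool)
    for i in range(window, len(rssi)):
        out[i] = np.std(rssi[i - window:i]) > threshold
    return out

rng = np.random.default_rng(0)
quiet = -60 + 0.3 * rng.standard_normal(200)    # empty room: stable channel
moving = -60 + 4.0 * rng.standard_normal(200)   # person walking: big fading swings
flags = presence(np.concatenate([quiet, moving]))
print(flags[:200].sum(), flags[200:].sum())     # fires only in the second half
```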