It's quite simple, really. You wouldn't feel what you are touching; all you would feel is resistance from the suit. If programmed correctly, a ball hitting the back of your palm in-game would trigger the corresponding force feedback motors: shoulder, elbow, and wrist.
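To make the idea concrete, here's a minimal sketch of that mapping. The body regions, joint names, and the idea of a lookup table are all assumptions on my part, not a real suit API:

```python
# Hypothetical mapping from a collision region reported by the game
# engine to the force-feedback motors that should engage. The regions
# and joint chains are made-up examples.
JOINT_CHAIN = {
    "palm_back": ["wrist", "elbow", "shoulder"],
    "knee": ["knee", "hip"],
}

def motors_to_engage(collision_region):
    """Return the suit motors that should resist movement when the
    game's collision system reports a hit on the given body region."""
    return JOINT_CHAIN.get(collision_region, [])

print(motors_to_engage("palm_back"))  # ['wrist', 'elbow', 'shoulder']
```

So the game engine only has to say *where* the hit happened; the suit side decides *which* motors push back.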
If you held a ball in your hands, you would get resistance as you tried to move your fingers through the in-game ball's surface. Pneumatics are fast and strong enough to simulate that sort of resistance. So in essence, you would feel force and basic form from the pressure the suit exerts on your body whenever you try to move beyond where the suit lets you.
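One common way to model that kind of resistance is a penalty force: push back proportionally to how far you've penetrated the virtual surface. The stiffness number below is invented, and I've reduced it to one dimension just to show the shape of the idea:

```python
# Penalty-force sketch: resistance grows with penetration depth into
# the virtual ball's surface. Stiffness is a hypothetical value.
STIFFNESS = 800.0  # N/m, assumed pneumatic stiffness

def resistance_force(finger_pos, ball_center, ball_radius):
    """Return the 1-D force pushing the fingertip back out of the
    virtual ball; zero if the finger isn't inside the surface."""
    dx = finger_pos - ball_center
    penetration = ball_radius - abs(dx)
    if penetration <= 0:
        return 0.0                        # not touching the ball
    direction = 1.0 if dx >= 0 else -1.0  # push outward
    return direction * STIFFNESS * penetration

print(resistance_force(0.04, 0.0, 0.05))  # pressing 1 cm in -> ~8 N back
```

The deeper you press, the harder the pneumatics resist, which is roughly what "feeling" a surface through force alone amounts to.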
It's a similar technology to the prediction algorithms in robotic prostheses. It's not perfect, but for the most basic use it's enough. It's just used in a different way, with the movement algorithms driven by an in-game collision detection system rather than by prediction algorithms.
The feeling would be like a glove frozen in place. Imagine the glove freezes completely around a ball and the ball is then removed. You cannot move your fingers past where the glove lets you. You are not holding anything, but from the pressure of the frozen glove you can feel that the form you are "holding" is a round object. Now transfer that example to a semi-robotic exosuit that doesn't let you move past the collision detection data coming from inside the VR.
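In code, that "frozen glove" is just a clamp on each finger joint: you can flex freely up to the angle the joint would have around the virtual ball, and no further. The joint names and angles here are made up for illustration:

```python
# "Frozen glove" sketch: each finger joint is stopped at the angle it
# would have around the virtual ball. Pose values are hypothetical.
GRASP_POSE = {"mcp": 40.0, "pip": 55.0, "dip": 30.0}  # degrees, assumed

def clamp_joint(joint, requested_angle):
    """Let the joint flex freely up to the grasp pose, then hard-stop
    it there, so the hand 'feels' the round form it isn't holding."""
    limit = GRASP_POSE[joint]
    return min(requested_angle, limit)

print(clamp_joint("pip", 70.0))  # stopped at 55.0
print(clamp_joint("pip", 20.0))  # moves freely: 20.0
```

Below the limit the suit is passive; at the limit it simply refuses, which is where the feeling of form comes from.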
As said, it's not cheap, but it already exists in other areas. Some parts date back to the space race, really.
So looking at the leg: imagine your pants suddenly stopping from the hip down. That's the feeling you would get. You just couldn't move in that direction any more, and through the headset you would see that you just slammed your knee into a wall. It wouldn't hurt, but your leg would just stop due to the force of the exosuit. Your virtual display would give your brain the information for why that happened, so the feeling of confusion would be nearly nonexistent. You'd just go, "Oh, I hit a wall. Got to go another way," like you do with a controller today.
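The knee-against-a-wall case is the same clamp idea applied to position instead of joint angle: the suit refuses any step that would carry the leg past the wall's collision plane. The wall position and the 1-D simplification are assumptions:

```python
# Hard-stop sketch for the knee-hits-a-wall case. The wall position
# comes from the game's collision detection; the value is made up.
WALL_X = 1.0  # metres, position of the virtual wall (assumed)

def next_leg_position(current_x, step):
    """Advance the leg, but let the exosuit stop it at the wall plane
    reported by the collision detection system."""
    proposed = current_x + step
    if proposed > WALL_X:
        return WALL_X, "hit_wall"  # headset shows the knee on the wall
    return proposed, "free"

print(next_leg_position(0.9, 0.3))  # (1.0, 'hit_wall')
```

The headset rendering the wall at exactly the stop point is what keeps the brain from being confused by the sudden resistance.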
Of course, the issue remains: it's a gimmick. While you would probably be able to interact with the world, you would most likely not have any control over things like bringing up and navigating menus, since there is no real input beyond movement. I guess you could tie it to voice commands, but those are iffy at best. The best way would be to design menus around the idea of a VR version of augmented reality. Like a watch that you press to bring up a holographic menu, then use your finger to press the buttons on the hologram. Or, like a lot of games are doing these days, a VR smartphone. Of course, those systems are really clunky, and you couldn't use them on the fly.
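The watch-menu idea boils down to a hit test: match the tracked fingertip against the holographic button regions. The button layout and coordinates below are entirely made up, just to show how little logic that part actually needs:

```python
# Rough hit-test sketch for the holographic watch menu. Button labels
# and coordinates (in the hologram's plane) are hypothetical.
BUTTONS = [
    # (label, x_min, x_max, y_min, y_max)
    ("inventory", 0.0, 0.1, 0.00, 0.05),
    ("map",       0.0, 0.1, 0.06, 0.11),
]

def pressed_button(finger_x, finger_y):
    """Return which holographic button (if any) the fingertip is on."""
    for label, x0, x1, y0, y1 in BUTTONS:
        if x0 <= finger_x <= x1 and y0 <= finger_y <= y1:
            return label
    return None

print(pressed_button(0.05, 0.08))  # 'map'
```

The clunky part isn't the hit test; it's that you have to stop, raise your arm, and look at the menu every time.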