Author of the post here - happy to answer any questions.
replies(4):
I have been trying to figure something out for a while, but maybe haven't found the right paper for it to click just yet: how would you mix this with video feedback on a real robot? Do you forward-predict the position and then have some way of telling whether your simulated image and reality overlap? (Rough sketch of the loop I imagine below.)
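To make that concrete, here is roughly the loop I have in mind: a minimal random-shooting sketch, where `predict_frames`, `goal_cost`, and all the parameter values are hypothetical placeholders on my part, not anything from the post:

    import numpy as np

    # Minimal random-shooting visual-MPC sketch. The names below are
    # hypothetical placeholders:
    #   predict_frames(frame, actions) -> predicted future frames for one action sequence
    #   goal_cost(frame, goal_frame)   -> scalar mismatch (pixel- or feature-space distance)

    def plan_next_action(camera_frame, goal_frame, predict_frames, goal_cost,
                         horizon=5, n_samples=64, action_dim=7, action_scale=0.05):
        """Sample candidate action sequences, roll each through a learned
        forward model, score the last predicted frame against a goal image,
        and return the first action of the cheapest sequence."""
        best_cost, best_action = np.inf, None
        for _ in range(n_samples):
            actions = np.random.uniform(-action_scale, action_scale,
                                        size=(horizon, action_dim))
            predicted = predict_frames(camera_frame, actions)  # model rollout
            cost = goal_cost(predicted[-1], goal_frame)        # did we reach the goal?
            if cost < best_cost:
                best_cost, best_action = cost, actions[0]
        return best_action

    # Closed loop: execute best_action on the robot, grab a fresh camera
    # frame, and replan; the replanning against real frames is the part
    # I'd call "video feedback".

Is that roughly the right mental model, or does the feedback enter somewhere else entirely?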
I've tried grounding models like CogVLM and YOLO, but the bounding box is often only barely good enough to turn and face an object, not to actually reach out and pick it up.
There are grasping datasets, but I think you still have to train a new model for each object+gripper pair, so I'm not clear on where the MPC part comes in.
so I guess I'm just asking for any hints/papers that might make it easier for a beginner to grasp.
thanks :-)