Do you know if anyone has tried building an arm that uses spatial positioning techniques from augmented reality, like structured light or pose tracking[1], to understand the position of the arm in space without resorting to "dead reckoning"?
It seems like that kind of approach would relax the required mechanical tolerances and reduce the programming complexity, since you would know both a) where the arm is supposed to be, and b) where it actually is.
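To make the idea concrete, here is a minimal sketch of that closed-loop correction, assuming a hypothetical outside-in tracker that reports the end-effector's actual position with some measurement noise. The names (`tracker_measurement`, `GAIN`) and the numbers are illustrative, not from any real system: the controller repeatedly compares where the arm is supposed to be against where the tracker says it is, and commands a proportional correction, so encoder drift or mechanical slop never accumulates the way it does with dead reckoning.

```python
import numpy as np

rng = np.random.default_rng(0)

target = np.array([0.30, 0.10, 0.25])  # commanded end-effector position (m)
actual = np.array([0.00, 0.00, 0.00])  # true position, unknown to the controller

def tracker_measurement(true_pos, noise_std=0.002):
    """Simulated outside-in tracker: true pose plus ~2 mm of measurement noise."""
    return true_pos + rng.normal(0.0, noise_std, size=3)

GAIN = 0.5  # proportional gain on the observed pose error
for _ in range(50):
    observed = tracker_measurement(actual)
    error = target - observed       # where it should be vs. where it actually is
    actual = actual + GAIN * error  # command a corrective move

residual = np.linalg.norm(target - actual)
print(f"residual error: {residual * 1000:.1f} mm")
```

The steady-state error is bounded by the tracker's noise rather than by the arm's mechanical precision, which is the appeal: a sloppy, cheap arm plus a good external sensor behaves like a precise one.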
[1] https://en.wikipedia.org/wiki/Pose_tracking#Outside-in_track...