Hacker News

It's cool to hear from someone with experience!

Do you know if anyone has tried building an arm that uses spatial positioning techniques from augmented reality, like structured light or pose tracking[1], to understand the position of the arm in space without resorting to "dead reckoning"?

It seems like that kind of approach would relax the required mechanical tolerances and reduce the programming complexity, since you would know both a) where the arm is supposed to be, and b) where it actually is.

[1] https://en.wikipedia.org/wiki/Pose_tracking#Outside-in_track...



This is actually pretty common. But getting enough resolution to improve on what the joint encoders already give you isn't easy.

The more usual application of multi-camera setups and the like is path planning and scene understanding, not low-level control.
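A back-of-envelope calculation gives a sense of why beating the encoders is hard. All the numbers below are illustrative assumptions (a 1 m arm, a 17-bit joint encoder, a 1920-pixel camera with a 60° field of view at 2 m), not the specs of any real arm or tracker:

```python
import math

# Illustrative comparison: tip resolution from one joint encoder step
# vs. the footprint of one pixel of an outside-in tracking camera.

ARM_LENGTH_M = 1.0          # assumed arm reach
ENCODER_BITS = 17           # assumed absolute joint encoder resolution

CAMERA_DISTANCE_M = 2.0     # assumed camera standoff from the arm
CAMERA_FOV_RAD = math.radians(60)
CAMERA_PIXELS = 1920        # assumed horizontal sensor resolution

# Encoder: smallest resolvable joint angle, projected out to the arm tip.
encoder_step_rad = 2 * math.pi / 2**ENCODER_BITS
encoder_tip_res_m = encoder_step_rad * ARM_LENGTH_M

# Camera: angle subtended by one pixel, projected to the tracked point.
pixel_angle_rad = CAMERA_FOV_RAD / CAMERA_PIXELS
camera_tip_res_m = pixel_angle_rad * CAMERA_DISTANCE_M

print(f"encoder tip resolution: {encoder_tip_res_m * 1e6:.0f} um")
print(f"camera pixel footprint: {camera_tip_res_m * 1e3:.2f} mm")
```

Under these assumptions the encoder resolves tip motion on the order of tens of microns, while one camera pixel covers about a millimeter, roughly 20x coarser even before accounting for calibration error, latency, and occlusion. Sub-pixel marker detection narrows the gap but doesn't close it.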



