Alex Stone

Reputation: 47348

iPhone Robotics Visual-SLAM algorithm implementation

I have a tracked robot with an iPhone for brains. It has two cameras: the iPhone's built-in camera and a standalone camera. The phone has GPS, a gyroscope, an accelerometer, and a magnetometer, with sensor fusion that separates user-induced acceleration from gravity, so the phone can detect its own attitude in space. I would like to teach the robot to at least avoid walls it has bumped into before.

Can anyone suggest a starter project for Visual Simultaneous Localization and Mapping (Visual SLAM) for such a robot? I would be very grateful for an Objective-C implementation; I see some projects written in C/C++ at OpenSLAM.org, but those are not my strongest programming languages.

I do not have access to laser rangefinders, so any articles, keywords or scholarly papers on Visual SLAM would also help.

Thank you for your input!

Upvotes: 3

Views: 1644

Answers (1)

Michele mpp Marostica

Reputation: 2472

I can suggest you take a look at FAB-MAP; it has an implementation in OpenCV.

I'm doing my MS thesis on visual-SLAM, particularly on the loop-closure problem.

FAB-MAP is in C++, but if you follow the examples you will find it very easy to use.
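To give a concrete feel for it, below is a minimal loop-closure sketch against the FAB-MAP bindings that ship in OpenCV 2.4's contrib module (cv::of2). The training file name and its keys, the SURF parameters, the detector-model probabilities (0.39/0.05), and the 0.98 match threshold are illustrative assumptions, not values prescribed by FAB-MAP; you would generate the vocabulary, Chow-Liu tree, and training descriptors in an offline training run first.

    // Minimal FAB-MAP loop-closure sketch using OpenCV 2.4 contrib (cv::of2).
    // "fabmap_train.yml" and its keys are illustrative; produce the vocabulary,
    // Chow-Liu tree, and training descriptors in an offline training run first.
    #include <cstdio>
    #include <vector>
    #include <opencv2/opencv.hpp>
    #include <opencv2/nonfree/nonfree.hpp>        // initModule_nonfree()
    #include <opencv2/nonfree/features2d.hpp>     // SURF lives in nonfree in 2.4
    #include <opencv2/contrib/openfabmap.hpp>

    int main() {
        cv::initModule_nonfree();

        // Load the offline training artifacts (file and key names are assumptions).
        cv::FileStorage fs("fabmap_train.yml", cv::FileStorage::READ);
        cv::Mat vocabulary, clTree, trainDescriptors;
        fs["Vocabulary"] >> vocabulary;
        fs["ChowLiuTree"] >> clTree;
        fs["TrainingBOWDescriptors"] >> trainDescriptors;
        fs.release();

        // Bag-of-words extractor: detect SURF keypoints, quantize against the vocabulary.
        cv::Ptr<cv::FeatureDetector> detector = new cv::SurfFeatureDetector(400);
        cv::Ptr<cv::DescriptorExtractor> extractor = new cv::SurfDescriptorExtractor();
        cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");
        cv::BOWImgDescriptorExtractor bow(extractor, matcher);
        bow.setVocabulary(vocabulary);

        // FAB-MAP 2.0; the detector-model probabilities here are illustrative.
        cv::of2::FabMap2 fabmap(clTree, 0.39, 0.05,
                                cv::of2::FabMap::SAMPLED | cv::of2::FabMap::CHOW_LIU);
        fabmap.addTraining(trainDescriptors);

        cv::VideoCapture cap(0);                   // stand-in for the robot's camera feed
        cv::Mat frame, gray, bowDescriptor;
        std::vector<cv::KeyPoint> keypoints;

        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, CV_BGR2GRAY);
            detector->detect(gray, keypoints);
            bow.compute(gray, keypoints, bowDescriptor);
            if (bowDescriptor.empty()) continue;   // no features found in this frame

            // Compare against every place seen so far; addQuery=true then appends
            // this frame as a new place.
            std::vector<cv::of2::IMatch> matches;
            fabmap.compare(bowDescriptor, matches, true);

            for (size_t i = 0; i < matches.size(); ++i) {
                // imgIdx == -1 denotes the "new place" hypothesis, so skip it.
                if (matches[i].imgIdx >= 0 && matches[i].match > 0.98) {
                    std::printf("Loop closure: current frame matches place %d (p = %.2f)\n",
                                matches[i].imgIdx, matches[i].match);
                }
            }
        }
        return 0;
    }

Since the bindings are C++, on iOS you can wrap a call like this in an Objective-C++ file (.mm) and keep the rest of your project in Objective-C.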

Upvotes: 1
