Monday, March 21, 2011

Software Demo

Hey, welcome to the first algorithms post from the guy writing the software that will drive RoboBuggy. For now, I'll give a rough introduction to the big ideas at the heart of the project. Even if you're less technically inclined, don't miss the flashy video at the end of the post.

The central problem is localization: the Buggy's ability to know its location on the course at any moment in time. Once the buggy knows where it is, steering becomes easy. Of course, even a human driver has difficulty determining her location exactly. We must settle for a "guess" of the location, called a belief, in the form of a probability distribution over possible positions for the Buggy. You can think of the belief as a heatmap: the brighter a location on the map, the more likely the buggy is to be there. This belief encodes both the buggy's most likely position (the distribution's mode) and its confidence in that position (the distribution's variance).
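For the code-inclined, here's a tiny sketch of what a belief can look like in software. This is just for illustration, not the actual RoboBuggy code; the grid size and numbers are made up.

```python
# A belief represented as a heatmap over a discretized course map (illustrative only).
import numpy as np

# belief[i, j] = probability that the buggy is in map cell (i, j)
belief = np.full((100, 100), 1.0 / (100 * 100))  # start maximally uncertain: uniform

# The "brightest" cell is the most likely position (the mode)...
mode = np.unravel_index(np.argmax(belief), belief.shape)

# ...and the spread of probability mass around the mean reflects confidence (variance).
ys, xs = np.mgrid[0:100, 0:100]
mean_y, mean_x = (belief * ys).sum(), (belief * xs).sum()
variance = (belief * ((ys - mean_y) ** 2 + (xs - mean_x) ** 2)).sum()
```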

To localize, we use a recursive filter. Here's the idea: at a given time t, the buggy considers the following:
* The buggy's belief at time t-1, where the buggy thought it was last time around
* The buggy's estimated velocity and steering angle, which offers a guess at how the buggy's position has changed
* The image captured by the buggy's camera, which we will use to refine our belief about location
The filter uses this information to create a belief about the Buggy's location at time t. This simple concept is extremely powerful: it allows the buggy to consider all of the information it has gathered over the entire run in an efficient manner. The alternative, trying to guess the location from a single frame alone, is extremely difficult!
Intuitively, if we know where we were, then there are only so many places we could possibly be now, which makes determining where we are now ever so much easier. The toy example below makes the recursion concrete.
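Here is a toy one-dimensional version of that recursion in Python. The real filter works in two dimensions with camera observations; the track discretization, the simple blur, and the example numbers below are all made up for illustration.

```python
# Toy 1-D recursive filter: each step uses only the previous belief, the
# estimated motion, and a likelihood derived from the latest observation.
import numpy as np

N = 50                                  # discretize the track into 50 cells
belief = np.full(N, 1.0 / N)            # belief at time t-1: totally uncertain

def filter_step(belief, motion_cells, observation_likelihood):
    # 1. Motion: shift the old belief by the estimated movement, with a little
    #    blur because the velocity and steering estimates are noisy.
    predicted = np.roll(belief, motion_cells)
    predicted = 0.8 * predicted + 0.1 * np.roll(predicted, 1) + 0.1 * np.roll(predicted, -1)
    # 2. Observation: reweight by how well each position explains what we saw.
    posterior = predicted * observation_likelihood
    return posterior / posterior.sum()  # renormalize into a probability distribution

# One step: we think we moved about 2 cells, and the camera suggests we're near cell 10.
likelihood = np.exp(-0.5 * ((np.arange(N) - 10) / 2.0) ** 2)
belief = filter_step(belief, 2, likelihood)
```

Note that the loop never needs the full history of sensor readings; everything the buggy has learned so far is summarized in the previous belief, which is what makes the recursion efficient.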

I'll give a rough outline of how filtering is accomplished. First, we take the old belief and shift it to respect the buggy's movement: if we were near position x at time t-1 and moved at a certain velocity and direction, we are somewhere near x' at time t. Next, we consider the lane-lines that the robot sees and compare them to what the Buggy expects to see from the map. Based on the difference, we can tweak the belief to better match up the observed lane-lines with those on the map.
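Here's a rough sketch of how that comparison could be scored. Representing a lane-line as an (offset, angle) pair and using a Gaussian-style score are simplifying assumptions for illustration, not the actual matching code.

```python
# Score a hypothesized position by how well the lane-lines the camera saw
# line up with the lane-lines the map says we should see from that position.
import numpy as np

def measurement_weight(observed_lines, expected_lines, sigma=0.5):
    total_error = 0.0
    for obs in observed_lines:
        # Match each observed line to the closest expected line and add up the mismatch.
        total_error += min(np.linalg.norm(obs - exp) for exp in expected_lines)
    # Small mismatch -> high weight, large mismatch -> low weight.
    return np.exp(-total_error / (2 * sigma ** 2))

# Example with made-up (offset, angle) pairs:
observed = [np.array([1.2, 0.05]), np.array([-1.0, 0.02])]
expected = [np.array([1.0, 0.00]), np.array([-1.1, 0.00])]
weight = measurement_weight(observed, expected)
```

Tweaking the belief then amounts to multiplying each candidate position's probability by its weight and renormalizing, exactly as in the observation step of the filter sketch above.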

Now for a quick demo. Here is an early test simulation from roughly two weeks into the project. In the top left is the video captured by the Buggy's camera (during a guerrilla roll in light traffic). In the bottom left are the lane-lines that the vision algorithms have extracted from the video. The middle shows the output from the filter: the buggy's belief about its location, projected onto an overhead map of the course. The right shows the belief again, but from the perspective of the buggy, and also shows matchings between lane-lines from the map and those observed by the vision system. Pay attention to the cloud surrounding the buggy, which represents the belief. It is interesting to watch the belief spread out when there are no lane-lines in the field of view, indicating the buggy is less confident about its location (higher variance), and then shrink back down when landmarks are available.