Hey, welcome to the first algorithms post from the guy writing the software that will drive RoboBuggy. For now, I'll give a rough introduction to the big ideas at the heart of the project. Even if you're less technically inclined, don't miss the flashy video at the end of the post.
The central problem is localization: the Buggy's ability to know its location on the course at any moment in time. Once the buggy knows where it is, steering becomes easy. Of course, even a human driver has difficulty determining her location exactly. We must settle for a "guess" of the location, called a belief, in the form of a probability distribution over possible positions for the Buggy. You can think of the belief as a heatmap: the brighter a location on the map, the more likely the buggy is to be there. This belief encodes both the buggy's most likely position (the distribution's mode) and its confidence in that position (the distribution's variance).
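To make the idea concrete, here is a minimal sketch of one common way to represent such a belief in code: a cloud of weighted sample positions, i.e. a particle set. The two-dimensional state, the numbers, and the names are illustrative choices of mine, not the actual RoboBuggy code.

```python
import numpy as np

# A belief over 2-D positions, represented as N weighted samples ("particles").
# Each row of `positions` is one candidate (x, y) location on the course;
# `weights` sum to 1, so together they form a probability distribution.
N = 1000
positions = np.random.uniform(0.0, 100.0, size=(N, 2))  # initially: could be anywhere
weights = np.full(N, 1.0 / N)

# The mode -- the single most likely position -- is the heaviest particle.
most_likely = positions[np.argmax(weights)]

# The variance measures spread: small means confident, large means uncertain.
mean = np.average(positions, axis=0, weights=weights)
variance = np.average((positions - mean) ** 2, axis=0, weights=weights)
```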
To localize, we use a recursive filter. Here's the idea: at a given time t, the buggy considers the following:
* The buggy's belief at time t-1, where the buggy thought it was last time around
* The buggy's estimated velocity and steering angle, which offers a guess at how the buggy's position has changed
* The image captured by the buggy's camera, which we will use to refine our belief about location
The filter uses this information to create a belief about the Buggy's location at time t. This simple concept is extremely powerful: it allows the buggy to consider all of the information it has gathered over the entire run in an efficient manner. The alternative, trying to guess the location from a single frame, is extremely difficult!
Intuitively, if we know where we were, then there are only so many places we could possibly be now, which makes determining where we are now much easier.
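Structurally, the whole filter is a short recursion: each timestep consumes the previous belief plus the latest odometry and camera data, and emits the new belief. Here is a skeleton of that loop; `predict` and `update` are placeholder names of mine, sketched in more detail after the next paragraph.

```python
def predict(belief, odometry):
    """Motion step: shift the belief to reflect the buggy's estimated movement."""
    return belief  # placeholder; fleshed out below

def update(belief, image):
    """Measurement step: refine the belief using the lane-lines in the image."""
    return belief  # placeholder; fleshed out below

def filter_step(belief_prev, odometry, image):
    """One recursion: the belief at time t-1 plus the new sensor readings
    yields the belief at time t. Everything gathered over the entire run is
    summarized in the belief, so each step only touches the newest data."""
    return update(predict(belief_prev, odometry), image)
```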
I'll give a rough outline of how filtering is accomplished. First, we take the old belief and shift it to respect the buggy's movement: if we were near position x at time t-1 and moved at a certain velocity and direction, we are somewhere near x' at time t. Next, we consider the lane-lines that the robot sees and compare them to what the Buggy expects to see from the map. Based on the difference, we can tweak the belief to better match the observed lane-lines with those on the map.
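Here is a hedged sketch of those two steps over the particle representation from the first code block, with `odometry` unpacked into a velocity, heading, and timestep, and the camera image already reduced to extracted lane-lines. The motion model, the noise scale, and `lane_line_agreement` are stand-ins I invented for illustration, not the real vision pipeline.

```python
import numpy as np

def lane_line_agreement(position, observed_lines, course_map):
    """Hypothetical scoring function: returns a value in (0, 1] measuring how
    well the lane-lines the map predicts we'd see from `position` line up
    with the lane-lines the camera actually extracted."""
    return 1.0  # the real comparison lives in the vision code

def predict(positions, weights, velocity, heading, dt, motion_noise=0.5):
    """Motion step: a particle near x at time t-1 moves to somewhere near
    x' = x + v*dt, with added noise to model odometry error."""
    step = velocity * dt * np.array([np.cos(heading), np.sin(heading)])
    noisy = positions + step + np.random.normal(0.0, motion_noise, positions.shape)
    return noisy, weights

def update(positions, weights, observed_lines, course_map):
    """Measurement step: reweight each particle by how well its predicted
    lane-lines agree with the observed ones, then renormalize."""
    scores = np.array([lane_line_agreement(p, observed_lines, course_map)
                       for p in positions])
    new_weights = weights * scores
    return positions, new_weights / new_weights.sum()
```

Note how the noise in `predict` makes the particle cloud spread out whenever no lane-lines are in view (only the motion step runs), while `update` is what shrinks it back down once landmarks reappear; that is exactly the behavior visible in the demo below.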
Now for a quick demo. Here is an early test simulation from roughly two weeks into the project. In the top left is the video captured by the Buggy's camera (during a guerrilla roll in light traffic). In the bottom left are the lane-lines that the vision algorithms have extracted from the video. The middle shows the output from the filter: the buggy's belief about its location, projected onto an overhead map of the course. The right shows the belief again, but from the perspective of the buggy, and also shows matchings between lane-lines from the map and those observed by the vision system. Pay attention to the cloud surrounding the buggy, which represents the belief. It is interesting to watch the belief spread out when there are no lane-lines in the field of view, indicating the buggy is less confident about its location (higher belief variance), and then shrink back down when landmarks are available.
Monday, March 21, 2011
Monday, February 21, 2011
First Rolls of the Semester
After two days of all-nighters, RoboBuggy was ready for rolls!... or so we thought.
The primary goal of the first rolls was to gather data from the camera and the odometry sensors. The camera would provide the useful line data, the quadrature encoder would tell us our steering angle, and the hall effect sensor would tell us how fast we were going. I would be driving the buggy via RC controller as it went down the hill.
The first night we stayed up, we were planning on getting the buggy ready for "capes". For those of you who are new to buggy, "capes" is short for "capability": it basically tests whether the buggy can brake in time in case of an emergency, and whether or not it swerves while braking. There are a few more specifications for "capes", but most of them are reserved for a human driver, so we didn't have to worry about them. (Someone from the Buggy universe, feel free to correct me on my definition of "capes".)
Here's a video of RoboBuggy caping:
The swerve at the end was my fault: I overcompensated for the braking maneuver. Most of the other trials we ran were pretty smooth.
The actual rolls were not as successful. Our main goal for them was to gather data from the camera and the sensors. We quickly found out that RoboBuggy was far too light to go fast enough in the free roll. We will be correcting this problem by placing some lead shot in the tail. In addition, Nate and I wanted to take a path directly down the center of the street in order to maximize our data collection, but according to all the drivers we've talked to, this is a horrible line to take. Instead, the Buggy should follow closer to the bike lane, and then switch bike lanes at a given point. In the end, the buggy rolled to a stop right near the end of the free roll and had to be taken off of the course, because it was taking up too much time.
In addition, as we were taking the buggy back, we noticed some strange quirks. The IC controlling the brake had been jostled out of position, causing the brake to engage randomly. The computer also shut itself down right before we started rolling, which seemed to be caused by power consumption issues. The batteries were tested before rolls and seemed to supply enough power to the components, but after some additional testing, we determined that the batteries need to be FULLY charged, not just adequately charged, for RoboBuggy to perform to its fullest potential.
There is a bright side, though. A brand new RoboBuggy computer is on the way: it's a very expensive piece of equipment that's primarily used on trains, and it's well encased in a black box, which should make it much more robust than the computer we are currently using. In addition, the hardware board was laid out in Eagle CAD and ordered through a company, which will make our hardware much more reliable.
We will try to roll RoboBuggy at rolls next weekend, even if the new hardware doesn't show up. We'll continue to run it on RC until Nate is comfortable with giving the software a test run.
Stay Tuned!