Monday, March 21, 2011

Software Demo

Hey, welcome to the first algorithms post from the guy writing the software that will drive RoboBuggy. For now, I'll give a rough introduction to the big ideas at the heart of the project. Even if you're less technically inclined, don't miss the flashy video at the end of the post.

The central problem is localization: the Buggy's ability to know its location on the course at any moment in time. Once the buggy knows where it is, steering becomes easy. Of course, even a human driver has difficulty determining her location exactly. We must settle for a "guess" of the location, called a belief, in the form of a probability distribution over possible positions for the Buggy. You can think of the belief as a heatmap: the brighter a location on the map, the more likely the buggy is to be there. This belief encodes both the buggy's most likely position (the distribution mode) and its confidence in that position (the distribution variance).
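To make that concrete, here's a minimal sketch of what a belief might look like in code, assuming we discretize the course into a coarse grid. The grid size and the numpy representation are just illustrative, not the actual RoboBuggy internals:

```python
import numpy as np

# belief[i, j] = probability that the buggy is in grid cell (i, j).
# Start uniform: before the run we have no idea where we are.
belief = np.full((200, 300), 1.0 / (200 * 300))

def summarize(belief):
    """Most likely cell (the mode) and a spread measure (the variance)."""
    mode = np.unravel_index(np.argmax(belief), belief.shape)
    ys, xs = np.indices(belief.shape)
    mean_y, mean_x = (ys * belief).sum(), (xs * belief).sum()
    variance = (((ys - mean_y) ** 2 + (xs - mean_x) ** 2) * belief).sum()
    return mode, variance
```

A bright, tight blob means high confidence; a washed-out cloud means the variance is large and the buggy isn't sure where it is.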

To localize, we use a recursive filter. Here's the idea: at a given time t, the buggy considers the following:
* The buggy's belief at time t-1, where the buggy thought it was last time around
* The buggy's estimated velocity and steering angle, which offers a guess at how the buggy's position has changed
* The image captured by the buggy's camera, which we will use to refine our belief about location
The filter uses this information to create a belief about the Buggy's location at time t. This simple concept is extremely powerful: it allows the buggy to consider all of the information it has gathered over the entire run in an efficient manner. The alternative, trying to guess location from a single frame, is extremely difficult!
Intuitively, if we know where we were, then there are only so many places we could possibly be now, which makes determining where we are now that much easier.
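Here's a toy sketch of that recursive structure, with the course squashed down to one dimension and all the numbers made up; the function and variable names are placeholders, not the real code:

```python
import numpy as np

N = 100                              # course discretized into 100 cells
belief = np.full(N, 1.0 / N)         # time 0: completely unsure

def filter_step(belief, cells_moved, likelihood):
    """New belief from the old belief plus this step's odometry and camera evidence."""
    predicted = np.roll(belief, cells_moved)   # shift by estimated motion
    updated = predicted * likelihood           # weight by how well each cell explains the image
    return updated / updated.sum()             # renormalize to a probability distribution

# Each step only needs the previous belief and the newest measurements,
# yet the result summarizes everything seen over the run so far.
for _ in range(10):
    no_lines_seen = np.ones(N)                 # placeholder: camera saw nothing useful this frame
    belief = filter_step(belief, cells_moved=2, likelihood=no_lines_seen)
```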

I'll give a rough outline of how filtering is accomplished. First, we take the old belief and shift it to respect the buggy's movement: if we were near position x at time t-1 and moved with a certain velocity and direction, we are somewhere near x' at time t. Next, we consider the lane-lines that the robot sees and compare them to what the Buggy expects to see from the map. Based on the difference, we can tweak the belief to better match the observed lane-lines with those on the map.
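Continuing the toy example above, the "tweak" step might look something like this. The map here is just an invented array giving the distance to the nearest lane-line from each course cell, and the Gaussian weighting is one reasonable choice for scoring agreement, not necessarily what RoboBuggy actually uses:

```python
import numpy as np

N = 100
map_line_distance = np.linspace(5.0, 1.0, N)   # pretend map: expected line distance per cell
predicted = np.full(N, 1.0 / N)                # belief after the motion shift

observed_distance = 4.2                        # what the camera actually measured this frame
error = observed_distance - map_line_distance
likelihood = np.exp(-0.5 * (error / 0.5) ** 2) # cells whose map entry matches the camera score high

belief = predicted * likelihood
belief /= belief.sum()                         # back to a proper probability distribution
print(int(np.argmax(belief)))                  # most likely course cell after the tweak
```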

Now for a quick demo. Here is an early test simulation from roughly two weeks into the project. In the top left is the video captured by the Buggy's camera (during a guerrilla roll in light traffic). In the bottom left are the lane-lines that the vision algorithms have extracted from the video. The middle shows the output from the filter: the buggy's belief about its location, projected onto an overhead map of the course. The right shows the belief again, but from the perspective of the buggy, along with matchings between lane-lines from the map and those observed by the vision system. Pay attention to the cloud surrounding the buggy, which represents the belief. It is interesting to watch the belief spread out when there are no lane-lines in the field of view, indicating the buggy is less confident about its location (higher belief variance), and then shrink back down when landmarks are available.

Monday, February 21, 2011

First Rolls of the Semester

After two days of all-nighters, RoboBuggy was ready for rolls!.... or so we thought.

The primary goal of the first rolls was to gather data from the camera and the odometry. The camera would provide the useful line data, the quadrature encoder would tell us our steering angle, and the hall effect sensor would tell us how fast we were going. I would be driving the buggy via RC controller as it went down the hill.
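For a sense of how those two sensors turn into numbers the software can use, here's a rough sketch; the wheel size, magnet count, and encoder resolution below are guesses for illustration, not the real RoboBuggy values:

```python
WHEEL_CIRCUMFERENCE_M = 0.5   # assumed front-wheel circumference
MAGNETS_PER_REV = 1           # assumed: one hall-effect pulse per wheel revolution
COUNTS_PER_DEGREE = 10.0      # assumed quadrature counts per degree of steering

def speed_mps(hall_pulses, dt_seconds):
    """Forward speed from the hall effect sensor's pulse count over an interval."""
    revolutions = hall_pulses / MAGNETS_PER_REV
    return revolutions * WHEEL_CIRCUMFERENCE_M / dt_seconds

def steering_angle_deg(encoder_counts):
    """Steering angle from the quadrature encoder's signed count."""
    return encoder_counts / COUNTS_PER_DEGREE

print(speed_mps(hall_pulses=8, dt_seconds=1.0))    # 4.0 m/s
print(steering_angle_deg(encoder_counts=-35))      # -3.5 degrees
```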

The first night we stayed up, we were planning on getting the buggy ready for "capes". For those of you who are new to buggy, "capes" is short for "capability": it basically tests whether the buggy can brake in time in case of an emergency, and whether or not it swerves while braking. There are a few more specifications for capes, but most of them only apply to a human driver, so we didn't have to worry about them. (Someone from the Buggy universe, feel free to correct me on my definition of capes.)

Here's a video of RoboBuggy caping:


The swerve at the end was my fault; I overcompensated for the braking maneuver. Most of the other trials we ran were pretty smooth.

The actual rolls were not as successful. Our main goal was to gather data from the camera and the sensors. We quickly found out that RoboBuggy was far too light to go fast enough in the free roll. We will be correcting this problem by placing some lead shot in the tail. In addition, Nate and I wanted to take a path directly down the center of the street in order to maximize our data collection, but according to all the drivers we've talked to, this is a horrible line to take. Instead, the buggy should stay closer to the bike lane and then switch bike lanes at a given point. In the end, the buggy rolled to a stop right near the end of the free roll and had to be taken off the course because it was taking up too much time.

In addition, as we were taking the buggy back, we noticed some strange quirks. The IC controlling the brake had been jostled out of position, which caused the brake to engage randomly. The computer also shut itself down right before we started rolling, apparently because of power issues. We had tested the batteries before rolls, and they seemed to supply enough power to the components, but some additional testing showed that the batteries need to be FULLY charged, rather than just adequately charged, for RoboBuggy to perform to its fullest potential.

There is a bright side, though. A brand new RoboBuggy computer is on the way: it's a very expensive piece of equipment that's primarily used on trains. It's also very well encased in a black box, which should make it much more robust than the computer we are currently using. In addition, the hardware board was laid out in Eagle CAD and ordered through a company, which will make our hardware much more robust.

We will try to roll RoboBuggy at rolls next weekend, even if the new hardware doesn't show up. We'll continue to run it on RC until Nate is comfortable with giving the software a test run.

Stay Tuned!

Friday, December 24, 2010

Progress!

It's been an extremely productive couple of weeks!

Most of the hardware of the RoboBuggy is complete, but we're still waiting for interface boards in the mail. Luckily, I have it set up so that you can simply plug each component in where it's needed and it'll be ready to go!

We have actually been road testing the RoboBuggy. We walked out onto Forbes and Morewood at around 2AM EST to test the camera's vision in the dark. People were definitely confused, but the testing went extremely well. Nate's algorithms responded well, even with little light.

In addition, we actually pushed the RoboBuggy down the free roll, with me and Nate close behind it. We gained some valuable sensor data that Nate was able to use to test the algorithms.

After going through at least 6 sensors, tonight we finally got a hall effect sensor tested and mounted, and we're ready to play in traffic tomorrow. This time we'll be around Maggie Mo, to test the different types of street lines and the camera's response to them.

AEPi's an absolute mess, but we'll be cleaning that up tomorrow as well. In addition, here's a snap from our sketchy night run:



Also, we'd love to hear from the buggy community! Leave us feedback.

-Alex

Friday, December 17, 2010

Winter Break Update

Nate and I have been working until around 4AM every day this week on RoboBuggy. It's been pretty stressful, but it's still a lot of fun. We've accomplished a lot so far, but we still have a ways to go.

A few nights ago, we took the buggy out on Forbes and on Morewood at around 2AM. Nate needed to gather data to ensure his algorithms were robust enough to work in the dark as well. There weren't many cars on the road, but the drivers that did see us pushing a buggy in the middle of the street must have been extremely confused. The drivers familiar with the area probably realized that this was typical CMU behavior.

The hardware aspect of RoboBuggy is fairly small. One main computer sits right in the middle of the buggy and controls all of its peripherals. A Phidgets I/O board hangs off of the main computer and controls and reads data from the hardware components in parallel. The steering encoder runs through the I/O board, as does the hall effect sensor on the front wheel. In addition to the I/O board, there is a servo controller that also plugs into the main computer via USB.

Although we realize the analog camera that was on the old RoboBuggy is still very good, it's difficult to interface with our new computer brains. So instead, in true Nate Barshay image processing style, we replaced that camera with a PlayStation Eye (the newer PS3 version of the EyeToy). I could go into the specifics of why we chose that camera, but the bottom line is that it was made for image processing. In addition, Sony wants to push games rather than the camera itself, so it's pretty cheap for its value.

Two relays will control the braking valves. The relays are attached almost directly to the RC receiver, so we don't have to run the brake through the computer. In addition, the RoboBuggy is set up so that we can literally plug the steering servo into the RC receiver and steer it by remote control pretty easily. We may do this in order to gather data more effectively.

Three 6V sealed lead acid batteries will be powering the RoboBuggy. I built an enclosure for them last night.

Some set-backs we've had:

We thought we could get away with building our own simple circuit for the quadrature decoder. It turns out it's not that simple, and we need to buy specialized hardware to make it work.
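For the curious, here's roughly what a quadrature decoder has to do: watch the two offset channels (A and B) and count up or down depending on which one changed. Doing this reliably at high tick rates without missing transitions is exactly why dedicated decoder hardware exists. This is just an illustration of the principle, not our circuit:

```python
# Transition table: (previous AB state, new AB state) -> count change
STEP = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """Signed position count from a stream of (A, B) channel samples."""
    count = 0
    prev = samples[0][0] << 1 | samples[0][1]
    for a, b in samples[1:]:
        state = a << 1 | b
        count += STEP.get((prev, state), 0)   # an unrecognized jump means a missed step
        prev = state
    return count

# Two full forward cycles of the A/B pattern -> 8 counts:
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))
```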

The hall effect sensor broke. I got new ones; all I have to do is attach them.

Nate's been chugging away at the code, and in our next update I'll provide a more detailed look at how the algorithms might work. Nate's having a bit of trouble with the methods being too slow, and he's working to fix that. In addition, I'll try to provide some stills from our sketchy night run.

Until next time,

Alex

Sunday, December 12, 2010

The Next Two Weeks.....

As of right now, Nate has a pretty clear idea of how the navigation algorithm is going to work. It's essentially an image processing algorithm coupled with a navigation algorithm based on position, time, and velocity, where the two play off each other depending on which data is readily available. The buggy will rely on clues from its previous position and velocity, and make a guess as to where it is with a given confidence. If there is a white line visible in the road, the confidence will increase. The buggy will compare its projected position on the road with a "map" of the buggy course already in its brain. We predict that the map will be represented as a string of GPS coordinates that trace a line through the track.
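Here's a rough sketch of that idea, assuming the GPS coordinates have already been converted into a local x/y frame in meters; the waypoints, headings, and confidence numbers below are invented for illustration, not anything from the real system:

```python
import math

# Pretend course map: a string of waypoints tracing a line through the track.
COURSE = [(0.0, 0.0), (50.0, 5.0), (120.0, 30.0), (200.0, 80.0)]

def nearest_point_on_course(x, y):
    """Snap an estimated position onto the course polyline."""
    best, best_dist = None, float("inf")
    for (x1, y1), (x2, y2) in zip(COURSE, COURSE[1:]):
        dx, dy = x2 - x1, y2 - y1
        t = max(0.0, min(1.0, ((x - x1) * dx + (y - y1) * dy) / (dx * dx + dy * dy)))
        px, py = x1 + t * dx, y1 + t * dy
        dist = math.hypot(x - px, y - py)
        if dist < best_dist:
            best, best_dist = (px, py), dist
    return best

# Dead-reckon from the previous estimate, then compare against the map.
x, y, heading, speed, dt = 48.0, 4.0, 0.12, 6.0, 0.1
x += speed * dt * math.cos(heading)
y += speed * dt * math.sin(heading)

confidence = 0.4
saw_white_line = True           # pretend the camera picked up a lane line this frame
if saw_white_line:
    confidence = min(1.0, confidence + 0.2)

print(nearest_point_on_course(x, y), confidence)
```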

I have been working on the hardware and getting it all to interface together. There are circuit boards to print and components to solder, so I definitely have my work cut out for me. Right now I'm working on setting up the emergency brake to stop the buggy with an RC transmitter and integrating an I/O board to interact with the computer.

Nate and I will be in Pittsburgh working on Robobuggy until Christmas day. If you're in Pittsburgh - Stop by! We're at the corner house on Forbes and Morewood, the AEPi house.

-Alex

Friday, November 19, 2010

And we're off!

The SURG committee approved the RoboBuggy grant I wrote, so now we have $1000 to spend!

We have been busy making system diagrams of all the components we need, and we have begun formulating a list of various other electrical components, in addition to setting up our Zotac motherboard.

Here is the system diagram for the entire RoboBuggy:



The only change is that the power supply is at around 18 volts, as we bought three 6V 5Ah SLA batteries. The voltage regulator board will be adjusted accordingly.

In addition, an R/C controller will be used as an emergency stop for the RoboBuggy if it goes off the course. The circuit for that setup is below:


Ubuntu is now installed on the motherboard! The GUI will be removed, and startup scripts will be implemented in order for it to function reliably. We will be accessing this computer via SSH in the meantime.

Tomorrow Nate and I are bussing to the hardware store bright and early in order to make a casing for the computer!

Stay tuned!

-Alex


Monday, October 25, 2010

Breaking News!

AEPi has been looking for a brother to drive for a while now. Just in case we don't have drivers one weekend for whatever reason, we want to have a brother we know we'll be able to drag out there. I personally have been able to fit inside Kamikaze, but have been unable to steer effectively. Our current buggy chair, Jake Yosafat, was able to fit inside Kamikaze, but wasn't able to see well.

I'm not sure why we didn't think of it sooner, but Nathaniel Barshay is the perfect candidate. Sure enough, he slides into Kamikaze perfectly, and steers and sees really well.

Since he'll be heading up the vision algorithms for RoboBuggy, driving an actual buggy will give him a perfect idea of the course.

Stay tuned for added hilarity.