Well, hurray for submerging and staying submerged anyway.

A small portion of the crew responsible for the 2012 RoboSub entry.

I recently competed in the 2012 RoboSub Competition, held in temperate San Diego, California, as the lead programmer for the MSU team. You can read an excellent summary of the team experience from the MSU News Service. We were able to overcome some rather severe difficulties to make it into the semi-finals this year. What follows is an overview of some of the issues the competition foists upon the participants.

The requirements for the RoboSub events are straightforward:

  • Control position and attitude
  • Recognize colors and glyphs
  • Fire torpedoes at targets
  • Drop depth charges on targets
  • Bump a small object off a target
  • Manipulate a pipework object
    • Grasp
    • Lift
    • Carry
    • Release
  • Home on SONAR signal

Of course, this all becomes far more challenging when two factors come into play. First, the vehicle must be completely autonomous: it must perform all of these tasks without human intervention or communication, and without the aid of active sonar mapping. Second, it must perform these tasks under changing lighting and unusual magnetic conditions.

The actual competition takes place on Point Loma, a peninsula jutting out into the ocean like something made for jutting. The lighting changes hourly from this:

There's too much fog to see the lighthouse! We need another lighthouse to point out this one.

to this:

It's too bright! Can't we get some sort of shade so that lighthouse isn't so distracting?

Here’s what the buoys looked like this year when it was sunny:

Here, let me help you out.

Notice that most of each of those buoys is essentially white, and the green one (or is it the yellow?) has so many shades in it that it’s really hard to decide which shade is the right one to go for. When the fog or clouds roll in, the red one becomes almost the same color underwater as the green one, and the tasks at the bottom of the pool all but disappear in darkness.

There are a few strategies for dealing with the changing lighting conditions. Some teams race to the time-slot sign-ups and grab the same time every day, hoping the lighting conditions will be the same for the competition runs as they were during trial runs. This works well if you get an optimal bright-lighting time, but the strategy can backfire if it’s suddenly cloudy on competition day, or, even worse, cloudy for just a few minutes of your fifteen-minute run. Some teams opt for the more predictable morning fog time slots, which are very consistent in lighting, but unfortunately those are the worst lighting conditions possible…

I chose to adopt an adaptive plan. The color recognition component of the software relies on only a very small segment of the total view being the sought-after color (less than 0.1%) and adjusts its filtering parameters to find that “purest” color. It does this independently in the three RGB color channels to counter the color-diffusing properties of the pool. Some teams do this in HSV, but in my testing the computer found the correct color more reliably with the RGB split than with HSV. This may have to do with the hardware. Our team uses the Microsoft LifeCam Studio camera, retailing for about $100, whereas a lot of teams use something like the Guppy, which goes for about $800. Note that the LifeCam has a much higher resolution and a widescreen image, quite nice for the tasks at hand, which is why I chose it over the Guppy after testing both.
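To make that concrete, here’s a minimal sketch of the adaptive per-channel idea, assuming an OpenCV-style BGR frame as a NumPy array. This is not our competition code; the channel profile, the starting fraction, and the relaxation loop are illustrative stand-ins for the real tuning.

```python
import numpy as np

def adaptive_color_mask(frame_bgr, strong=(2,), weak=(0, 1),
                        start_fraction=0.001, min_pixels=20):
    """Find the "purest" pixels of a target color by adapting per-channel cuts.

    strong/weak are channel indices (0=B, 1=G, 2=R) the color should be
    high or low in (a red buoy: strong R, weak B and G). start_fraction and
    min_pixels are illustrative knobs, not values from the actual software.
    """
    h, w = frame_bgr.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    fraction = start_fraction               # begin by trusting <0.1% of pixels
    while fraction <= 0.05:                 # give up past 5% of the frame
        mask = np.ones((h, w), dtype=bool)
        for ch in strong:
            channel = frame_bgr[:, :, ch]
            # keep only this channel's brightest `fraction` of pixels
            mask &= channel >= np.percentile(channel, 100 * (1 - fraction))
        for ch in weak:
            channel = frame_bgr[:, :, ch]
            # ...and only the darkest `fraction` of channels the color lacks
            mask &= channel <= np.percentile(channel, 100 * fraction)
        if mask.sum() >= min_pixels:        # enough "purest" pixels survived
            return mask
        fraction *= 2                       # too strict for this lighting: relax
    return mask
```

The point of working on each channel independently is that the pool attenuates red, green, and blue differently with depth and haze, so a fixed threshold that works in the sun fails in the fog; letting each channel’s cut relax on its own is what keeps the filter honest as the lighting shifts.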

Oh, did I mention that the scummy side of the pool makes a great camouflage?

The pool got a sunburn the first day of the event and spent the rest of the time peeling.

Okay, vision hard, got it. What about bearing? Surely that’s straightforward, right?

Nope.

Point Loma is where the U.S. Navy degausses their submarines and warships.

All right men, let's get those magnets off the hull!

How does one degauss (remove the magnetic signature from) a large chunk of metal? With a massive electromagnet or charged power cable dragged through the water. This is only speculation, but the pool the competition takes place in is very near where this process happens, and the walls of the pool are filled with rebar. Net result: get too close to the edge of the pool and any instruments that rely on magnetism go haywire.

This doesn’t have an inexpensive solution, as was seen in this year’s competition. The University of Maryland usually does very well, but the placement of the objects in this year’s finals round was too close to the wall of the pool for UM’s sub to compensate. In fact, the only team that didn’t have an issue with the magnetic fields this year was Cornell, which used multiple IMU (Inertial Measurement Unit) devices, including one custom-built for the competition. Our choice was a simple $150 unit that gave us best-effort usability and worked well enough to get us through the qualification gate.

This is called a "Phidget" because it doesn't give a constant reading.

This year’s Phidget 9DOF IMU performed better than last year’s Razor 9DOF IMU, but only marginally so due to the magnetic anomalies at Point Loma. We scrapped the compass and magnetometer entirely and relied on the one gyro that gave consistent data to dead reckon through the gate.
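For the curious, the dead reckoning itself is nothing exotic: integrate the yaw rate from the one trustworthy gyro and steer back toward the heading you started with. Here’s a hedged sketch of that loop; gyro_rate() and drive() are hypothetical stand-ins for the Phidget polling and thruster interfaces, and the gains and timings are illustrative only.

```python
import time

def hold_heading(gyro_rate, drive, duration_s=30.0, kp=0.8):
    """Drive straight for duration_s seconds using only a yaw gyro.

    gyro_rate(): returns yaw rate in degrees/second (hypothetical sensor hook)
    drive(forward, yaw): thruster command in [-1, 1] (hypothetical actuator hook)
    kp: proportional gain on the accumulated heading error (illustrative value)
    """
    heading = 0.0                        # degrees, relative to the start heading
    last = time.time()
    end = last + duration_s
    while time.time() < end:
        now = time.time()
        dt, last = now - last, now
        heading += gyro_rate() * dt      # integrate rate into a relative heading
        # steer back toward zero error; no compass or magnetometer involved
        yaw_cmd = max(-1.0, min(1.0, -kp * heading / 45.0))
        drive(forward=1.0, yaw=yaw_cmd)
        time.sleep(0.02)                 # ~50 Hz control loop
    drive(forward=0.0, yaw=0.0)          # stop once the run is over
```

The obvious weakness is drift: with no magnetometer to correct it, the integrated heading slowly wanders, which is tolerable for a short straight shot through the qualification gate but not for a full fifteen-minute run.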

Score so far: camera for less than $100, awesome; IMU for less than $20,000, terrible.