Computer Vision on the Trevor Rover

Author: Mathew Kallada

Change Record

2014.06.24 - Document updated.
2014.05.21 - Document created.

Introduction

Purpose

Computer vision capabilities are crucial for a rover, helping it analyze and understand its environment. This document outlines the key computer vision features of the Italian Mars Society’s Trevor Rover.

Glossary

CV
Computer Vision
API
Application Programming Interface
ERAS
European Mars Analog Station
IMS
Italian Mars Society

Overview

This module provides a series of computer vision operations designed for navigation and exploration with the Trevor Rover. These operations include target recognition and hazard detection, ultimately satisfying the requirements of the Marscape scenario described in [5].

Design Considerations

Hardware Requirements

We will be using the Minoru 3D webcam and the Raspberry Pi. To handle fast-moving objects, we will need to optimize the throughput of the Minoru + Raspberry Pi pipeline.

Software Requirements

Object Recognition and Tracking

We will use scikit-learn for machine learning (predicting which objects the rover has previously seen) and OpenCV for image analysis (computing disparity maps, locating objects in the image).
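
A minimal sketch of the depth side of this pipeline is shown below. It assumes the Minoru's two sensors appear as video devices 0 and 1, and the block-matcher parameters are untuned placeholders, not settings from this project.

    import cv2

    # The Minoru exposes its two sensors as separate capture devices;
    # indices 0 and 1 are assumptions and depend on the host system.
    left_cap = cv2.VideoCapture(0)
    right_cap = cv2.VideoCapture(1)

    ok_l, left = left_cap.read()
    ok_r, right = right_cap.read()
    if ok_l and ok_r:
        gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
        gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

        # Block matching: numDisparities must be a multiple of 16 and
        # blockSize odd; both values here are placeholders.
        # (On OpenCV 2.4 the constructor is cv2.StereoBM instead.)
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(gray_l, gray_r)  # fixed-point, scaled by 16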

Interface Requirements

Hardware Interfaces

We will be using the Minoru 3D webcam and the Raspberry Pi for computer vision processing as well as AI planning and scheduling.

User Interfaces

To add human reasoning to the rover’s decision making, there will be an interface that allows operators to specify properties of previously seen objects.
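
The exact interface is not specified here; the sketch below only illustrates the idea, with an in-memory store and property names that are purely illustrative.

    # Hypothetical operator-facing hook: attach human-supplied properties
    # to an object cluster produced by the recognition module.
    object_properties = {}

    def annotate_object(cluster_id, **properties):
        """Record operator-supplied properties (e.g. hazardous=True)."""
        object_properties.setdefault(cluster_id, {}).update(properties)

    # Example: an operator marks cluster 3 as a known, harmless marker post.
    annotate_object(3, label="marker post", hazardous=False)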

Software Interfaces

https://bytebucket.org/italianmarssociety/eras/raw/9d44b4992114703c17d527b2299413f5641ca9db/servers/vision/doc/Images/SA.png

An input image is sent to several tasks for processing, including object recognition and depth detection. Once this information is retrieved, we can draw conclusions, such as the presence of nearby hazards, and finally send this data to the EUROPA system ([10]).
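
A minimal sketch of this fan-out is shown below. Every function is a placeholder for a real pipeline stage, and send_to_europa stands in for the pyEUROPA hand-off, whose actual API is not covered in this document; the disparity threshold and fusion rule are assumptions.

    NEAR_DISPARITY = 48  # assumed threshold: larger disparity means closer

    def recognize_objects(frame):
        """Placeholder for the HOG/DBSCAN recognizer sketched later."""
        return []  # list of (cluster_id, bounding_box) pairs

    def mean_disparity(disparity, box):
        """Average disparity inside an object's bounding box."""
        x, y, w, h = box
        return disparity[y:y + h, x:x + w].mean()

    def send_to_europa(objects, hazards):
        """Hypothetical hand-off to the EUROPA planner via pyEUROPA."""

    def process_frame(frame, disparity):
        objects = recognize_objects(frame)
        # Fusion rule (assumed): flag anything closer than the threshold.
        hazards = [o for o in objects
                   if mean_disparity(disparity, o[1]) > NEAR_DISPARITY]
        send_to_europa(objects, hazards)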

Performance Requirements

Ideally, the rover should perceive and respond to its environment in real time.
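
One simple way to check this requirement is to time the pipeline over a batch of frames, as in the sketch below; any target frame rate would be a project decision, not a figure from this document.

    import time

    def measure_fps(step, frames):
        """Run one pipeline stage over captured frames and report throughput."""
        start = time.time()
        for frame in frames:
            step(frame)
        return len(frames) / (time.time() - start)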

Software Design

https://bytebucket.org/italianmarssociety/eras/raw/9d44b4992114703c17d527b2299413f5641ca9db/servers/vision/doc/Images/CD.png

High-level view of Object Recognition

https://bytebucket.org/italianmarssociety/eras/raw/a6a9815420161a89065421be5786981300a74be5/servers/vision/doc/Images/IR.png
This module takes a HOG representation ([11]) of each object on screen. Below, I have collected a series of objects and projected the dataset to two dimensions with PCA ([12]).
https://bytebucket.org/italianmarssociety/eras/raw/4afa68b5bec747daa40b1cc18420f806cb6f1d74/servers/vision/doc/Images/IR_data.png

Each color represents a different cluster (found by DBSCAN, as described in [13]), and each cluster corresponds to an object on screen. This way, we can recognize objects we have seen earlier (the triangle is an object we are trying to predict).
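
A minimal sketch of this flow is given below. It borrows scikit-image's hog for brevity (the document itself only names scikit-learn and OpenCV; OpenCV's HOGDescriptor would serve equally well), and the HOG settings and DBSCAN parameters are untuned placeholders.

    import numpy as np
    from skimage.feature import hog
    from sklearn.decomposition import PCA
    from sklearn.cluster import DBSCAN

    def hog_features(patches):
        """patches: grayscale object crops, all resized to a common shape."""
        return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)) for p in patches])

    def cluster_objects(patches):
        features = hog_features(patches)
        embedded = PCA(n_components=2).fit_transform(features)  # for the 2-D plot
        labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(features)
        return embedded, labels  # one label per sighting; -1 means noise/unseen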

Development and Progression

Standards Compliance

The guidelines defined in [3] should be followed.

Planning

A high level schedule is shown below.

  • Milestone I: Finish Object Recognition & Target Tracking
  • Milestone II: Environment Analysis

[Midterm Evaluation]

  • Milestone III: Integrate with pyEUROPA
  • Milestone IV: Integrate with the Waldo interface