Computer Vision on the Trevor Rover¶
Author: Mathew Kallada
Change Record¶
Introduction¶
Purpose¶
Computer vision capabilities are crucial for a rover, helping it analyze and understand its environment. This document outlines the key computer vision features of the Italian Mars Society's Trevor Rover.
Applicable Documents¶
- [1] – C3 Prototype document v.4
- [2] – OpenCV
- [3] – Software Engineering Practices Guidelines for the ERAS Project
- [4] – ERAS 2013 GSoC Strategic Plan
- [5] – Marscape Scenario
- [6] – TANGO distributed control system
- [7] – PyTANGO – Python bindings for TANGO
- [8] – Minoru 3D Webcam
- [9] – V-ERAS
- [10] – EUROPA Planning Software
- [11] – Histogram of Oriented Gradients
- [12] – Principal Component Analysis
- [13] – Density-based scan (DBSCAN)
Glossary¶
CV
- Computer Vision
API
- Application Programming Interface
ERAS
- European Mars Analog Station
IMS
- Italian Mars Society
Overview¶
This module provides a series of computer vision operations designed for navigation and exploration with the Trevor Rover. These operations include target recognition and hazard detection, ultimately satisfying the requirements of the Marscape scenario described in [5].
Design Considerations¶
Hardware Requirements¶
We will be using the Minoru 3D webcam and the Raspberry Pi. For fast-moving objects, we will need to optimize the processing speed of the Minoru + Raspberry Pi pipeline.
Software Requirements¶
Object Recognition and Tracking¶
We will use scikit-learn for machine learning (predicting which objects the rover has previously seen), and OpenCV for image analysis (creating disparity maps, locating objects in images).
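
A minimal disparity-map sketch, assuming the OpenCV 3+ Python API (cv2.StereoBM_create; OpenCV 2.x exposes the same block matcher as cv2.StereoBM) and a pair of already rectified grayscale frames from the Minoru's two sensors; the file names and parameter values are illustrative only:

    import cv2

    # Load the two (placeholder) frames of the stereo pair as grayscale.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block-matching stereo: numDisparities must be a multiple of 16 and
    # blockSize must be odd. compute() returns fixed-point disparities
    # scaled by 16.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)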
Interface Requirements¶
Hardware Interfaces¶
We will be using the Minoru 3D webcam and the Raspberry Pi for computer vision processing and for AI planning and scheduling.
User Interfaces¶
To add human reasoning to the rover's decision-making abilities, there will be an interface that allows operators to specify properties of previously seen objects.
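
A hypothetical sketch of what such an interface might store; annotate and the property names are placeholders rather than the project's actual API:

    # Operator-supplied properties, keyed by the ID of a previously seen
    # object (e.g. a cluster ID from the recognition module).
    object_properties = {}

    def annotate(object_id, **props):
        """Record operator-supplied properties for a known object."""
        object_properties.setdefault(object_id, {}).update(props)

    annotate(3, label="boulder", hazard=True)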
Software Interfaces¶

An input image is sent to several tasks for processing, including object recognition and depth detection. Once we retrieve this information, we can infer conclusions such as the presence of nearby hazards, and finally send this data to the EUROPA system ([10]).
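
A hypothetical sketch of this flow; recognize_objects, estimate_depth, and send_to_europa are placeholder names for the processing tasks and the EUROPA hand-off ([10]), not the project's actual API:

    def recognize_objects(frame):
        return []          # placeholder: would return detected objects

    def estimate_depth(left, right):
        return None        # placeholder: would return a disparity/depth map

    def send_to_europa(hazards):
        print("hazards:", hazards)  # placeholder for the pyEUROPA hand-off

    def process_frame(left, right):
        # Fan the input image out to the processing tasks, then combine
        # their results into higher-level conclusions for the planner.
        objects = recognize_objects(left)
        depth = estimate_depth(left, right)
        hazards = [o for o in objects if o.get("too_close")]
        send_to_europa(hazards)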
Performance Requirements¶
Ideally, the rover should interact with and respond to its environment in real time.
Software Design¶
[Figure: High-level view of Object Recognition]
This module takes a HOG representation ([11]) of each object on screen. Below, I have collected a series of objects and projected the dataset down to two dimensions with PCA ([12]).

Each color represents a different cluster (found by DBSCAN as described in [13]), and each cluster represents an object on screen. This way, we can recognize objects we have seen earlier (the triangle marks an object we are trying to predict).
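
A minimal sketch of this recognition pipeline, assuming scikit-image for the HOG descriptors and scikit-learn for PCA and DBSCAN; the parameter values are illustrative, not the project's final settings:

    import numpy as np
    from skimage.feature import hog
    from sklearn.cluster import DBSCAN
    from sklearn.decomposition import PCA

    def hog_features(patches):
        """Compute a HOG descriptor ([11]) for each grayscale image patch."""
        return np.array([
            hog(p, orientations=9, pixels_per_cell=(8, 8),
                cells_per_block=(2, 2))
            for p in patches
        ])

    def cluster_objects(patches):
        # Project the HOG descriptors to two dimensions with PCA ([12]),
        # then group them with DBSCAN ([13]); a label of -1 marks noise,
        # i.e. an object the rover has not seen before.
        coords = PCA(n_components=2).fit_transform(hog_features(patches))
        labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(coords)
        return coords, labels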
Development and Progression¶
Standards Compliance¶
The guidelines defined in [3] should be followed.
Planning¶
A high-level schedule is shown below.
- Milestone I: Finish Object Recognition & Target Tracking
- Milestone II: Environment Analysis
[Midterm Evaluation]
- Milestone III: Integrate with pyEUROPA
- Milestone IV: Integrate with the Waldo interface