|About the Benchmark|
|Obtaining the Benchmark|
|Building the Benchmark|
|The Directory Structure|
|Running the Benchmark|
|Creating New Benchmarks|
|Publishing the Results|
Specialized Parallel Architectures Research Group Department of Computer Science University of Massachusetts Amherst, MA 01003
Permission is hereby granted for research and educational use only. A paper describing the benchmark is available.
You may not transfer this software to any other organization or individual without the expressed, written permission of the Department of Computer Science.
This software is made available on an as-is basis. No warranty of correctness is either expressed or implied by its release. Neither the University of Massachusetts nor the authors shall be held liable for any damages resulting from its use.
The following people worked on this project:
|Sunit Bhalla||Jim Burrill||Steve Dropsho||Martin Herbordt|
|Rohan Kumar||Mike Rudenko||Mike Scudder||Glen Weaver|
The task performed by this benchmark is the recognition and tracking of an approximately specified 2 1/2 dimensional "mobile" sculpture that is moving in a cluttered environment, given a series of synthetic images from simulated intensity and range sensors.
These scenes follow the same pattern as the static version of the DARPA IU Benchmark, but in the dynamic benchmark, the mobile and chaff are blown around the scene by an idealized wind to produce predictable motion. The motion involves movement of the entire mobile as a unit, and movement of its individual components. The motions are both translational and rotational, and they are controlled by reasonably realistic physical constraint models.
The dynamic benchmark is meant to supplement, rather than replace, the static benchmark, which tests system performance at the kernel operation level within the framework of a larger task. We recommend that developers begin by implementing the static benchmark on their computers; the motion benchmark can then be constructed more easily by reusing code modules from the static benchmark.
The goal of the dynamic benchmark is to extend the testing of system performance for a longer period of time so that, for example, caches and page tables will be filled and achieve steady-state behavior. The benchmark also explores I/O and real-time capabilities of the systems under test, and involves more high-level processing. Thus, the combination of the two benchmarks allows developers to analyze the performance and behavior of systems at both a fine level of granularity on a single burst of processing, and at a coarser granularity under a sustained load.
Unlike the static benchmark, there are no fixed data sets (except for a small test set called "sample"). Given the number of frames that must be processed in a single test, it is too unwieldy to prepare the input data for distribution. Instead, we have developed a data set generator that can be used to repeatably produce the same image sequence from a set of input parameters.
Release 2 of version 2 of the dynamic benchmark represents a major re-organization of the system code and major changes in the tracking logic.
You may ftp the benchmark from
Its size is approximately 0.5 MB compressed. Uncompress the file with the Unix uncompress utility, then extract the files with the Unix tar utility.
To build the benchmark:
Your system must be ANSI C and POSIX 1003.1 compliant.
make compile

(You may need to use GNU make, which is available from prep.ai.mit.edu using anonymous ftp.) This builds the executable file "Benchmark" using the C compiler you specified.
Except for the sample directory, these directories are empty. The GenSeq program is used to create the necessary files for each benchmark from its "generator" file. The "sample" benchmark is the exception: it does not have a generator file.
To run the "sample" benchmark, execute the following in directory "./xxx/bin":
./Benchmark ../../benchmarks/sample/sample.setup -t 1

The images that come with this distribution work with the included setup file "sample.setup".
The benchmark goes through three phases. First, it searches for a mobile in successive images until a mobile is identified. This is essentially a static image interpretation task. The code is nearly identical to the first version of the benchmark. After a mobile is identified, the benchmark uses the next intensity and depth images to bootstrap its velocity vectors.
After the first two frames the benchmark tracks the mobile for the remaining frames. After each frame is processed, the benchmark prints out the number of rectangles found and the number of rectangles hallucinated. If you select the visual X window display, you will also see the found rectangles outlined in green.
./Benchmark [-p n] [-t trace] [-r rect] setup_file
During the searching and bootstrapping phases, the benchmark image display uses orange to indicate rectangles extracted from strong cues, and blue for rectangles on the probe list. The tracking phase uses green for identified rectangles, orange for hallucinated rectangles, and blue for lost rectangles. By default, the benchmark uses single pixel width lines to outline rectangles, but the SLIW environment variable can be used to control the thickness of lines.
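For illustration, the line width can be set in the environment before the benchmark is launched. The text only names the SLIW variable; the value 3 and its interpretation as a width in pixels are assumptions.

```shell
# SLIW controls the thickness of rectangle outlines. The variable name
# comes from the text above; the value 3 (pixels) is an assumption.
SLIW=3
export SLIW
echo "SLIW=$SLIW"    # the benchmark reads this setting at startup
```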
The benchmark writes to the files specified in the setup file. The sample.setup file supplied does not cause any result images to be written. The timing information is appended to a file called "../sample.data" when trace is 0. A human-readable version is written to "../sample.log".
Five different benchmark data sets are supplied with this benchmark. If you wish to use any but the "sample" benchmark, you will first have to build the set of images and other data files for that benchmark. This is accomplished by running the GenSeq program. For example,
./GenSeq ../../generators/twist.gen -G

will generate the image files, model file, and setup file for the "twist" benchmark. (You may also execute "make make_twist" to accomplish this.)
If you leave off the -G parameter, GenSeq will bring up an X window display and allow you to select the images that should be part of the image sequence for the benchmark. It is recommended that you NOT change any of the five existing benchmarks.
To create a new benchmark, make a copy of one of the files in the "generators" directory and then edit this new file. You will want to change the pathnames for the files that the benchmark will use. Make sure that these pathnames reference existing directories.
To generate a different benchmark, change the "chaff_state" and "rect_state" values. These values control the random number generator. You may also change any of the other values such as "rigid_pendulum" or "rectangle_twist_max_degrees".
Note: changing these values may result in a mobile that cannot be found, or in mobile motion that the benchmark is not able to track.
Once you have created the "generator" file, use the GenSeq program interactively to generate and select the images. It will also create the model file and setup file for the new benchmark.
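The copy-and-edit step above can be sketched as follows. The line-oriented "name value" format, the file name "my_bench.gen", and the seed values are all assumptions based on the parameter names quoted above; inspect a real file in the "generators" directory for the actual syntax.

```shell
# Sketch: derive a new generator file and change the random-number seeds.
# The "name value" line format is an assumption; real generator files may
# differ. Here we fabricate a tiny file to demonstrate the edit.
cat > my_bench.gen <<'EOF'
chaff_state 12345
rect_state 67890
EOF
# Change the seeds to obtain a different benchmark.
sed -e 's/^chaff_state .*/chaff_state 31415/' \
    -e 's/^rect_state .*/rect_state 27182/' my_bench.gen > my_bench.gen.new
mv my_bench.gen.new my_bench.gen
grep _state my_bench.gen
```

Remember to also update the pathnames inside the new file so they point at existing directories, as noted above.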
./GenSeq [[-h] | [-l] [-G]] generator_file

The parameters for GenSeq are:
If -G is not specified, the program runs interactively to allow you to specify which images should be included in the image sequence.
The generator file will be created if it doesn't exist.
We would like to publish the results of this benchmark for many different architectures. We have run it already on nine different systems at the University of Massachusetts. If you would like your system included, run the long_hard benchmark with a trace value of 0. E-mail the resulting long_hard.data file to email@example.com. Your participation will be appreciated.
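For concreteness, the submission run follows the same pattern as the sample run, with a trace value of 0. The setup-file path below is an assumption modeled on the "sample" benchmark's layout; run the command from ./xxx/bin after generating the long_hard benchmark with GenSeq.

```shell
# Assemble the submission command line. The path is an assumption modeled
# on the "sample" layout; adjust it to match your generated files.
SETUP=../../benchmarks/long_hard/long_hard.setup
echo "./Benchmark $SETUP -t 0"
```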
Two makefile-include files provide the various operations supported. The makefile/compile.mak file controls compilation and linking. The makefile/images.mak file controls benchmark generation and execution. All make operations should be performed in your ./xxx/bin directory.
make compile
make make_benchmarks
make run_benchmarks
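The three make targets can be chained so that a failure stops the sequence. A minimal sketch, assuming the targets above are invoked from your ./xxx/bin directory as described; the helper name "run_all" is ours.

```shell
# Helper: run the full workflow in order, stopping at the first failure.
# Target names come from the text above; invoke from ./xxx/bin.
run_all() {
    make compile &&
    make make_benchmarks &&
    make run_benchmarks
}
echo "run_all defined"   # call run_all to start the sequence
```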