rigid body tracking (X-post)

Ton Van Den Bogert
02-01-1993, 07:30 AM
Dear Biomch-L subscribers,

The item below is cross-posted from the Usenet newsgroup
sci.image.processing. This technology for 3-D tracking of rigid
bodies from gray-scale video images seems to be developing very
rapidly. I sure hope that Oxford Metrics, Motion Analysis
Corporation, and their like are paying attention. But they
are probably already working (in secret) on even more
sophisticated methods... :-).

-- Ton van den Bogert, Biomch-L moderator
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Newsgroups: sci.image.processing
From: sjreeves@eng.auburn.edu (Stan Reeves)
Subject: object tracking (summary)
Organization: Auburn University Engineering
Date: Tue, 26 Jan 1993 17:07:14 GMT


I recently posted the following question:

Beginning in January, I will be leading a senior design project that
involves tracking a single rigid object in an image sequence. Can
anyone suggest some standard algorithms for accomplishing this? I
could dream up several approaches, but I would like to point the
students to some references on techniques that are commonly used for
this type of problem. I'm hoping to have them check out more than
one approach, so a variety of responses would be great. Any help is
appreciated.

Here is a summary of the responses I got:

From olli@ee.oulu.fi Wed Dec 30 00:07:12 1992

You leave many questions open.
Do you want to do the tracking with 6-dof or just 2-dof?
How much computation is allowed? Is the camera system
stationary? What kind of camera system is used?
Is the object known?
Anyway, the 2-dof problem is trivial, but the 6-dof case is
much more than three times as difficult.

Obviously, your purpose is not to produce a working system for
a real environment, but a paper of some sort.
Thus, take a look at the following paper:
Broida, Chandrashekhar, Chellappa: Recursive 3-D Motion Estimation...
IEEE Transactions on Aerospace and Electronic Systems, vol 26, no 4, 1990.
The references give you an idea of the published work in this area;
in particular, read reference 15 (Dickmanns) and its companion article.

You should be aware that the approaches of most papers in this area
do not work very well...


From whb@castle.edinburgh.ac.uk Wed Dec 30 05:38:11 1992

I read your article with interest. We have implemented bespoke object tracking
algorithms with some success here. The objects are represented by rectangular
bounding boxes and then matched by proximity, direction, speed etc. to a
list of "historical objects". The matched objects are then fed to an object
tracking stage.
The actual tracking is simple compared to identifying the change on which
the objects should be based. A simple pixel difference is easily fooled by
changes in ambient lighting, shadows, etc., so we have devised a more
sophisticated method. I feel that this stage is much harder than the object
tracking itself.
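The matching stage described above can be sketched as a greedy nearest-centroid assignment. This is only an illustration, not the poster's actual system: the function names and distance threshold are invented, and it uses proximity alone, ignoring the direction and speed cues they mention.

```python
import math

def match_boxes(history, detections, max_dist=50.0):
    """Greedily match new detections to tracked objects by centroid distance.

    history and detections are lists of (x, y, w, h) bounding boxes.
    Returns (history_index, detection_index) pairs; unmatched objects
    and detections are simply left out.
    """
    def centroid(box):
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    pairs = []
    used = set()
    for hi, hbox in enumerate(history):
        hx, hy = centroid(hbox)
        best, best_d = None, max_dist
        for di, dbox in enumerate(detections):
            if di in used:
                continue
            dx, dy = centroid(dbox)
            d = math.hypot(dx - hx, dy - hy)
            if d < best_d:
                best, best_d = di, d
        if best is not None:
            used.add(best)
            pairs.append((hi, best))
    return pairs
```

A real system would add gating on predicted direction and speed, as the poster describes, rather than raw proximity only.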


From makrisna@convex1.TCS.Tulane.EDU Wed Dec 30 09:07:50 1992

A better newsgroup would be comp.ai.vision, but I happen to have a lot of
references on this subject. A good place to start would be IEEE PAMI. There
are in general two approaches: the first is based on computing the
optical flow, and the second is a model-based approach. It will depend
on what you are trying to do. The general problem of computing position,
velocity, acceleration, etc. for all six degrees of freedom is quite
difficult, but if you are just interested in the x, y, and z position of
a simple polygon or polyhedron, there are quite a few ways of doing it.
I am trying to locate a survey on this subject by Thompson in one of the
PAMI issues; I will send you the reference as soon as I find it. If you
come across work done by Aggarwal at UT Austin, Huang at Illinois, Horn
at MIT, and Chellappa and Broida at USC, that will be a good start. Best of
luck. This is a friend's account, but you can send any further questions
here.

From olli@ee.oulu.fi Wed Dec 30 09:31:40 1992

>>My follow-up clarification via email:
> I only need two directions --
> the horizontal component parallel to the image plane and the component
> in the direction of the camera

You are going to find m-a-n-y references!
And your problem can be solved quite
straightforwardly and reliably (as
long as the camera is stationary with
respect to the background).
A good source to start is the
proceedings of the IEEE workshop on
visual motion, 1991.
(I don't want to give overly precise
pointers, as this seems to be some kind
of student project. However, simple
ideas work best...).

And there are (almost?) working systems for your
purpose. One of the best I have seen is made by
Imago Machine Vision Inc,
1750 Courtwood Crescent, Suite 300
Ottawa, Ontario K2C 2B5 Canada
Fax: (613) 226-7743
Tel: (613) 226-7890

(you could ask for their brochure and
video tape)

From tom@vexcel.com Wed Dec 30 10:49:11 1992

I have been working on a project to track arctic ice in image
pairs for about 3 years now. We have developed an automated system
to perform this task.

The algorithms that we use for tracking the ice come in two
variations. The first is a Psi-S algorithm, which
first thresholds the images, finds the boundaries of features
within the thresholded images, and then plots the feature boundaries
using a Psi-S convention. The rigid bodies can be found by correlating
the features' Psi-S curves, which will match in shape but have an
amplitude offset that indicates rotation of the feature.
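For readers unfamiliar with the Psi-S convention: the boundary is replotted as tangent angle psi versus arc length s, so a rigid rotation appears as a constant offset between two curves. A minimal sketch (function names are my own; it assumes the two boundaries are sampled at corresponding points):

```python
import numpy as np

def psi_s(boundary):
    """Tangent angle psi as a function of arc length s along a closed boundary.

    boundary: (N, 2) array of ordered boundary points.
    Returns (s, psi), with psi unwrapped to avoid 2*pi jumps.
    """
    b = np.asarray(boundary, dtype=float)
    edges = np.diff(np.vstack([b, b[:1]]), axis=0)       # closed-contour edge vectors
    psi = np.unwrap(np.arctan2(edges[:, 1], edges[:, 0]))
    s = np.concatenate([[0.0],
                        np.cumsum(np.hypot(edges[:, 0], edges[:, 1]))[:-1]])
    return s, psi

def estimate_rotation(boundary_a, boundary_b):
    """If both boundaries are the same rigid shape sampled at corresponding
    points, their psi-s curves differ by a constant equal to the rotation."""
    _, pa = psi_s(boundary_a)
    _, pb = psi_s(boundary_b)
    return np.mean(pb - pa)
```

Matching arbitrary features, as in the poster's system, would additionally require correlating the curves to find the correspondence before reading off the offset.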

The second (and more commonly used) technique is an area
correlation algorithm, which finds correlation peaks between
patches of the two images. This technique breaks down when
there is a lot of rotation of the features.
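The area-correlation idea can be sketched as an exhaustive search for the normalized cross-correlation peak over a small window. Names and parameters here are illustrative, not taken from the poster's system; as they note, this handles translation but not large rotations.

```python
import numpy as np

def track_patch(img1, img2, top, left, size, search=10):
    """Locate a patch from img1 in img2 by the peak of normalized
    cross-correlation over a (2*search+1)^2 window of displacements.

    Returns the best (dy, dx) displacement.
    """
    patch = img1[top:top + size, left:left + size].astype(float)
    patch = patch - patch.mean()
    pnorm = np.sqrt((patch ** 2).sum()) + 1e-12
    best, best_score = (0, 0), -2.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if t < 0 or l < 0 or t + size > img2.shape[0] or l + size > img2.shape[1]:
                continue  # candidate window falls outside the image
            cand = img2[t:t + size, l:l + size].astype(float)
            cand = cand - cand.mean()
            cnorm = np.sqrt((cand ** 2).sum()) + 1e-12
            score = (patch * cand).sum() / (pnorm * cnorm)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

Subtracting each window's mean makes the score insensitive to uniform brightness shifts, which is one reason area correlation is preferred over a raw sum of products.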

From hallinan@hrl.harvard.edu Wed Dec 30 12:14:21 1992

Horn and Schunck have an article in "Artificial Intelligence",
1981, that is probably the basic reference for computing optical flow,
the first step in your project.
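For reference, the Horn & Schunck (1981) scheme can be sketched in a few lines: alternate between averaging the flow field and correcting it along the brightness gradient. This is a simplified illustration (periodic boundary handling via np.roll, fixed iteration count), not production code.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal Horn & Schunck optical flow between two frames.

    Iteratively solves for a smooth flow field (u, v) satisfying the
    brightness-constancy constraint Ix*u + Iy*v + It = 0, with
    smoothness weight alpha.
    """
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    Ix = np.gradient(I1, axis=1)   # spatial derivatives
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # 4-neighbour average of the current flow estimate
        ubar = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1))
        vbar = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                       np.roll(v, 1, 1) + np.roll(v, -1, 1))
        # correct the averaged flow along the brightness gradient
        t = (Ix * ubar + Iy * vbar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = ubar - Ix * t
        v = vbar - Iy * t
    return u, v
```

On a linear intensity ramp shifted by one pixel, the estimate converges to a uniform unit flow, which is an easy sanity check for an implementation.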

From danm@cs.ubc.ca Wed Dec 30 15:32:20 1992

(Quoting a previous request for references)

Date: Fri, 10 Apr 92 10:35:34 SST
From: atreyi@iss.nus.sg (Atreyi Kankanhalli)
Subject: Object Tracking References

I had asked for references on "Object Tracking in Image Sequences" a while
ago on this list. I am now posting a compiled set of references which I
gathered from the responses.

***** References on object tracking *****

1. S.M. Haynes, Ramesh Jain, "A Qualitative Approach for Recovering Relative
Depths in Dynamic Scenes", Proc. of Workshop on Computer Vision, Miami Beach,
FL, Nov.30-Dec.2, 1987.

2. I.K. Sethi, Ramesh Jain, "Finding Trajectories of Feature Points in a
Monocular Image Sequence", IEEE Trans. on PAMI, Vol.9, No.1, 1987, pp.56-73.

3. Michal Irani, Benny Rousso, Shmuel Peleg, "Detecting and Tracking Multiple
Moving Objects Using Temporal Integration", to appear in European Conference
on Computer Vision, 1992.

4. I.K. Sethi, H. Cheung, N. Ramesh, Y.K. Chung, "Automatic Detection of Motion
of Interest for Surveillance", Proc. International Conference on Automation,
Robotics and Computer Vision, Sept. 1990, pp.227-231.

I found some additional references in the two volumes

"Computer Vision: Principles" and "Computer Vision: Advances and Applications"
ed. Rangachar Kasturi, Ramesh Jain, IEEE Computer Society Press Tutorial, 1991.

I would appreciate any updates to this list.

Atreyi Kankanhalli
Institute of Systems Science
National University of Singapore
Kent Ridge, Singapore 0511

Email: atreyi@iss.nus.sg


----------------------------------------------------------------------

My supervisor David Lowe has published his work on model-based motion
tracking. See "Fitting Parameterized Three-Dimensional Models to Images",
David G. Lowe, in IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol.13,No.5,May 1991.

Donald Gennery also published results for model-based tracking in
"Visual Tracking of Known Three-Dimensional Objects", International Journal
of Computer Vision,7:3, 243-270 (1992).

See also Azriel Rosenfeld's survey of computer vision published annually
in CVGIP:Image Understanding, usually in May.


Hope this helps.

Dan McReynolds
University of British Columbia
Dept. of Computer Science

From: spl@szechuan.ucsd.edu (Steve Lamont)

See Jain, Anil K., _Fundamentals of Digital Image Processing_, ISBN 0-13-336165-9.

Jain covers this subject quite well in Chapter 9, "Image Analysis and
Computer Vision." See, in particular, Section 9.12, "Scene Matching and
Detection," pp. 400-406.

I use an adaptation of the correlational techniques described to track
objects (cells) through a series of images at reasonable rates. The
technique works fairly well even when the cells are undergoing
moderately radical morphological changes.


From vision@iro.umontreal.ca Mon Jan 4 09:36:13 1993

I am working on an application of an optical flow algorithm to
tracking non-rigid coronary artery bifurcations. I found Section
9.12 of Jain's "Fundamentals of Digital Image Processing", labeled
"Scene Matching and Detection", helpful.


From paik@mlo.dec.com Mon Jan 4 17:05:55 1993

Possibly this bibliography may be of use (from a project in motion
tracking from a computer vision class).

[Anandan89] P. Anandan. A computational framework and an algorithm for
the measurement of visual motion. International Journal of Computer
Vision, 2(3):283-310, January 1989.

[Braccini86] C. Braccini, G. Gambardella, A. Grattarola, L. Massone,
P. Morasso, G. Sandini, and M. Tistarelli. Object reconstruction from
motion: comparison and integration of different methods. Proceedings
of the International Workshop on Time-Varying Image Processing and
Moving Object Recognition, September 1986.

[Cornelius83] N. Cornelius and T. Kanade. Adapting optical-flow to
measure object motion in reflectance and X-ray image sequences.
Technical Report CMU-CS-83-119, Carnegie Mellon University, 1983.

[Horn81] B. K. P. Horn and B. G. Schunck. Determining Optical Flow.
Artificial Intelligence, 17:185-203, 1981.

[Lucas81] B. D. Lucas and T. Kanade. An iterative image registration
technique with an application to stereo vision. Proceedings of the
7th International Joint Conference on Artificial Intelligence, 1981.

[Rehg91] J. M. Rehg and A. P. Witkin. Visual tracking with
deformation models. Proceedings of the IEEE Conference on Robotics
and Automation, April 1991.

[Tomasi91] C. Tomasi and T. Kanade. The factorization method for the
recovery of shape and motion from image streams. DARPA Image
Understanding Workshop, 1991.

[Tomasi91b] C. Tomasi. Personal communication, November 1991.

[Tomasi92] C. Tomasi and T. Kanade. Selecting and tracking features
for image sequence analysis. Submitted to Robotics & Automation,
1992.


From donohoe@jemez.eece.unm.edu Tue Jan 5 10:07:34 1993

I've done some work in object tracking and can send you some references.


From mww@eng.cam.ac.uk Wed Jan 6 09:21:30 1993

One popular technique is called active contours or "snakes".
The seminal reference would be:
Kass, Witkin, Terzopoulos
Snakes: Active contour models
1st Int. Conf. on Computer Vision, 1987, pp. 259-268

Check out simpler and more computationally efficient
implementations using B-splines, e.g.:

Curwen, Blake and Cipolla
Parallel implementation of Lagrangian dynamics for real-time
snakes.
British Machine Vision Conference, 1991, pp. 29-35
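As an illustration of the snake idea, here is a simplified greedy variant in the spirit of Kass et al. (not their variational formulation): each control point repeatedly moves to the neighbouring pixel that best trades off contour continuity against image (edge) energy. Function names and weights are my own.

```python
import numpy as np

def greedy_snake(image, contour, alpha=0.5, n_iter=50):
    """Greedy active contour: move each (y, x) control point to the
    8-neighbour position minimizing continuity energy (squared distance
    to its two neighbours on the contour) plus image energy (negative
    squared gradient magnitude, so strong edges attract the snake)."""
    gy, gx = np.gradient(image.astype(float))
    edge_energy = -(gx ** 2 + gy ** 2)
    pts = np.array(contour, dtype=int)
    h, w = image.shape
    for _ in range(n_iter):
        moved = False
        for i in range(len(pts)):
            prev, nxt = pts[i - 1], pts[(i + 1) % len(pts)]
            best, best_e = pts[i].copy(), None
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    y, x = pts[i][0] + dy, pts[i][1] + dx
                    if not (0 <= y < h and 0 <= x < w):
                        continue
                    cont = ((y - prev[0]) ** 2 + (x - prev[1]) ** 2 +
                            (y - nxt[0]) ** 2 + (x - nxt[1]) ** 2)
                    e = alpha * cont + edge_energy[y, x]
                    if best_e is None or e < best_e:
                        best_e, best = e, np.array([y, x])
            if (best != pts[i]).any():
                pts[i] = best
                moved = True
        if not moved:
            break
    return pts
```

With no image gradient the continuity term dominates and the contour simply contracts, which is the expected degenerate behaviour of a snake without an external force.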

Other techniques:
The simplest would probably be frame differencing
(assuming a static camera), autocorrelation, or
corner tracking.
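The frame-differencing baseline mentioned above is only a few lines, assuming a static camera (names here are illustrative; as noted earlier in the thread, this is easily fooled by lighting changes and shadows):

```python
import numpy as np

def moving_mask(frame1, frame2, thresh=25):
    """Flag pixels whose absolute intensity change exceeds a threshold.
    Only sensible when the camera is stationary with respect to the
    background."""
    diff = np.abs(frame2.astype(int) - frame1.astype(int))
    return diff > thresh

def bounding_box(mask):
    """Bounding box (top, left, bottom, right) of the changed pixels,
    or None if nothing moved."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()
```

The resulting box could then feed a matching stage like the one described in the earlier bounding-box reply.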


---------------------------------------------------------------------------

Thanks to everyone who contributed ideas and references. They
were very helpful!



--
Stan Reeves
Auburn University, Department of Electrical Engineering, Auburn, AL 36849
INTERNET: sjreeves@eng.auburn.edu