Below is my original question, responses from BIOMCH-L and finally our own
findings (very briefly) relating to camera positioning. I would like to
thank everyone who responded.
===============Original Question==========================================
We are doing some 3D video work using two cameras that requires the cameras
to be unusually close together, i.e., about 30 degrees apart. I can find
plenty of literature and manuals stating that the cameras should be 90
degrees apart, or between 60 and 120 degrees apart, but all of these appear
to be rules of thumb. I can't find any literature that has systematically
measured what happens at various angles, and how far you can go as the sine
of the angle between the cameras approaches zero.
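To make the geometry concrete, here is a rough numerical sketch of my own
(two ideal pinhole cameras viewing a single point, with each ray perturbed
by a small assumed angular digitising error); it shows how quickly the
triangulated error grows as the included angle shrinks:

import numpy as np

def rotate(v, a):
    """Rotate a 2-D vector v by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

def intersect(c1, d1, c2, d2):
    """Intersection of the 2-D lines c1 + t1*d1 and c2 + t2*d2."""
    t = np.linalg.solve(np.column_stack((d1, -d2)), c2 - c1)
    return c1 + t[0] * d1

r = 5.0                    # camera-to-target distance in metres (assumed)
eps = np.radians(0.05)     # angular digitising error per ray (assumed)

for theta in (90, 60, 30, 15):
    half = np.radians(theta) / 2.0
    # Cameras placed symmetrically about the depth axis; target at origin.
    c1 = r * np.array([np.sin(half), -np.cos(half)])
    c2 = r * np.array([-np.sin(half), -np.cos(half)])
    # Perturb the two rays in opposite senses (worst case for depth).
    p = intersect(c1, rotate(-c1, eps), c2, rotate(-c2, -eps))
    print(f"{theta:3d} deg apart: error = {np.linalg.norm(p) * 1000:6.1f} mm")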
While we are testing our specific 30 degree problem, if anyone has any
experimental information relating to this topic or can lead me to some
articles I'd be grateful if you could contact me.
I will put useful responses on BIOMCH-L for your perusal.
Thanks
Russell
===============BIOMCH-L Response==========================================
From YUB@rcf.mayo.edu Tue Nov 1 03:23:14 1994
Hi,
When we were testing a special DLT procedure with panning cameras (J.
Biomechanics, 26: 741-751, 1993), the angle between the two cameras was
about 15 degrees, and the mean calibration errors were lower than 20 mm. In
our application of this procedure, we tried to maximize the angle between
the two cameras. The maximum angle we had was about 30 degrees, and the
mean calibration errors were about 10 mm. If you are going to use the DLT
procedure for stationary cameras, the mean calibration error could be well
below 10 mm if the angle between the two cameras is no less than 30 degrees.
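For reference, the reconstruction step of the stationary-camera DLT is a
small linear least-squares solve. A minimal sketch (assuming the 11 DLT
parameters L1..L11 of each camera are already known from calibration; this
is an illustration, not our laboratory code):

import numpy as np

def dlt_reconstruct(dlt_params, image_points):
    """Least-squares (x, y, z) from two or more calibrated cameras.

    dlt_params   -- one 11-element array (L1..L11) per camera
    image_points -- one digitised (u, v) pair per camera
    """
    rows, rhs = [], []
    for L, (u, v) in zip(dlt_params, image_points):
        # The DLT model is
        #   u = (L1*x + L2*y + L3*z + L4) / (L9*x + L10*y + L11*z + 1)
        #   v = (L5*x + L6*y + L7*z + L8) / (L9*x + L10*y + L11*z + 1)
        # which rearranges into two equations linear in (x, y, z):
        rows.append([L[0] - u*L[8], L[1] - u*L[9], L[2] - u*L[10]])
        rows.append([L[4] - v*L[8], L[5] - v*L[9], L[6] - v*L[10]])
        rhs += [u - L[3], v - L[7]]
    xyz, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return xyz

With two cameras this gives four equations in three unknowns; adding
cameras simply adds rows.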
Good luck.
Bing Yu, Ph.D.
Orthopedic Biomechanics Laboratory
Mayo Clinic
Rochester, MN 55905
U.S.A.
--------------------------------------------------------------
From rosa@cs.indiana.edu Tue Nov 1 03:23:27 1994
I am a Ph.D. student and I have considerable experience with 3D data
collection (my master's degree is in biomechanics). We are currently
collecting data with cameras at about 50 degrees. The main problem is
that the error in the "depth" direction is twice as large as in the other
two directions. My guess is that as you close the angle further, the
error in the "depth" direction becomes larger and larger.
In our case, the "depth" direction has little relevance for the type of
data we will be looking at. However, if that is also your case, you may
want to consider collecting 2D data instead.
Sincerely,
Rosa M. Angulo-Kinzler
--------------------------------------------------------------
From laragon@cariari.ucr.ac.cr Tue Nov 1 07:33:10 1994
I do not have any references, but I used cameras that were rather close
together in my dissertation experiments last year. The main problem
(using a Motion Analysis system) was that the algorithms for identifying
the 3-D position of some points in space got confused by the two cameras
that were closest together.
One possible solution (it worked for me): consider not only the angles
between cameras on the horizontal plane, but on other planes as well
(i.e., place one camera higher than the others, pointing at a different
vertical angle to the target).
Luis Fernando Aragon-Vargas, PhD Phone & Fax +506-227-9392
School of Physical Education e-mail: laragon@cariari.ucr.ac.cr
Universidad de Costa Rica
--------------------------------------------------------------
From @QUCDN.QueensU.CA:anglin@conn.ME.QueensU.CA Tue Nov 1 07:35:04 1994
We worked with two cameras about 50 degrees apart to monitor arm motions
for daily-living activities. We were also constrained in our angle in
order to keep the markers in view at all times for the 22 different
actions (although there was some provision for missed markers on each
frame). For the larger angles, the sine is still reasonably close to
one; as you said, this gets worse for the smaller angles. However, all
you are really concerned about is the accuracy, so if you can demonstrate
through calibration tests that your accuracy is sufficient for your
purposes, then 30 degrees should be fine. The other solution could be to
add a third (overhead) camera.
Best of luck,
Carolyn Anglin
anglin@me.queensu.ca
(P.S. The work was done at the University of British Columbia, Mechanical
Engineering).
--------------------------------------------------------------
From paul@gaitlab1.uwaterloo.ca Tue Nov 1 07:56:31 1994
Hi Russell,
There is a paper in 'Photogrammetric Engineering' by Y.I. Abdel-Aziz,
1974, pp. 1341-1346, called 'Expected Accuracy of Convergent Photos'. In it,
he discusses your problem. If you consider the axes to be Y vertical, X
horizontal, and Z toward the camera, Y accuracy stays pretty much the
same with angle (you'd expect that). Z accuracy degrades to about 1/4 of
the Y accuracy at 30 degrees between the two cameras. X accuracy is
about the same as the Y accuracy. As the angle increases, you must
compromise between X and Z accuracy.
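A crude way to reproduce those proportions is a simple
symmetric-convergence model (my own sketch, not Abdel-Aziz's derivation),
in which a unit image-plane error amplifies into X roughly as
1/(2 cos(theta/2)) and into Z as 1/(2 sin(theta/2)):

import numpy as np

# Symmetric-convergence sketch: two cameras converge at included angle
# theta, and a unit image-plane error amplifies into X and Z as below.
for theta in (90, 60, 30, 15):
    half = np.radians(theta) / 2.0
    x_amp = 1.0 / (2.0 * np.cos(half))   # across the bisector
    z_amp = 1.0 / (2.0 * np.sin(half))   # along the depth direction
    print(f"{theta:3d} deg: X x{x_amp:.2f}  Z x{z_amp:.2f}  "
          f"Z/X = {z_amp / x_amp:.1f}")

At 30 degrees this gives a Z/X ratio of about 3.7, in line with the
one-quarter figure above.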
Since the depth measurement is determined by the difference between
the images of the two cameras, it is intuitive that the more identical
the two images are (i.e., smaller angle), the more difficult it is to
extract the differences accurately. With ideal cameras there's no
problem... but with video cameras and their shortcomings there will be a
lot of trouble when you look at the difference signals.
Another paper is: N.A. Borghese and G. Ferrigno, "An Algorithm for
3-D Automatic Movement Detection by Means of Standard TV Cameras", IEEE
Transactions on Biomedical Engineering, Vol. 37, No. 12, pp. 1221-1225,
Dec. 1990. Their data suggest about 3 times the error you would get at
the optimum 90 degrees.
I would think that there should be a fair number of papers out there
by now, as this subject has pretty well been beaten to death.
-Paul
Paul J Guy work phone:519-885-1211 ext 6371
paul@gaitlab1.uwaterloo.ca home/FAX/:519-576-3090
pguy@healthy.uwaterloo.ca 64 Mt.Hope St.,Kitchener,Ontario,Canada
--------------------------------------------------------------
From egmjp@cc.flinders.edu.au Tue Nov 1 09:27:46 1994
Dear Russell
Ian Stokes at Vermont should be able to help.
I think Manohar Panjabi published something in J Biomech a few years ago.
I've only just moved to Flinders and haven't got all my papers here with me
yet but there is something in the literature on this subject that shows that
accuracy drops off if you have an angle below 60 degrees. Good luck in your
search. Please let me know what you find out.
Mark Pearcy
Associate Professor of Biomedical Engineering
Email egmjp@cc.flinders.edu.au
School of Engineering
Flinders University of South Australia
GPO Box 2100
Adelaide
South Australia 5001
Australia
Phone: (+61) 8 201 3612
Fax: (+61) 8 201 3618
--------------------------------------------------------------
From deleva@risccics.ing.uniroma1.it Tue Nov 1 10:00:01 1994
I would say that you don't need experimental data about your
specific problem. All you need is an estimate (based on the
literature, of course) of the error you can expect on each of the
two separate 2D views. You also need to determine how large
your field of view will be for each camera (in degrees). In fact,
in your case the error will be larger for points located farther
from the two cameras than the intersection of the two optical
axes (I hope I was clear enough).
Knowing the above, it is quite easy to calculate the
statistical probability of a given error, or the amount of error
associated with a given probability level (for example, 5%). The
problem should be solved, in my opinion, by calculating the error
on the horizontal plane (assuming that both your cameras are
oriented with their optical axes horizontal).
I was not fully explicit, but the sketch below should convey the idea.
Please let me know if you agree, or if you want more details.
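As a concrete version of the calculation I have in mind, here is a
Monte-Carlo sketch on the horizontal plane (the geometry and the per-ray
digitising error are assumed values, purely for illustration); the 95th
percentile of the simulated errors is the error associated with a 5%
probability level:

import numpy as np

rng = np.random.default_rng(0)

def rotate(v, a):
    """Rotate a 2-D vector v by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

def intersect(c1, d1, c2, d2):
    """Intersection of the 2-D lines c1 + t1*d1 and c2 + t2*d2."""
    t = np.linalg.solve(np.column_stack((d1, -d2)), c2 - c1)
    return c1 + t[0] * d1

theta = np.radians(30.0)   # included angle between the two cameras
r = 5.0                    # camera-to-target distance in metres (assumed)
sigma = np.radians(0.03)   # 1-sigma angular digitising error (assumed)

half = theta / 2.0
c1 = r * np.array([np.sin(half), -np.cos(half)])
c2 = r * np.array([-np.sin(half), -np.cos(half)])

# Reconstruct the target (at the origin) many times with noisy rays.
errors = np.array([
    np.linalg.norm(intersect(c1, rotate(-c1, rng.normal(0, sigma)),
                             c2, rotate(-c2, rng.normal(0, sigma))))
    for _ in range(10000)])

print(f"error at the 5% probability level: "
      f"{np.percentile(errors, 95) * 1000:.1f} mm")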
Paolo
Paolo de Leva
Istituto Superiore di Educazione Fisica
Biomechanics Lab
Via di Villa Pepoli, 4
00153 ROME
ITALY
Tel: 39-6-575.40.81
FAX: 39-6-575.40.81 (or 39-6-361.30.65)
e-mail address: DELEVA@RISCcics.ing.uniRoma1.IT
--------------------------------------------------------------
From GTSBAR@anat.uct.ac.za Tue Nov 1 23:24:01 1994
Dear Russell
Suggest you contact Harvey Mitchell at E-mail:
cehlm@cc.newcastle.edu.au.
He lives in your part of the world and is an expert on
stereo video technology (sometimes called digital photogrammetry).
Barbara
Barbara van Geems
Department of Biomedical Engineering
University of Cape Town Medical School
Observatory 7925
South Africa
Tel: (021) 406-6547
Fax: (021) 448-3291
Pmail: GTSBAR@ANAT.UCT.AC.ZA
--------------------------------------------------------------
From CRISCO@BIOMED.MED.YALE.EDU Tue Nov 1 23:30:56 1994
For two-camera systems I am pretty sure that the optimal angle is close
to 30 degrees when using a code such as the DLT for 3D reconstruction. I
cannot remember where I read this, but you might try: Marzan and Karara
(1975), A computer program for direct linear transformation of the
collinearity condition, Symposium on Close-Range Photogrammetric Systems,
July 28-Aug 1, Champaign, Illinois.
Trey Crisco
--------------------------------------------------------------
From @PSUORVM.CC.PDX.EDU:KAREN@emp.pdx.edu Wed Nov 2 15:31:48 1994
Russell,
I spent 7 years conducting 3D analyses with video and optoelectronic
cameras. If your cameras are only 30 degrees apart, you may
have a large error in the depth dimension. The two cameras
will track horizontal and vertical spatial coordinates fairly
accurately; however, look closely at the third coordinate.
Good Luck
Karen
--------------------------------------------------------------
From ferrigno@elet.polimi.it Thu Nov 3 02:58:45 1994
Please have a look at our paper:
Borghese NA, Ferrigno G - An Algorithm for 3-D Automatic Movement
Detection by Means of Standard TV Cameras - IEEE Trans. on Biomedical
Engineering, Vol. 37, No. 12, 1990.
On page 1224 we report, from numerical simulations, what happens when
varying the angle between the cameras; at 30 degrees you should expect a
decrease in depth accuracy by a factor of 2.
Nunzio Alberto Borghese
--------------------------------------------------------------
From KOLLJO1@VM.AKH-WIEN.AC.AT Fri Nov 4 19:13:31 1994
Hello from Vienna, Austria.
We are a group running a gait lab with a six-camera videometry system.
There is no way around rules of thumb for camera positioning: it takes
many trials and experience rather than a purely theoretical approach. If
you deal with two cameras, the theoretical optimum is 90 degrees between
the lens axes. This gives you the smallest 3D errors from digitisation
error and lens distortion.
But you should keep in mind that the most difficult thing to manage with
two cameras is hidden or merging markers, which results in losing the
track. So the best camera position depends on your marker set. If you
are flexible with the marker set, you should choose one where no hiding
or merging occurs for camera positions as near to 90 degrees as possible.
If, for some other reason, you are restricted to 30 degrees between the
camera axes, you have to deal with doubled 3D location errors for markers
in the depth direction (i.e., the direction your cameras look, and its
opposite).
Kollmitzer J., Errors and Practical Tests for 3D Gait Analysis System
Setups with Videometry and Force Plates; Proc. Third International
Symposium on 3-D Analysis of Human Movement, Stockholm, Sweden, 1994:49.
--------------------------------------------------------------
From fourd@crl.com Mon Dec 12 18:39:00 1994
Russell:
I addressed the topic in my doctoral dissertation ...
Walton, James S. "Close-Range Cine-Photogrammetry: A Generalized
Technique for Quantifying Gross Human Motion." Doctoral dissertation,
The Pennsylvania State University, 1981.
Jim WALTON
JAMES S. WALTON, Ph.D., President
4D VIDEO
3136 Pauline Drive
SEBASTOPOL, CA 95472
PHONE: 707/829-8883   FAX: 707/829-3527
INTERNET: Fourd@crl.com --or-- Jim.Walton@Forsythe.Stanford.Edu
BITNET: Jim.Walton@Stanford.Bitnet
COMPUSERVE: 72644,2773
===============Our Findings (in brief)====================================
We tried to break down where the errors were coming from, i.e., the
small angle between cameras, digitiser repeatability, camera resolution,
and the calibration frame.
Our greatest error initially came from the calibration frame. We were
using an open-ended frame (sometimes called a Christmas tree or sputnik)
that we bought from a motion analysis company about two years ago. The
frame was factory-measured to millimetre accuracy. When we re-measured
the frame, we found that some coordinates of some points were out by up
to 10mm. Combined with all other errors, this resulted in marker
accuracies after 3D reconstruction of up to 20-30mm (3D distance). The
markers we used were a fixed distance apart (700mm) and auto-digitised,
and the field of view was 2m.
Rather than re-measuring our calibration frame, we decided to build our
own closed (cuboid) calibration frame. We did this for three reasons.
First, the open frame would continue to deteriorate over time, whereas
the closed system was less likely to change. Second, building our own
ended up being cheaper and quicker than getting surveyors to re-measure
the open frame. Third, a closed system would be easier to re-measure at
a later date, and easier for us to control the placement of markers in
whatever orientation we needed.
We got very satisfactory results from our cuboid (we didn't compare the
cuboid with the open frame because we didn't have accurate measurements
for that system). We had a field of view of about 1.5m (about 3mm per
pixel), manual digitising, and markers a fixed distance apart; the
markers were 3mm diameter ball bearings covered with 3M reflective tape.
The reflection from each bearing was obviously much larger than one
pixel, and the centroid calculation actually improved the resolution of
the system (a sketch of this calculation follows below). The errors were
often around 2-3mm and never greater than 6mm. The 6mm error occurred
along the depth direction, which was always worse, while the other
directions showed a maximum error of 4mm. This is more than satisfactory,
and we decided not to take it any further at this stage.
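The centroid calculation is essentially an intensity-weighted average
over the pixels of the marker's reflection. A minimal sketch (the image
patch is hypothetical, and this is not our actual digitising software):

import numpy as np

def marker_centroid(image, threshold):
    """Sub-pixel marker location: intensity-weighted centroid of the
    pixels brighter than `threshold`. Returns (x, y) in pixel units."""
    ys, xs = np.nonzero(image > threshold)
    w = image[ys, xs].astype(float)
    return np.average(xs, weights=w), np.average(ys, weights=w)

# Hypothetical 7x7 grey-level patch: one marker blob, several pixels wide.
patch = np.array([
    [ 0,   0,  10,  20,  10,   0,  0],
    [ 0,  30,  90, 120,  90,  30,  0],
    [10,  90, 200, 250, 200,  90, 10],
    [20, 120, 250, 255, 250, 120, 20],
    [10,  90, 200, 250, 200,  90, 10],
    [ 0,  30,  90, 120,  90,  30,  0],
    [ 0,   0,  10,  20,  10,   0,  0],
])
print(marker_centroid(patch, threshold=50))   # -> (3.0, 3.0), sub-pixel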
A colleague of mine, Rezaul Begg (rezaul=begg@vut.edu.au), did most of this
work and is currently conducting more detailed experiments relating to camera
angles and distances from the centre of the field of view. We are
currently getting our open calibration frame re-measured because the
cuboid is not easily
transportable (it is one piece).