Biomch-l net:
I originally asked about lens distortion correction; here are the
9 responses I received. Thank you to all who responded.
My original question was:
I am presently attempting to correct for lens distortion via a
mapping of the known coordinates in a field and their screen
coordinates. This analysis is for a video camera and a Peak
Performance system of motion analysis. I am doing the corrections
with a matrix of regression equations. Has anyone else addressed
this issue? What findings did you/they have? Any suggestions for
other methods? I see the mathematics as a topology problem. Is
this correct? Any suggestions from this approach? Of course I
will summarize and post the responses. Thank you for your help.
Jon Fewster
Biomechanics Lab
Oregon State University
fewsterj@ucs.orst.edu
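For concreteness, here is a minimal sketch of the kind of regression mapping I was describing. The second-order form and the coefficient names are only illustrative; the coefficients are assumed to come from a prior least-squares fit to (screen, field) calibration pairs and are not real values.

    /* Sketch of a regression-style correction map.  The coefficients
     * a[] and b[] are assumed to have been fitted beforehand by least
     * squares on (screen, field) calibration pairs; the names and the
     * second-order form are illustrative only. */
    #include <stdio.h>

    typedef struct { double x, y; } Point2;

    /* Apply a second-order bivariate polynomial in screen coords (u, v). */
    static Point2 correct_point(double u, double v,
                                const double a[6], const double b[6])
    {
        Point2 p;
        double terms[6] = { 1.0, u, v, u*u, u*v, v*v };
        p.x = p.y = 0.0;
        for (int i = 0; i < 6; i++) {
            p.x += a[i] * terms[i];
            p.y += b[i] * terms[i];
        }
        return p;
    }

    int main(void)
    {
        /* Hypothetical coefficients for demonstration only. */
        double a[6] = { 0.0, 1.0, 0.0, 1.0e-5, 0.0, 0.0 };
        double b[6] = { 0.0, 0.0, 1.0, 0.0, 0.0, 1.0e-5 };
        Point2 p = correct_point(320.0, 240.0, a, b);
        printf("corrected: %.3f %.3f\n", p.x, p.y);
        return 0;
    }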
Responses:
From: ATESHIAN@CUORMA.ORL.COLUMBIA.EDU
A few years ago I implemented the method suggested by Faig and
Moniwa ["Convergence Photos for Close Range", Photogrammetric
Engineering, 1973, pp. 605-610] and used it with our close-range
stereophotogrammetry system. Their equations account for radial-
symmetric and decentering lens distortions. In our application, the
calibration points were located at the periphery of the workspace, and
thus appeared at the periphery of the photographs where lens
distortion is presumably greatest. (Since we repeat the
calibration procedure for each stereogram in an experiment, it is not
possible to locate calibration points everywhere in the workspace
since it would not leave room for our test piece.) I found that
compensating for lens distortion at the periphery degraded the
coordinate accuracy of object points located more centrally in the
workspace. Therefore I only used corrections for distortions
modeled by an affine transformation. Nevertheless, in your
application, you may find that Faig and Moniwa's method might
significantly improve your results; my experience simply confirms
that calibration points used for lens distortion corrections should be
located within -- and not just around -- your workspace. I hope this
helps.
Sincerely,
Gerard A. Ateshian, Ph.D.
Dept. of Mechanical Engineering
Columbia University
New York, NY
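For readers unfamiliar with the terminology, a radial-symmetric plus decentering distortion model of the general kind referred to above can be sketched as follows. This is a generic formulation, not necessarily Faig and Moniwa's exact equations (those are in the cited paper), and the coefficient names and values are assumptions.

    /* Sketch of a generic radial-symmetric plus decentering distortion
     * model.  k1, k2 are radial coefficients, p1, p2 are decentering
     * coefficients (illustrative names and values). */
    #include <stdio.h>

    static void distort(double x, double y,            /* ideal image coords */
                        double k1, double k2,          /* radial terms       */
                        double p1, double p2,          /* decentering terms  */
                        double *xd, double *yd)        /* distorted coords   */
    {
        double r2 = x*x + y*y;
        double radial = 1.0 + k1*r2 + k2*r2*r2;
        *xd = x*radial + 2.0*p1*x*y + p2*(r2 + 2.0*x*x);
        *yd = y*radial + p1*(r2 + 2.0*y*y) + 2.0*p2*x*y;
    }

    int main(void)
    {
        double xd, yd;
        distort(1.0, 0.5, -1.0e-2, 1.0e-4, 1.0e-3, -5.0e-4, &xd, &yd);
        printf("distorted: %.5f %.5f\n", xd, yd);
        return 0;
    }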
From BXX1@PSUVM.PSU.EDU Thu Sep 22 13:30:17 1994
From: Bin XIA
The group of J. Poliner, R.P. Wilmington and G.K. Klute, which is located
in Houston, TX, had a poster at the last ASB meeting addressing that issue.
Good luck,
Bin XIA
Center for Locomotion Studies,
Penn State University
From PRC@ecl.psu.edu Thu Sep 22 13:30:25 1994
You will find a good account of higher-order effects in James S. Walton's
PhD dissertation from Penn State (circa 1976), which should be available
from Oregon Microfilms.
Peter Cavanagh
From dainis%bmlvax.dnet@dxi.nih.gov Thu Sep 22
I have been successfully using a lens correction technique for quite a
few years. I originally developed it for a Selspot II system. This same
technique has been an integral part of the AMASS software system
supplied with most VICON (Oxford Metrics Ltd.) systems since 1988.
AMASS, and the linearization technique, is also now available for
Motion Analysis Corporation systems (contact me if you need details).
The technique, which I usually call "linearization", typically
provides a tenfold improvement in accuracy, and compensates for all kinds
of systematic errors without the requirement that the errors conform to a
model (as in the DLT and its variations).
Each camera is placed in front of a linearity "grid" consisting typically of
300 points. An alignment tool enables the camera to be placed exactly on the
axis of the grid. The current implementation first does a least-squares fit
of the image points to a perfect grid, and then by 2D interpolation
determines where the perfect grid points lie with respect to the image
points. This process essentially provides a map from the image coordinates
to "true" coordinates through the use of 2D linear interpolation of the
resulting lookup table. Unfortunately, to this point in time, I have not
gotten around to publishing or writing up this information, hence I am not
in a position to supply more detailed instructions. However, you may be
assured that a correction technique such as this does provide major
improvements in system accuracy. Also, if you utilize the camera-to-grid
distance, the lookup tables implicitly contain all the internal camera
parameters.
I have also done some work using a regression-type approach.
Programming-wise, the technique is cleaner, and it will also have the effect
of smoothing the input data. However, I have not really had a chance to
implement it yet, or to test it to any degree.
Andy Dainis: dainis%bmlvax.dnet@dxi.nih.gov
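As a rough illustration of the lookup-table idea described above (the AMASS implementation itself is unpublished, so the grid size, spacing and variable names below are assumptions, not the actual code):

    /* Minimal sketch of a lookup-table linearization: correction offsets
     * measured at grid points are applied to an arbitrary image point by
     * 2D (bilinear) interpolation.  Grid size, spacing and names are
     * assumptions for illustration. */
    #include <stdio.h>

    #define NX 20
    #define NY 15
    #define SPACING 32.0   /* pixels between grid nodes (assumed) */

    /* dx[j][i], dy[j][i]: measured offset from the image point to the
     * "true" (perfect-grid) point at grid node (i, j). */
    static double dx[NY][NX], dy[NY][NX];

    static void linearize(double u, double v, double *uc, double *vc)
    {
        int i = (int)(u / SPACING), j = (int)(v / SPACING);
        if (i < 0) i = 0; if (i > NX - 2) i = NX - 2;
        if (j < 0) j = 0; if (j > NY - 2) j = NY - 2;
        double fu = u / SPACING - i, fv = v / SPACING - j;

        double cx = (1-fu)*(1-fv)*dx[j][i]   + fu*(1-fv)*dx[j][i+1]
                  + (1-fu)*fv    *dx[j+1][i] + fu*fv    *dx[j+1][i+1];
        double cy = (1-fu)*(1-fv)*dy[j][i]   + fu*(1-fv)*dy[j][i+1]
                  + (1-fu)*fv    *dy[j+1][i] + fu*fv    *dy[j+1][i+1];

        *uc = u + cx;
        *vc = v + cy;
    }

    int main(void)
    {
        double uc, vc;
        /* dx, dy are zero here; in practice they come from the grid fit. */
        linearize(100.5, 200.25, &uc, &vc);
        printf("linearized: %.3f %.3f\n", uc, vc);
        return 0;
    }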
From CRISCO@BIOMED.MED.YALE.EDU Thu Sep 22 13:30:44 1994
Jon,
If I understand your question and you are seeking 3D coordinates, I
believe that such distortion compensation is available in codes such as
the Direct Linear Transformation (Marzan GT and Karara HM, A computer
program for Direct Linear Transformation of the Collinearity Condition,
and some applications of it, Symp. on Close Range Photogrammetry Systems,
July 28-August 1, Champaign, Illinois, 1975). There is also some newer
nonlinear code but I cannot remember the references.
Good luck, Trey.
From buczek%bmlvax.dnet@dxi.nih.gov Thu Sep 22 13:31:04 1994
Advanced Biomechanics Inc.
4927 Fayann Street
Orlando, Florida 32812 USA
Telephone: (407) 384-7464
Facsimile: (407) 384-7168
September 12, 1994
Dear Jon,
My master's thesis at Indiana University may be of some help to you
in correcting lens distortions. I studied distortions and corrections
for both fixed focal length and zoom cine lenses. My thesis is
available from Microform Publications, and includes FORTRAN code
for the correction algorithms. Beyond the thesis work itself, these
routines later helped correct lens distortions present in films from the
Space Shuttle, analyzed while I was a graduate student at Penn State.
"Calculation of Distortion Parameters for Specific
Normal and Zoom Cine Lenses"
Buczek, FL
Eugene: Microform Publications, College of Health and Human
Performance, University of Oregon
1987
Please feel free to contact me if I can be of further assistance.
Sincerely,
Frank L. Buczek, Jr., Ph.D., President
BUCZEK%BMLVAX.DNET@DXI.NIH.GOV
From Tec.Serv@Latrobe.edu.au Thu Sep 22 13:31:11 1994
I have investigated lens distortion, using a system of known coordinates
in the video space and measuring a moving target coincident with them.
The results were quite good and demonstrated the classic
distortion/distance curve.
The distortion was calculated as the variation of the point measured
from a theoretical grid, expressed as a % of the distance of the point
from optical centre.
The conclusion reached was that there was less than 1% distortion
within a radius of the optical centre of the lens equal to 70% of the
centre to corner distance. Maximum distortion measured was 2.5%, in
the extreme top left corner.
Since the Peak system has a quoted error of 1% anyway, due to pixel
errors and noise in the video signal, we decided against further
correction and advised users to keep the movement within the 70%
radius circle. This is, by the way, approximately the side-to-side
distance. The circle is drawn on the face of the 21" TV that is used to
set up the cameras.
The variation from wide angle to full zoom was quite small. The 1%
error radius was similar at all lens focal lengths. Greatest error was
at full zoom (85 mm), not at wide angle as expected. The lens used was a
Panasonic 6X zoom lens on an F15 camera. This lens is designed as
a low-price lens for domestic/educational use and should not be
expected to deliver maximum performance. The lens distortion from a
better quality fixed lens would be expected to be less than 1% over
the whole field.
When assessing error using the Peak digitiser, I suggest that
experiments should be done both with and without the video recorder
in the path. I suspect that time-base errors of the VCR may
contribute to pixel jitter. These errors will be random and dependent
on tape type, temperature and wear in the VCR.
Is it really worth trying to correct for distortion? Unless there is a
need to use very wide-angle lenses or other special optics, I suspect
that other errors, predominantly digitising error, will dominate.
The search continues....
This research is continuing and was presented at a Faculty seminar in 1993.
John Yelland
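As an illustration, the distortion measure described above (deviation of a measured point from its theoretical grid position, expressed as a percentage of the point's distance from the optical centre) amounts to something like the following sketch; the numbers in the example are hypothetical.

    /* Sketch of the distortion measure described above. */
    #include <math.h>
    #include <stdio.h>

    static double distortion_percent(double mx, double my,   /* measured       */
                                     double tx, double ty,   /* theoretical    */
                                     double cx, double cy)   /* optical centre */
    {
        double dev = hypot(mx - tx, my - ty);   /* deviation from grid      */
        double r   = hypot(tx - cx, ty - cy);   /* distance from centre     */
        return (r > 0.0) ? 100.0 * dev / r : 0.0;
    }

    int main(void)
    {
        /* Hypothetical numbers: a point 200 pixels from centre, ~3.6 pixels off. */
        printf("%.2f %%\n", distortion_percent(523.0, 242.0, 520.0, 240.0,
                                               320.0, 240.0));
        return 0;
    }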
From neil@isgtec.com Thu Sep 22 13:30:50 1994
Jon,
Try looking at the Tsai correction code available free from the
visionlist. Let me know if you need other info - I have a feeling 10
other people have told you the same thing!
Neil
--
N. Glossop, Ph.D.,
ISG Technologies
Toronto, Canada
neil@isgtec.com
From neil@isgtec.com Thu Sep 22
Here is the Readme:
If it doesn't have an ftp site in it, let me know and I'll mail you the
latest. It works quite well, I might add.
Cheers,
Neil
--
N. Glossop, Ph.D.,
ISG Technologies
Toronto, Canada
neil@isgtec.com
[This is the last message]
-------------------------------------CUT HERE---------------------------------
From: Reg Willson
Subject: Camera Calibration using Tsai's Method - revision 2.1
This revision includes a fully self contained implementation of Roger
Tsai's camera calibration algorithm using *public domain* MINPACK
optimization routines (the code may also be built using commercial
IMSL optimization routines). Also included is a fix for a bug that
reduced the accuracy of full coplanar calibration and increased the
convergence time of full non-coplanar calibration. Finally, generic
macros have been added for three less common math routines used in
the code.
Thanks to Torfi Thorhallsson (torfit@verk.hi.is) at the University of
Iceland who provided the self contained MINPACK version of the code.
Torfi also identified the coplanar calibration bug.
Thanks also to Frederic Devernay
who also submitted a unified MINPACK/IMSL/NAG version of the
calibration code. Future code revisions will likely use Fred's macros
for isolating and simplifying the interfaces to the various
optimization packages. Also in
the works is a PC compatible version of the code.
Comments, suggestions, and bug reports can be directed to me, Reg Willson.
Reg Willson, 04-Jun-94
--------------------------------------------------------------------------------
From: Reg Willson
Subject: Camera Calibration using Tsai's Method - revision 2.0
This software release represents a set of updates to the software
placed in the VISLIST ARCHIVE in 1993 by Jon Owen. The release
contains a bug fix, improvements to several routines, and new code for
exterior orientation calibration. The code should also be much easier
to compile than the previous release.
The bug fix occurs in the routines ncc_compute_R and
ncc_compute_better_R. In the corrected routines the r4, r5, and r6
terms are not divided by cp.sx. This bug was reported by Volker
Rodehorst.
Included in this release is Frederic Devernay's
significantly improved routine for
converting from undistorted to distorted sensor coordinates. Rather
than iteratively solving a system of two non-linear equations to
perform the conversion, the new routine algebraically solves a cubic
polynomial in Rd (using the Cardan method).
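For readers who want to see the idea, here is a sketch of solving a cubic in Rd with Cardano's method for a single-term radial model of the kind Tsai uses, Ru = Rd*(1 + kappa1*Rd^2). This is an illustration only, not the routine shipped in the package, and the sample values are hypothetical.

    /* Sketch: given the undistorted radius Ru and kappa1, solve
     *     kappa1*Rd^3 + Rd - Ru = 0
     * (a depressed cubic) directly with Cardano's formula. */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    static double rd_from_ru(double ru, double kappa1)
    {
        if (kappa1 == 0.0 || ru == 0.0)
            return ru;                      /* no radial distortion */

        /* t^3 + p*t + q = 0  with  p = 1/kappa1,  q = -ru/kappa1 */
        double p = 1.0 / kappa1;
        double q = -ru / kappa1;
        double disc = q*q/4.0 + p*p*p/27.0;

        if (disc >= 0.0) {                  /* one real root */
            double s = sqrt(disc);
            return cbrt(-q/2.0 + s) + cbrt(-q/2.0 - s);
        } else {                            /* three real roots: take the
                                               one closest to ru */
            double r = sqrt(-p*p*p/27.0);
            double phi = acos(-q / (2.0*r));
            double m = 2.0 * cbrt(r);
            double best = ru, dbest = 1e30;
            for (int k = 0; k < 3; k++) {
                double t = m * cos((phi + 2.0*M_PI*k) / 3.0);
                if (fabs(t - ru) < dbest) { dbest = fabs(t - ru); best = t; }
            }
            return best;
        }
    }

    int main(void)
    {
        double kappa1 = -0.05, ru = 1.05;   /* hypothetical values */
        double rd = rd_from_ru(ru, kappa1);
        printf("Rd = %.6f  (check Ru = %.6f)\n",
               rd, rd * (1.0 + kappa1*rd*rd));
        return 0;
    }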
This release also contains improved routines for calculating
calibration error statistics, including the new routines:
object_space_error_stats ()
and
normalized_calibration_error ()
The first routine calculates the statistics for the magnitude of the
distance of closest approach (i.e. 3D error) between points in object
space and the line of sight formed by back projecting the measured 2D
coordinates out through the camera model. The second routine is based
on an error measure proposed by Weng in IEEE PAMI, October 1992.
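As a sketch of what the first error measure computes, the distance of closest approach between an object-space point and a back-projected line of sight can be written as follows; the function and variable names are illustrative, not those of the package.

    /* Sketch: distance of closest approach between an object-space point P
     * and the ray through camera position C with unit direction D. */
    #include <math.h>
    #include <stdio.h>

    static double point_to_ray_distance(const double P[3],
                                        const double C[3],
                                        const double D[3])   /* unit vector */
    {
        double v[3] = { P[0]-C[0], P[1]-C[1], P[2]-C[2] };
        double t = v[0]*D[0] + v[1]*D[1] + v[2]*D[2];   /* projection onto ray */
        double e[3] = { v[0]-t*D[0], v[1]-t*D[1], v[2]-t*D[2] };
        return sqrt(e[0]*e[0] + e[1]*e[1] + e[2]*e[2]);
    }

    int main(void)
    {
        double P[3] = { 1.0, 2.0, 10.0 };
        double C[3] = { 0.0, 0.0, 0.0 };
        double D[3] = { 0.0, 0.0, 1.0 };   /* ray looking straight down +Z */
        printf("3D error = %.4f\n", point_to_ray_distance(P, C, D));
        return 0;
    }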
Finally this release contains new checks for coordinate handedness
problems in the calibration data.
This release uses optimization routines from the IMSL commercial
software package. An updated version of the code set up for the NAG
commercial software package will hopefully be available soon. Bug
reports can be directed to either Jon or myself.
Reg Willson, 17-Feb-94
--------------------------------------------------------------------------------
From: jcowen@cs.utah.edu
Subject: Camera Calibration using Tsai's Method
Several months ago, I posted to the vision list asking for camera
calibration help. One response led me to contact Reg Willson, who was
extremely helpful, and sent me his implementation
of Roger Tsai's calibration algorithm. He said I could re-distribute it
as needed, and we've made it ftp'able from cs.utah.edu. It's in the
pub/ReverseEngineering/src/CameraCalibration directory.
I'd like to get bug reports, so I can filter them and pass them on to
Reg. Also, if anybody replaces the IMSL stuff w/public domain
routines, I'd like to know.
Thanks,
Jon
--------------------------------------------------------------------------------
The code in this directory includes an implementation of Roger Tsai's
camera calibration algorithm. Tsai's algorithm is documented in
several places including:
"A Versatile Camera Calibration Technique for High-Accuracy 3D
Machine
Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", Roger
Y. Tsai,
Radiometry -- (Physics-Based Vision), L. Wolff, S. Shafer, G. Healey,
eds.,
Jones and Bartlett, 1992,
"A versatile Camera Calibration Technique for High-Accuracy 3D
Machine
Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", Roger
Y. Tsai,
IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, August
1987,
pages 323-344.
"An Efficient and Accurate Camera Calibration Technique for 3D
Machine
Vision", Roger Y. Tsai, Proceedings of IEEE Conference on Computer
Vision
and Pattern Recognition, Miami Beach, FL, 1986, pages 364-374.
Note that these routines use the IMSL library, which is available at
many institutions. Many of the IMSL routines are derived from routines
that are available as public domain software. If you do not have IMSL,
your best bet is to get them and modify them for use here. If you do
this, we would like a copy to place here so this code can become more
generic.
--------------------------------------------------------------------------------
The actual calibration code is contained in the files:
camera_calibration.c
camera_calibration.h
extrinsic_calibration.c
cc_utils.c
matrix.c
The camera_calibration.c and camera_calibration.h files contain
macros to substitute more common (but less efficient) math routines
for sincos() and cbrt(), should your math library not include them.
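As an illustration of the kind of substitution meant here (the actual macros are in camera_calibration.h; the ones below are generic fallbacks written for this summary, not copies from the package):

    /* Generic fallbacks for math libraries lacking sincos() and cbrt(). */
    #include <math.h>
    #include <stdio.h>

    #define SINCOS(a, s, c)  do { (s) = sin(a); (c) = cos(a); } while (0)
    #define CBRT(x)          (((x) >= 0.0) ? pow((x), 1.0/3.0) \
                                           : -pow(-(x), 1.0/3.0))

    int main(void)
    {
        double s, c;
        SINCOS(0.5, s, c);
        printf("sin=%.4f cos=%.4f cbrt(-8)=%.4f\n", s, c, CBRT(-8.0));
        return 0;
    }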
The subdirectory minpack contains the fortran source code for the
MINPACK routines:
dpmpar.f
enorm.f
fdjac2.f
lmdif.f
lmpar.f
qrfac.f
qrsolv.f
Five test programs that make use of the calibration code are:
c_cal.c basic coplanar calibration
c_cal_fo.c coplanar calibration, full optimization
n_cal.c basic noncoplanar calibration
n_cal_fo.c noncoplanar calibration, full optimization
ep_cal.c extrinsic parameter calibration
Two programs are included to generate synthetic data to test the
calibration code:
c_synthetic.c      coplanar calibration data generation program
n_synthetic.c      noncoplanar calibration data generation program
gasdev.c           random number routine for above
Four test data files are also included:
c_test.cd.data     synthetic coplanar test data
n_test.cd.data     synthetic noncoplanar test data
c_test.cpcc.data   calibrated camera model from coplanar test data
n_test.cpcc.data   calibrated camera model from noncoplanar test data
Three log files illustrate the output of the synthetic data
generation programs and the results for each of the
calibration programs:
cc.log coplanar calibration log
nc.log noncoplanar calibration log
ep.log exterior orientation calibration log
Two makefiles are provided to build the code:
makefile.IMSL to use with the IMSL optimization routines
makefile.MINPACK to use with the MINPACK optimization routines
Three other miscellaneous test programs:
xfd_to_xfu.c       convert from distorted to undistorted image coordinates
world_to_image.c convert from 3D world to 2D image coordinates
image_to_world.c convert from 2D image to 3D world coordinates
All of the above files are contained in the compressed tar file
Tsai-method-v2.1.tar.Z. To extract the files, use the commands:
uncompress Tsai-method-v2.1.tar.Z
tar xvf Tsai-method-v2.1.tar
Please feel free to redistribute the code.