I generally agree with Rick Hinrichs' assessment. The calibration of
individual cameras is a critical step to achieving the accuracy necessary
to make possible automatic 3D tracking from unidentified video camera
coordinates. The 11 parameter standard DLT is generally not capable of
correcting the distortions present in standard video hardware, and there
have been few implementations of the classical DLT with more than 11
parameters. In the past, a number of people (or should I say companies) have
tried to use DLT calibrations and then do automatic marker tracking
(hands-off identification of camera images by searching for ray
intersections in space) with little success. Now they all use some form of
individual camera correction in addition to the DLT or other method.
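The ray-intersection search mentioned above comes down, for any pair of cameras, to finding the closest approach of two 3D rays: if the gap is small, the two image points are likely views of the same marker. A minimal sketch of that geometric step (a hypothetical helper for illustration, not any particular commercial system's code):

```python
def closest_approach(o1, d1, o2, d2):
    """Closest points on two rays p(t) = o + t*d.

    Returns the midpoint (a candidate 3D marker position) and the gap
    between the rays; a small gap suggests the two camera image points
    belong to the same marker.
    """
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b              # zero for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = tuple(o + t1 * k for o, k in zip(o1, d1))
    p2 = tuple(o + t2 * k for o, k in zip(o2, d2))
    mid = tuple((x + y) / 2 for x, y in zip(p1, p2))
    gap = dot(sub(p1, p2), sub(p1, p2)) ** 0.5
    return mid, gap
```

Any calibration error in a camera's position or orientation inflates the gap, which is why uncorrected distortion defeats this kind of hands-off matching.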
Jesus Dapena's NLT technique, and my "wand" technique, which is in
principle similar, do not suffer the model inaccuracies of the standard
DLT, i.e., the physical parameters such as the camera rotation matrix are
not compromised. Both of these methods do away with the need for an
accurately measured set of control points, which, if you have ever tried to
build or measure one, invariably presents considerable difficulties.
However, to achieve any degree of accuracy, the camera internal parameters
must still be determined by a separate procedure. I believe that the
ultimate calibration
method will also solve for the internal camera parameters (including lens
distortions) using just the data from moving markers imaged by all cameras.
The effects of such a solution will be more far-reaching than the
elimination of the step required to determine the internal camera
parameters that include distortion. It will allow the determination of
these parameters at the distances in space where we are making
measurements, rather than at some convenient distance we might place our
linearization grid or other instrument for measuring distortions, etc.
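For concreteness, the simplest internal-parameter model one might solve for is a single radial distortion coefficient about the principal point. This is the textbook one-term radial model, offered purely as an illustrative assumption, not the specific scheme I have in mind:

```python
def apply_radial(u, v, uc, vc, k1):
    """Map ideal image coordinates (u, v) to distorted ones with a
    one-term radial model about the principal point (uc, vc).

    The sign convention (k1 > 0 vs. k1 < 0 for pincushion vs. barrel)
    is an assumption of this sketch; real lenses typically need more
    terms than this.
    """
    du, dv = u - uc, v - vc
    r2 = du * du + dv * dv           # squared radius from the centre
    scale = 1.0 + k1 * r2
    return uc + du * scale, vc + dv * scale
```

With k1 = 0 the mapping is the identity; a self-calibrating solution would estimate k1 (and the other internal parameters) from the marker data themselves, at the working distances, rather than from a grid placed at some convenient distance.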
Recent improvements in accuracy and resolution of 3D video measurement
systems are pushing the limits of the currently implemented analysis
software, and more sophisticated approaches are needed. I have recently
been devoting some time to this problem.
But perhaps I have digressed. I just thought that Biomech-L readers might
be interested in things yet to come. Finally, I might beg to differ with
the statement by Rick that "the vast majority of researchers who do 3-D
motion analyses today use the DLT (not these other methods)". I do not
believe that anybody has real figures, but many commercial systems out
there use alternative approaches to the standard DLT.
Andy Dainis