Software GenLock

    Hi Biomch-Ler:

I would like to hear reactions to the following scenario for synchronizing multiple video cameras. Of course, there is no question that the use of GenLock will produce the optimal results. However, for GenLock you must purchase the GenLock hardware, and you must have cameras connected with cables to VCRs on the field. I have been using this method for years. For example, at NASA we are using 5 Panasonic cameras with 5 VCRs with GenLock to send a sync signal that appears in all 5 video sequences, and the accuracy is 100 percent. A few years ago we had a discussion when I reported that even under this condition I found a "drift" between cameras after 100 fields or so had passed. This statement came under criticism, and some of the responses attributed this "drift" to other factors.

The purpose of this message is to introduce a method that we have been using lately, in which a number of camcorders can be employed, utilizing what I call "software GenLock": the vertical displacement of any object can be used in the digitizing process to synchronize the video sequences to within half a millisecond. The system works as follows:


When dealing with 2 or more unsynchronized cameras that are used to simultaneously collect video data for 3D analysis, it is possible to employ a software algorithm which will calculate the relative time offset of the cameras in question.

The simplest case to consider is a two-camera setup with a single point in the field of view. The two camera projection centers, along with the object point, define a plane. If there is considerable motion out of this plane, then any time-base error will translate into an increase in the residual when transforming to 3D, as will be shown below. One can interpolate between frames and find the time shift of one camera versus the other which minimizes the residual, yielding the "best" time offset.

Consider the example of a ball falling to the ground with two horizontally pointing cameras. Each camera determines a ray from the camera principal point through the object point. In a perfect world the two such rays, one for each camera, would intersect at the object. Now if one imagines introducing a time shift of one of the cameras, then the ray for that camera would be aiming higher or lower than the ray for the other camera, and the two lines would not intersect. Rather, there would be some "distance of closest approach" for the two lines. This distance is related to the residual in the 3D calculation. In this example, the greater the time offset, the greater this distance would be. Then, by minimizing this distance versus time shift, one can calculate the real time difference between the unsynchronized cameras.
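For a single frame, the residual is essentially this distance of closest approach between the two rays. A small helper along these lines (again a Python/NumPy illustration rather than the actual implementation; camera points and ray directions would come from the calibration) computes it:

    import numpy as np

    def closest_approach_distance(p1, d1, p2, d2):
        """Distance of closest approach between two rays, each given by a
        camera point (p1, p2) and a direction toward the digitized object
        (d1, d2).  Perfectly synchronized, error-free data would give zero."""
        d1 = np.asarray(d1, float) / np.linalg.norm(d1)
        d2 = np.asarray(d2, float) / np.linalg.norm(d2)
        n = np.cross(d1, d2)                  # common perpendicular
        n_norm = np.linalg.norm(n)
        if n_norm < 1e-12:                    # rays are (nearly) parallel
            return float(np.linalg.norm(np.cross(np.subtract(p2, p1), d1)))
        return float(abs(np.dot(np.subtract(p2, p1), n)) / n_norm)

    # Two horizontal cameras at right angles watching a falling ball: if one
    # camera samples the ball a little late, its ray aims lower and the two
    # rays no longer intersect.
    cam_a, cam_b = np.array([0.0, -5.0, 1.0]), np.array([5.0, 0.0, 1.0])
    ball_now, ball_late = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 0.9])
    print(closest_approach_distance(cam_a, ball_now - cam_a,
                                    cam_b, ball_late - cam_b))   # about 0.1 m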

Since this method relies on minimizing the residual of a point moving out of the plane defined by the cameras and the object, any other effect which has the same result will incorrectly be interpreted as a time offset between the cameras.

Possible such effects might include systematic incorrect digitizing. For example, if one camera view was systematically digitized low and the other high, one might incorrectly interpret the residual as a time shift. However, this rarely happens, and in a normal situation only random errors exist. Another example would be camera distortion. For the algorithm to work successfully, the error due to the time offset between the cameras must be larger than the other contributing errors. If one considers a ball dropped from a height of 2 m and videotaped at 60 Hz, the ball would have a velocity of 6.2 m/s, resulting in motion of 10.3 cm between frames. The motion of the ball in 0.1 to 0.2 frame times should be larger than the other errors mentioned. When one considers this residual summed over all frames, it is reasonable to be able to calculate the relative time offset of multiple cameras to 0.01 frame times.
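A quick back-of-the-envelope check of those magnitudes (a sketch assuming g = 9.8 m/s^2, which reproduces the figures above to within rounding):

    import math

    g = 9.8            # gravitational acceleration, m/s^2
    h = 2.0            # drop height, m
    rate = 60.0        # video fields per second

    v = math.sqrt(2 * g * h)       # impact velocity, about 6.3 m/s
    per_field = v / rate           # travel between fields, about 10.4 cm
    print(f"impact velocity      : {v:.2f} m/s")
    print(f"motion per field     : {per_field * 100:.1f} cm")
    print(f"motion per 0.1 field : {per_field * 10:.2f} cm")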

The more cameras one uses, the better the estimation is; for example, we used 5 cameras in the study presented at the 3D conference on Human Movement. When comparing the results to a hardware GenLock system, no differences were noticed.

This method saves approximately $10,000 in hardware and software costs.

You can download this program, which I incorporated into the Digi4.exe program, and also download data files to "play with". Of course, it is free, and you can find it on my web site at:


    http://www.arielnet.com


I would appreciate any comments about this method.

    Have a great day

    Gideon Ariel, Ph.D.





    =============================================

The world's best Motion Analysis Systems and

Computerized Exercise Systems. Download for free at:


    http://www.arielnet.com


    Gideon Ariel, Ph.D.


West Coast Headquarters:

    Ariel Dynamics, Inc.

    6 Alicante St.

    Trabuco Canyon, CA 92679 U.S.A.

    E-Mail: ariel1@ix.netcom.com

    ------------------------------

    Office in San Diego:

    Ariel Dynamics, Inc.

    4891 Ronson Court

    Suite F

    San Diego, California 92111 USA

    (619) 874-2547 Tel

    (619) 874-2549 Fax

    Email: ARIEL1@ix.netcom.com


    Web Site:
http://www.arielnet.com

    --------------------------------------------------
