2D Panning Algorithm Summary

Collapse
This topic is closed.
X
X
 
  • Filter
  • Time
  • Show
Clear All
new posts

  • 2D Panning Algorithm Summary

    Following is a summary of replies from my 2D panning algorithm query last
    week. I am grateful for all of the responses I received.

    Original Message:
    I am in the process of developing a system to determine a T&F athlete's
    foot position along the runway using a single panning camera. The system
    would be used for competition analysis and hence must be non-invasive
    (i.e., it cannot place any markings on the runway during competition).
    The system will have to calculate the foot position accurately along the
    length of the runway irrespective of where the foot lands across the
    width of the runway. Panning angle information would be available via a
    potentiometer in the tripod, which can be pre-calibrated with markers on
    the runway prior to competition. I am looking for a 2D algorithm or
    method of calibration to define the object space along the length of the
    runway, so that the angle information from the tripod and the digitised
    coordinates of the foot can be combined with the previously collected
    calibration file. The only 2D panning algorithm that I can find is from
    Chow, J. (International Journal of Sport Biomechanics, 1987, Vol. 3,
    pp. 110-127), although this requires markers along the runway during
    competition. I am also aware of systems that use an instrumented tripod
    to provide real-time analysis (such as the one used in swimming at the
    Australian Institute of Sport); however, these require the athlete to be
    directly in the centre of the screen during the whole pan. I have also
    searched the Biomech-L archives for 2D panning algorithms without
    success.
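
    For illustration only, here is a minimal sketch of the potentiometer
    pre-calibration step described above: pot readings are recorded while
    the camera is aimed at surveyed runway markers before competition, and a
    low-order fit maps raw readings to panning angles. All names, numbers
    and the near-linear-pot assumption are mine, not part of any existing
    system.

        import numpy as np

        # Pot readings (e.g. ADC counts) recorded while sighting each marker,
        # and the surveyed panning angle (radians) from the tripod to that marker.
        pot_readings = np.array([102.0, 418.0, 731.0, 1045.0])
        marker_angles = np.radians([-30.0, -10.0, 10.0, 30.0])

        # Low-order polynomial fit; degree 1 if the pot is assumed linear.
        coeffs = np.polyfit(pot_readings, marker_angles, deg=1)

        def pot_to_angle(reading):
            """Convert a raw pot reading to a panning angle (radians)."""
            return np.polyval(coeffs, reading)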

    Summary:
    Most of the replies I received provided references for methods of
    calibrating 2D or 3D panning cameras which required control markers to
    remain in the field of view throughout filming and/or the precise
    surveyed location of the camera to be known. Ideally, given that this
    sort of testing would be done in competition settings, we would prefer
    the analysis to be as non-invasive as possible (hence we would like to
    calibrate the runway prior to competition and then remove all of the
    markers from the field). Dr. Jim Walton and Dr. Young-Hoo Kwon were
    particularly helpful in providing information on panning techniques that
    could be accomplished without markers remaining in the field of view
    during filming. The main issue of perspective error in 2D or 1D analyses
    remains and needs to be accounted for where possible. This problem is
    greatly reduced using two cameras.
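
    To give a feel for the size of that perspective error, here is a rough
    illustration using a simple pinhole (similar-triangles) model; the model
    and all numbers are my own assumptions, not taken from any of the
    replies. If the 2D calibration plane runs down the runway centreline and
    a foot lands a lateral distance d closer to the camera, its apparent
    along-runway position shifts.

        def perspective_error(r, d, D):
            """Apparent in-plane displacement of a point lying a distance d
            out of the calibration plane (towards the camera), where D is the
            camera's perpendicular distance to the plane and r is the point's
            in-plane distance from the foot of that perpendicular."""
            return r * d / (D - d)

        # Example: camera 20 m from the centreline plane, foot lands 0.8 m
        # closer to the camera, 10 m along the runway from the camera normal.
        print(perspective_error(r=10.0, d=0.8, D=20.0))   # ~0.42 m error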

    Replies:
    ----------------------------------------------------------------------------
    Jim Walton:
    Have you given any thought to using IR lighting to create calibration
    markers? Your camera could "see" it, but nobody else could ... except
    perhaps, ABC :-)
    If you want a simple demonstration of this "phenomenon", point a TV remote
    at the lens of your camera ... you can see the "little lights" blinking
    their codes out as the buttons are pushed.
    Expand on this concept ... draw lines with IR lighting and you can "project"
    a "permanent" calibration grid onto your object-space that others can't
    "see".

    I described how to calibrate a two-dimensional object-space in my doctoral
    work ...
    Walton, J.S. "Close-Range Cine-Photogrammetry: A Generalized Technique for
    Quantifying Gross Human Motion." Penn State, 1981.
    Basically, I described how to reduce the DLT to a 2-D algorithm that can be
    used to calibrate and track motion in a plane with a single camera. [This
    can also be found in the Proceedings of the International Congress of Sports
    Sciences. Edmonton, Canada, August, 1978 under the title of "Close-range
    Cine-photogrammetry: Another approach to motion analysis"]
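
    For readers who do not have the thesis to hand, the planar reduction of
    the DLT referred to above is, in its standard form, an eight-parameter
    mapping from plane coordinates to image coordinates. The sketch below
    solves that standard 8-parameter 2-D DLT by linear least squares; it is
    offered purely as an illustration and is not necessarily Walton's exact
    formulation.

        import numpy as np

        def calibrate_2d_dlt(plane_xy, image_uv):
            """Solve u = (L1*X + L2*Y + L3) / (L7*X + L8*Y + 1)
                     v = (L4*X + L5*Y + L6) / (L7*X + L8*Y + 1)
            by linear least squares from >= 4 non-collinear control points."""
            A, b = [], []
            for (X, Y), (u, v) in zip(plane_xy, image_uv):
                A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y]); b.append(u)
                A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y]); b.append(v)
            L, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
            return L  # parameters L1..L8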

    **************************************************************
    * JAMES (Jim) S. WALTON, Ph.D., President, 4DVIDEO *
    * 825 Gravenstein Highway North, Suite 4 *
    * SEBASTOPOL, California 95472 USA *
    **************************************************************
    * PHONE: (707) 829-8883 FAX: (707) 829-3527 *
    * INTERNET: Jim@4DVideo.com *
    **************************************************************
    ----------------------------------------------------------------------------

    Young-Hoo Kwon:
    2-D panning, I believe, is no different from 3-D panning, as long as you
    do a series of calibrations and express the DLT parameters as functions
    of the panning position. Based on observations from simulated
    calibrations, the parameters do not change radically as the panning
    position changes. Cubic spline interpolation of the DLT parameters over
    the panning position would be sufficient. Obtaining the panning position
    from the instrumented tripod makes the whole process much simpler.
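
    As a concrete (and hedged) reading of that idea, the sketch below
    splines each of the eight 2-D DLT parameters against panning angle,
    using calibrations performed at several fixed panning positions. The
    function names are mine and this is not code from Kwon3D.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def build_parameter_splines(pan_angles, dlt_params):
            """pan_angles: (n,) calibration panning angles (radians), increasing.
            dlt_params:    (n, 8) array of 2-D DLT parameters, one row per
            calibration. Returns one cubic spline per parameter."""
            dlt_params = np.asarray(dlt_params)
            return [CubicSpline(pan_angles, dlt_params[:, k]) for k in range(8)]

        def dlt_at_angle(splines, angle):
            """Interpolated 2-D DLT parameters at an arbitrary panning angle."""
            return np.array([s(angle) for s in splines])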

    The key is the use of the 2-D DLT method. I would put several poles of
    known length (range poles) at different locations along the track. As
    long as I know where I put the poles, I will be able to come up with the
    real-life coordinates of the control points marked on the poles. With
    these, I can perform a series of 2-D DLT calibrations and develop a set
    of parameter prediction equations. The only problem is how to sync the
    video images and the panning position signal from the tripod.
    If you deal with the foot only, you may even use the 1-D DLT instead of
    the 2-D. Combining the foot and hip definitely requires a 2-D DLT-based
    approach.
    Anyway, the process explained above is one of the standard features of
    my motion analysis software, Kwon3D 3.0. I am finishing up the upgrade
    now. It will be really interesting if I can have a chance to test the
    program with your data.
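
    Continuing the earlier sketches: once the parameter splines exist, a
    digitised image point plus the synchronised panning angle from the
    tripod yields real-life plane coordinates by inverting the 2-D DLT (two
    linear equations in X and Y). This is my reading of the procedure, not
    an excerpt from Kwon3D.

        import numpy as np

        def reconstruct_plane_point(L, u, v):
            """Invert the 8-parameter 2-D DLT for one digitised point (u, v),
            returning the plane coordinates (X, Y)."""
            L1, L2, L3, L4, L5, L6, L7, L8 = L
            A = np.array([[L1 - u * L7, L2 - u * L8],
                          [L4 - v * L7, L5 - v * L8]])
            b = np.array([u - L3, v - L6])
            X, Y = np.linalg.solve(A, b)
            return X, Y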

    - Young-Hoo Kwon, Ph.D.
    - Biomechanics Lab, PL 202
    - Ball State University
    - Muncie, IN 47306 USA
    - Phone: +1 (765) 285-5126
    - Fax: +1 (765) 285-8596
    - Email: ykwon@bsu.edu
    - Homepage: http://kwon3d.com
    - Korean kwon3d eGroup: http://kwon3d.com/korean/eGroup_kr.html

    - Int'l kwon3d eGroup: http://kwon3d.com/eGroup_i.html

    ----------------------------------------------------------------------------

    Gideon Ariel / John Probe:
    The APAS system supports two different methods for the panning cameras.
    Both methods are used for 3-D analysis. The original method consisted of a
    panning head that mounted to the tripod
    (between the camcorder and the tripod). This panning head had a cable that
    connected to the character generator port of the camcorder and was used to
    superimpose a horizontal line on the video image. The length of this
    horizontal line was proportional to the panning angle of the camera. During
    the digitizing process, instead of digitizing the "fixed point" (as with a
    stationary camera) the user was required to digitize the endpoint on the
    "paning bar." Information on this algorithm was first presented at the
    1993 ISB Congress in Paris. The reference information is listed below.
    Stivers, K.A.; Ariel, G.B.; Vorobiev, A.; Penny, M.A.; Gouskov, A.;
    Yakunin, N.; "Photogrammetric Transformation With Panning"; XIV ISB
    Congress, Paris, France, July 4-8, 1993.
    While the "panning head" method was very functional, Ariel Dynamics research
    and development improved on the panning method and eliminated the need for
    the panning head hardware. The new algorithm handles this task entirely
    within the software, thus allowing any camera to be used for the panning.
    The software algorithm requires that there be two calibration cubes. Each
    cube must have a minimum of 8 control points (though 12 or more are highly
    recommended) and ALL points are still measured relative to a single origin.
    In essence, we are telling the software that there is one very large
    calibration cube. In between the two calibration fixtures, we use "panning
    points" instead of the fixed point. The user has the option of specifying
    (and then digitizing) any of the panning points as they come into and go out
    of the field of view.

    Additional information on the panning procedures is listed in the
    pull-down help menus of the Digitize software module. Open the Digitize
    module and select HELP, INDEX, PANNING CAMERAS to access this.
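
    To illustrate the "one very large calibration cube" idea above, the
    sketch below simply expresses the control points of both fixtures in a
    single common origin before calibration. The offsets, point counts and
    names are illustrative assumptions, not APAS data structures.

        import numpy as np

        # Surveyed control points of each fixture in its own local frame
        # (only a few shown; the text recommends 12 or more per cube).
        cube_a_points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                                  [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
        cube_b_points = cube_a_points.copy()

        # Surveyed position of cube B relative to the cube A origin.
        cube_b_offset = np.array([30.0, 0.0, 0.0])

        # All control points measured relative to the single (cube A) origin.
        all_control_points = np.vstack([cube_a_points,
                                        cube_b_points + cube_b_offset])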

    John Probe
    Ariel Dynamics, Inc.
    ARIEL1@ix.netcom.com
    ----------------------------------------------------------------------------

    David Rath:
    A couple of 2D papers you may want to follow up: this paper
    http://www.orst.edu/hhp/exss/research/labs/BioMech/abstracts/panning.html
    and the one referenced by Hay and Koh should be useful. Another 2D
    article of interest is Gervais, et al. (1989), Kinematic Measurement
    from Panned Cinematography, Canadian Journal of Sport Science, 14(2),
    107-111.
    APAS has a panning head for its system which uses a pot and interfaces
    with the viewfinder jack on some cameras (it works with the Panasonic
    MS4 and 5, from memory) and outputs a white bar onto the recorded video,
    the length of which is related to the camera angle; this point is
    digitised instead of a fixed point, which negates the need for track
    markers. We have this unit but have never got reliable data from it;
    it's 3D rather than 2D, but the pot side of things may be relevant.

    David Rath
    AIS Biomechanics
    RathD@ausport.gov.au
    ----------------------------------------------------------------------------

    Michael Feltner:
    Jesus Dapena and I used 2D panning in this research.
    Dapena, J. & Feltner, M. E. (1987). The effects of wind and altitude on the
    times of 100 meter sprint races. International Journal of Sports
    Biomechanics, 3(1), 6-39.
    If you check the references in the manuscript, a paper that Jesus authored
    previously in Sciences et Motricite, "Three-dimensional cinematography with
    horizontally panning cameras" (1978, 1(3), 3-15) is listed.
    Together both papers should answer your questions.

    Michael Feltner
    Michael.Feltner@pepperdine.edu
    ----------------------------------------------------------------------------

    Andrew Lyttle
    Sports Biomechanist
    Western Australian Institute of Sport
    Stephenson Ave, Mt Claremont WA 6910
    Australia

    Tel: +618 9387 8166
    Fax: +618 9383 7344
    Email: alyttle@wais.org.au

    ---------------------------------------------------------------
    To unsubscribe send SIGNOFF BIOMCH-L to LISTSERV@nic.surfnet.nl
    For information and archives: http://isb.ri.ccf.org/biomch-l
    ---------------------------------------------------------------