Summary: Visible Female CT data calibration factors

    Biomch-L subscribers:

    I am attaching below my original query and a summary of responses about
    problems with the Visible Female CT data changing calibration factors
    along the length of the scan. It appears this problem is not unique to
    the neck; researchers working with the pelvis and lower limb have also
    encountered it. Solutions fell into two categories: scale the images
    and resample the data to get the correct resolution, and then segment
    the data; or segment the images first and then scale the resulting
    boundary data.

    For those who are interested, our solution was to segment the CT images,
    export the boundary data for each slice and scale it in Matlab, read
    them back into the imaging software (3D-doctor) and create DXF files for
    each bone. One problem is that although we know how much the field of
    view changed because we know the pixel calibration values, there is no
    information about whether the field of view also translated. For the
    cervical spine, it doesn't appear to have translated (by visual
    inspection), but others have had to shift the images to get proper
    alignment. In general, there is a consensus that there is a lack of
    technical information available about the Visible Human project, which
    makes it difficult for researchers to use this immense volume of
    anatomical data.
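    As a minimal illustration of that boundary-scaling step, here is a
    dependency-free sketch (a Python stand-in for the Matlab step; the
    function name and argument layout are hypothetical):

```python
def scale_boundary(points_px, mm_per_pixel, target_mm_per_pixel):
    """Convert boundary points segmented at one pixel calibration
    into the pixel grid of a reference calibration, so slices from
    different fields of view can be combined into one model.

    points_px: list of (x, y) pixel coordinates for one slice.
    """
    # pixels -> millimetres at the slice's own calibration,
    # then millimetres -> pixels of the reference grid
    return [(x * mm_per_pixel / target_mm_per_pixel,
             y * mm_per_pixel / target_mm_per_pixel)
            for x, y in points_px]
```

    For example, a point at (100, 200) in a 0.723 mm/pixel slice maps to
    about (148.2, 296.3) in the 0.488 mm/pixel grid.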

    Thanks to all who responded!
    Anita Vasavada

    ************************************
    Original posting:

    Biomch-l subscribers:

    We are segmenting the Visible Female CT dataset for the bones of the
    head and neck region and found that the pixel calibration factor
    (mm/pixel) changes as we look at images from the skull down the neck.
    Specifically, it is 0.488 mm/pixel from the skull to mid-C4 (slice
    1209), then becomes 0.723 for the next 17 slices, changes to 0.859 for
    the next 20 slices and finally becomes 0.938 down to slice 1400 (this is
    as far as we've examined). Presumably this was done to change the field
    of view as the body cross-sectional area increases caudally.
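    Those ranges can be captured in a small lookup; the sketch below is a
    hypothetical helper, with slice boundaries taken directly from the
    numbers above (0.723 for the 17 slices after 1209, 0.859 for the 20
    after that, and 0.938 onward, verified only down to slice 1400):

```python
def mm_per_pixel(slice_number):
    """Pixel calibration factor (mm/pixel) for a Visible Female CT
    slice in the head/neck region, per the ranges reported above."""
    if slice_number <= 1209:        # skull to mid-C4
        return 0.488
    elif slice_number <= 1226:      # next 17 slices
        return 0.723
    elif slice_number <= 1246:      # next 20 slices
        return 0.859
    else:                           # down to at least slice 1400
        return 0.938
```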

    Our problem is that the image analysis software we use (3D-doctor, Able
    Software) does not allow for changing calibration factors in one stack
    of images. (That is, the calibration factor given in the first image
    header will be used for all images in the stack). The best solution we
    have come up with seems to be to define different image stacks with
    different calibration factors, segment the bones separately, export to
    another format, scale and re-combine the data for 3-D rendering.

    We would like to know if anyone else working with the Visible Female or
    other dataset has come across this problem and has any more elegant
    solutions. Or, are there other software packages that allow different
    calibration factors for different images in one stack? As always I will
    post a summary of responses.

    Thanks,
    Anita Vasavada

    ********************************
    From Markus Heller (markus.heller@charite.de):

    Hi there,

    in addition to the method you suggested, you could
    also try to resample the data to achieve a common
    resolution. This should be possible using either
    commercial software (e.g. AMIRA, available from
    www.tgs.com) or through open source projects such
    as VTK / ITK (see: public.kitware.com).
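    As a rough illustration of that resampling idea (a dependency-free,
    nearest-neighbour sketch; in practice an interpolating resampler such
    as `scipy.ndimage.zoom` or the VTK/ITK filters would be preferable):

```python
def resample_slice(image, mm_per_pixel, target_mm_per_pixel):
    """Resample one CT slice (a list of pixel rows) so its pixel
    spacing matches a reference spacing. A slice scanned at a
    coarser resolution (larger mm/pixel) is enlarged, and vice
    versa, so all slices end up on a common grid."""
    factor = mm_per_pixel / target_mm_per_pixel
    h, w = len(image), len(image[0])
    new_h, new_w = round(h * factor), round(w * factor)
    # nearest-neighbour lookup back into the source image
    return [[image[min(int(r / factor), h - 1)][min(int(c / factor), w - 1)]
             for c in range(new_w)]
            for r in range(new_h)]
```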

    There might also be the possibility of using
    a different data set - I think there has been
    a project focused on the head, which created data
    that has a higher and, more importantly, a uniform
    resolution. I think the data is also hosted on
    the NLM servers that keep the Visible Human data sets.

    Hope this helps,

    Markus

    ***********************************************
    From Robert Funnell (robert.funnell@mcgill.ca):

    Anita Vasavada -

    My group has done some segmentation of the head from the VH physical
    slices (http://sprojects.mmi.mcgill.ca/anatomy3d/ and .../tmj/) but
    not yet from the CT data, so I found your message very interesting.

    Multiple calibration factors are on the to-do list for my locally
    developed segmentation software, but I don't have the feature yet.

    In the meantime, it might be more elegant to use graphics software
    (e.g., ImageMagick or the GIMP) to rescale the images before
    segmentation.

    What do you plan to do with the 3-D model?

    - Robert


    ***********************************************
    From Cheryl Riegger-Krugh (Cheryl.Riegger-Krugh@UCHSC.edu):


    The Center for Human Simulation at the University of Colorado should be
    able to help you:

    Vic Spitzer and Lee Granas and Greg Spitzer

    Cheryl Riegger-Krugh

    ***********************************************
    From Kate Rudman (k.rudman@abdn.ac.uk):

    Hello there. We had the same problem!!! and then some... What we ended
    up doing was resizing the images so that they were all the SAME
    resolution / size and then manually importing the new images as .raw
    into our analysis software (Mimics). We used ImageJ (a free download)
    to work with the initial .raw images, and wrote macros to resize the
    images (to get the correct resolution), add extra blank pixels at the
    edge if necessary (as Mimics wants all images to be the same size for
    importing) and, if necessary, shift the centre of the image so they
    line up again. Then we saved the new images as .raw and manually
    imported them into Mimics. Hope this helps; let me know if you need me
    to clarify. Also, I have a counter-question: we have been trying to
    get information from the VHP regarding some details of the Visible
    Female, such as height and body weight. Overall we found the data, and
    the fact that the VHP (quite a large project) kept changing the
    resolution, very frustrating; it added a lot of unnecessary time and
    work. In case you're interested, we're working on the pelvis/femur and
    investigating stress analysis. Well, best of luck on your project, and
    I would be interested to see your other responses!
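    The pad-and-recentre step described above might look like this in
    outline (a hypothetical, dependency-free sketch; the actual macros
    would operate on ImageJ images rather than nested lists):

```python
def pad_and_center(image, canvas_h, canvas_w, shift=(0, 0)):
    """Embed a resized slice in a fixed-size blank canvas (Mimics
    wants every imported image to share one size), optionally
    shifting it from the centred position so that anatomy lines
    up across stacks. shift is a (row, col) offset."""
    h, w = len(image), len(image[0])
    canvas = [[0] * canvas_w for _ in range(canvas_h)]
    r0 = (canvas_h - h) // 2 + shift[0]   # top-left corner of the
    c0 = (canvas_w - w) // 2 + shift[1]   # pasted image on the canvas
    for r in range(h):
        for c in range(w):
            canvas[r0 + r][c0 + c] = image[r][c]
    return canvas
```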

    Sincerely,

    Kate Rudman
    University of Aberdeen
    Scotland, UK
    k.rudman@abdn.ac.uk


    ***********************************************
    From Andrzej Przybyla (A.Przybyla@bristol.ac.uk):

    Regarding your problem, ImageJ (http://rsb.info.nih.gov/ij/) is a nice
    piece of software which I am sure will solve it. You might find a
    plugin ready for it, or get one written easily if you are a Java user.
    I am not a Java user, but I got help from the ImageJ mailing list. The
    author and developer, Wayne Rasband, also helped me, and he did it so
    quickly that it was almost like online help.
    I am sorry I can't help you more than that, but I am sure ImageJ is a
    good tool for it, and it is available for free.
    Good luck.

    All the best,
    Andrzej

    ***********************************************
    From David Feldstein (df46@drexel.edu):

    Anita,

    I have been using Materialise Mimics software for 3-D reconstruction
    of CT scan data. I am not sure whether it will solve your problem, but
    you could contact them and look into it.

    David Feldstein

    ***********************************************
    From Michelle Sabick (MSabick@boisestate.edu):

    Anita-


    We found the same problem that you did, but our solution was definitely
    not elegant. Our commercial software (Mimics by Materialise) also was
    not able to handle the changes in pixel dimension automatically. We
    created solid models from each of the separate image stacks and joined
    them together in an STL-editing program. We were aided in the alignment
    of our separate STL files by the existence of a fluid-filled tube
    running down the anterior aspect of the subject's body. This tube is
    visible in nearly every slice, and we used it as a guide to ensure our
    separate solid models were aligned properly before joining them.
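    That tube-landmark alignment can be sketched as follows (hypothetical
    helper names; masks are nested lists with nonzero pixels marking the
    tube in a given slice):

```python
def landmark_centroid(mask):
    """Centroid (row, col) of the nonzero pixels in a binary mask,
    e.g. a segmentation of the fluid-filled tube in one slice."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def landmark_offset(mask_a, mask_b):
    """Translation (rows, cols) that moves the landmark in mask_b
    onto the landmark in mask_a, for aligning two image stacks
    before joining their solid models."""
    ra, ca = landmark_centroid(mask_a)
    rb, cb = landmark_centroid(mask_b)
    return (ra - rb, ca - cb)
```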

    Hope this is useful, but I remain hopeful that someone has come up
    with a better solution.

    Regards,

    Michelle

    ***********************************************
    From Bjorn Olsen (Bjorn.Olsen@MEMcenter.unibe.ch):

    Hi,

    Amira (www.tgs.com/amira) should be
    able to do this.

    Björn

    --
    Anita Vasavada, Ph.D.
    Assistant Professor
    Programs in Bioengineering and Neuroscience
    Washington State University
    Pullman, WA 99164-6520
    voice: (509) 335-7533
    fax: (509) 335-4650
    vasavada@wsu.edu
    http://www.bioengineering.wsu.edu/alias/?AnitaVasavada