Re: Converting CT or MRI scans into 3D solid models

    Please find below a discussion on mesh generation techniques from 3D
    imaging data which I hope will be useful.

    I must state that I am one of the founding members of Simpleware Ltd,
    which markets the software package ScanFE for the generation of finite
    element (and CFD) meshes from 3D image data. However, I am also an
    academic with a long-standing interest in this area, and I would like to
    think that the comments below provide, as far as possible, an impartial
    assessment of the different routes available for converting CT data to an
    FE mesh.

    There are essentially two distinct approaches to the generation of
    finite element meshes from 3D imaging data (as obtained from MRI, CT or
    micro-CT, for example): CAD based and "voxel" based approaches.

    Both approaches require the segmentation of the 3D data; in other words
    the identification of volumes of interest within the image.

    Step 1: Segmentation

    Segmentation techniques can range from simply defining a threshold (e.g.
    anything above a certain greyscale value is bone) to sophisticated
    combinations of complex image processing algorithms. Segmented images
    consist of one or more volumes of interest (flagged voxels within the 3D
    image representing bone, ligaments, muscle, fat, etc.), which are often
    called masks. Segmentation tools are provided by a wide range of
    software packages including, in addition to those listed in previous
    emails, Analyze (from the Mayo Clinic) and ScanFE.
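
    As a purely illustrative aside (independent of any particular package),
    the simplest thresholding step can be sketched in a few lines of
    Python/NumPy. The file name and the 400 HU bone cut-off below are
    assumptions made for the example, not recommended values:

    # Minimal sketch of threshold-based segmentation, assuming the CT volume
    # has already been loaded as a 3D NumPy array of greyscale/HU values.
    import numpy as np

    def threshold_mask(volume, lower, upper=None):
        # Flag voxels whose greyscale lies in [lower, upper] as the mask.
        mask = volume >= lower
        if upper is not None:
            mask &= volume <= upper
        return mask

    ct = np.load("ct_volume.npy")        # hypothetical pre-loaded 3D array
    bone_mask = threshold_mask(ct, 400)  # "anything above 400 HU is bone"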

    Once masks/volumes of interest have been obtained using segmentation
    tools, two distinct routes to generating meshes are possible.

    "CAD based" approaches

    Step 2 for "CAD based" approaches: convert the segmented masks (volumes
    of interest) into geometric surface representations bounding the volumes
    of interest. Examples of such representations include tessellated
    surfaces (described by primitives such as triangles), stacks of contours
    which then need to be lofted into surfaces, or NURBS-type patches
    (higher-order polynomial representations) of the boundaries (an
    illustrative sketch of the tessellated-surface route follows the list of
    problems below). The conversion from volume data to surface
    representation is often fraught with difficulties:

    PROBLEMS
    i) gaps frequently appear in the surface description and
    need to be painstakingly corrected manually,
    ii) where two or more volumes are in contact their surface
    descriptions are usually non-conforming (gaps/overlaps) - this is a
    significant problem for modeling interfaces, particularly contact
    surfaces,
    iii) stacking two-dimensional contours (essentially a so-called
    2.5D approach) is a particularly poor approach as it cannot
    satisfactorily/automatically handle bifurcations,
    iv) the generation of NURBS surfaces (and equally the contour
    stacking approach) almost inevitably engenders approximations and the
    loss of features.
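
    To make step 2 concrete, below is a minimal sketch of extracting a
    tessellated (triangulated) surface from a binary mask via marching cubes,
    using the open-source scikit-image library; the mask file, iso-level and
    voxel spacing are illustrative assumptions. Surfaces produced this way
    are exactly the kind of description that can exhibit problems (i) to (iv)
    above:

    # Minimal sketch of the tessellated-surface route (CAD based, step 2).
    # Assumes scikit-image is installed and 'bone_mask.npy' holds a binary
    # 3D mask from step 1 (both are illustrative assumptions).
    import numpy as np
    from skimage import measure

    bone_mask = np.load("bone_mask.npy")

    # Marching cubes extracts a triangulated isosurface from the mask;
    # 'spacing' should be the physical voxel size so the surface has units.
    verts, faces, normals, values = measure.marching_cubes(
        bone_mask.astype(np.float32), level=0.5, spacing=(0.5, 0.5, 0.5))

    # (verts, faces) is the tessellated surface that would then be exported
    # (e.g. as STL) and handed to a surface-based mesher in step 3.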


    Step 3 for "CAD based" approaches: The geometric surface representations
    of the volumes of interest generated in step 2 are imported into a
    commercial mesher. For straightforward geometric surfaces akin to those
    typically obtained from CAD designs, most commercial meshers provide good
    tools. However, for the problems more typically seen in 3D imaging
    applications, the meshing approaches adopted in commercial packages
    suffer from a number of drawbacks:

    PROBLEMS

    v) difficulty in meshing complex volumes successfully
    except with very small element base length sizes,
    vi) where two or more structures are in contact you will
    typically not get a properly conforming interface (gaps/overlaps),
    vii) the connection between the mesh and the original greyscale
    data that spawned it is lost - this connection can be useful for
    assigning inhomogeneous material properties throughout a structure (say
    bone) based on the signal strength in the parent 3D image (say
    Hounsfield number to density/Young's modulus). Admittedly, as a way
    around this, one can re-reference the 3D data a posteriori to assign
    material properties to mesh elements, although this requires the use of
    yet another non-integrated software tool.
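
    As an aside on the workaround mentioned in (vii), such a mapping is
    typically a linear greyscale-to-density relation followed by a power-law
    density-to-modulus relation. The sketch below is purely illustrative; the
    coefficients are placeholders to be calibrated, not values from any
    particular study:

    # Minimal sketch of assigning inhomogeneous material properties from the
    # parent greyscale. All coefficients below are illustrative placeholders.
    import numpy as np

    def hu_to_youngs_modulus(hu, a=0.0, b=0.001, c=6.0, d=1.5):
        # rho = a + b*HU   (apparent density, g/cm^3, placeholder relation)
        # E   = c * rho**d (Young's modulus, GPa, placeholder power law)
        rho = np.clip(a + b * np.asarray(hu, dtype=float), 0.0, None)
        return c * rho ** d

    # e.g. average the HU over the voxels inside each element, then map to E
    element_mean_hu = np.array([250.0, 800.0, 1200.0])  # hypothetical values
    E = hu_to_youngs_modulus(element_mean_hu)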

    "Voxel based" approaches

    Step 2 "Voxel based" approaches: "Voxel based" approaches convert the
    segmented data (masks) directly into finite element meshes bypassing the
    conversion step to geometric surface description.

    By bypassing the CAD surface generation step, the problems ((i) to (vii))
    listed above are avoided and a very robust and automated approach can be
    implemented. However, early implementations did suffer from a number of
    distinct drawbacks, including (1) the generation of unsmooth interfaces
    (originally voxels were simply converted into brick elements, leading to
    'lego'-like models with stepped boundaries) and (2) a lack of adaptive
    meshing - models consisted of elements of the same base length, so the
    mesh density was constant throughout the volume. These problems have been
    successfully addressed by our academic group and the algorithms developed
    have been implemented in the commercially available software suite ScanFE.
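
    For reference, the original "voxel to brick element" conversion mentioned
    in (1) above amounts to emitting one hexahedral element per flagged
    voxel, with corner nodes shared between neighboring voxels. The
    Python/NumPy sketch below illustrates only that legacy idea (it is not
    the smoothed, adaptive algorithm described here); the mask and voxel size
    are illustrative assumptions:

    # Minimal sketch: one 8-node hex element per flagged voxel, with shared
    # corner nodes. Purely illustrative of the legacy 'lego' approach.
    import numpy as np

    def voxels_to_hex_mesh(mask, voxel_size=1.0):
        # Return (nodes, elements): corner coordinates and hex connectivity.
        idx = np.argwhere(mask)                   # (i, j, k) of flagged voxels
        # The 8 corners of a unit voxel, in a common hexahedron node ordering.
        corners = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                            [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]])
        all_corners = (idx[:, None, :] + corners[None, :, :]).reshape(-1, 3)
        # Merge duplicate corner points so neighboring elements share nodes.
        nodes, inverse = np.unique(all_corners, axis=0, return_inverse=True)
        elements = inverse.reshape(-1, 8)
        return nodes * voxel_size, elements

    mask = np.zeros((4, 4, 4), dtype=bool)
    mask[1:3, 1:3, 1:3] = True                    # a 2x2x2 block of voxels
    nodes, elements = voxels_to_hex_mesh(mask, voxel_size=0.5)
    print(nodes.shape, elements.shape)            # (27, 3) (8, 8)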

    The techniques developed provide, I believe unarguably, both
    quantitatively and qualitatively better meshes from 3D data than
    previously possible for a large class of problems.

    The meshing techniques developed:

    (1) can be applied to masks/volumes of interest of arbitrary
    complexity,
    (2) can be applied to any number of masks simultaneously,
    (3) generate smooth interfaces, and
    (4) are automated, fast (typically 5 minutes on a PC) and
    robust (guaranteed high element qualities).

    In addition

    (5) Topological/morphological accuracy of the models is limited only by
    the imaging accuracy - in other words, the model is as faithful as you
    can get from the image.
    (6) Meshes are conforming at contact surfaces/interfaces (where two or
    more masks are in contact in the segmented image, the meshes generated
    will be in contact with node-to-node correspondence across the
    interface - no gaps or overlaps).
    (7) Adaptive meshing techniques are integrated within the meshing.
    (8) Material properties can be assigned throughout any of the meshed
    masks based on the parent voxel signal strength.
    (9) An RP (rapid prototyping) model can be generated which is an exact
    replica of the FE model (useful for experimental corroboration).
    (10) Any ad hoc modifications to the segmentation can be reflected
    straightforwardly and automatically in the mesh, as the process from
    image segmentation to mesh generation is integrated.

    The meshing algorithms to date provide either pure tet meshes or mixed
    hex-tet meshes, but not pure hex meshes. (Both linear and mid-side noded
    elements can be generated.)

    We have run a number of case studies which demonstrate the versatility
    and robustness of the approach including:

    (1) Mesh generation of a foam. A very convoluted shape (mask) was
    meshed in minutes and a large deformation analysis was carried out in
    Abaqus on a PC. Note that the deformation shown in the avi is unscaled
    (true deformation); courtesy of www.firstnumerics.com. We have also
    modeled air flow through the foam using FLUENT (CFD). (Both interstitial
    spaces and foam cell walls can be meshed simultaneously for
    fluid-structure interaction problems.)
    ( http://www.simpleware.com/applications/casestudies/foam.php )

    (2) Generation of a hip model based on in vivo CT data. This is a
    model which includes a number of parts and a contact surface at the
    cup-implant head interface, all generated within ScanFE and solved on a
    PC.
    ( http://www.simpleware.com/applications/casestudies/hip.php )

    (3) Mesh generation of a beetle mandible - this is an impressive model
    generated literally in minutes, based on micro-CT data provided by Dr
    Thomas Hornschmeyer.
    ( http://www.simpleware.com/applications/casestudies/beetle.php )


    I hope this is a useful overview of possible routes to mesh generation
    from 3D data. I have deliberately omitted template meshing as it is not,
    strictly speaking, a generic meshing technique. (Template meshing can be
    useful where a large number of very similar structures need to be meshed:
    a template/reference mesh is generated manually and then scaled/morphed,
    usually based on landmarks in the image, to provide patient-specific
    versions of the template mesh.)
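
    As a purely illustrative aside, the simplest landmark-based morph is an
    affine fit from template landmarks to patient landmarks applied to every
    template node (real template meshing generally uses richer warps such as
    thin-plate splines); all arrays and names below are hypothetical:

    # Minimal sketch of landmark-based template morphing via an affine fit.
    import numpy as np

    def affine_morph(template_nodes, template_landmarks, patient_landmarks):
        # Fit an affine map template->patient from >= 4 non-coplanar landmark
        # pairs (least squares), then apply it to all template mesh nodes.
        n = template_landmarks.shape[0]
        A = np.hstack([template_landmarks, np.ones((n, 1))])  # homogeneous
        T, *_ = np.linalg.lstsq(A, patient_landmarks, rcond=None)  # 4x3 map
        nodes_h = np.hstack([template_nodes,
                             np.ones((template_nodes.shape[0], 1))])
        return nodes_h @ T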


    Philippe


    Dr. Philippe G. Young
    Senior Lecturer
    School of Engineering and Computer Science
    University of Exeter
    Harrison Building
    North Park Road
    Exeter, EX4 4QF, UK

    Phone: +44 (0)1392 263684
    Fax: +44 (0)1392 263620

    also

    Simpleware Ltd.
    Innovation Centre, University of Exeter
    Rennes Drive, Exeter, EX4 4RN, UK
    Website: www.simpleware.com


