View Full Version : FW: Summary of Responses for Measurements from X-Ray Films

#ang Lian Ann#
08-23-1998, 01:02 PM
> Here are the consolidated results from all the contributors. For detailed
> descriptions, please scroll down to the replies written by the contributors.
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> My message
> I am a graduate student at the Nanyang Technological University
> (Singapore),
> working on a project to study the effects of traction forces on the
> cervical spine.
> I have performed traction tests on cadaveric specimens at various loads
> and
> angles. A lateral radiograph was taken at each configuration.
> The problem here is to measure the anterior and posterior intervertebral
> separations on the radiographs accurately. I thought digitising the x-ray
> film would be a good idea, since I could calculate the distances from the
> pixel locations, knowing the actual distance between two points on the
> radiograph. However, there is loss of information in the scanning process
> and some of the dark regions are indistinguishable from the background.
> If you have any solution to my problem, I would appreciate hearing from
> you.
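The measurement scheme described above (pixel locations plus a known reference distance) can be sketched as follows; the function names and coordinates are illustrative, not from the original setup:

```python
import math

def calibrate_scale(p1, p2, known_mm):
    """mm-per-pixel scale from two digitised marker points that are a
    known physical distance apart on the radiograph."""
    return known_mm / math.dist(p1, p2)

def measure_mm(a, b, scale):
    """Physical distance between two digitised points, given the scale."""
    return math.dist(a, b) * scale

# Markers 10 mm apart imaged 200 px apart give 0.05 mm/px, so a
# 96 px intervertebral separation measures 4.8 mm on the film.
scale = calibrate_scale((0, 0), (200, 0), known_mm=10.0)
gap = measure_mm((120, 40), (120, 136), scale)
```

Note that this gives distances at the film plane; the perspective magnification raised later in the thread by Barry Wilson still has to be accounted for.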
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> Summary
> Scanning of the X-ray
> 1) Develop the x-ray films into black-and-white prints for
> scanning. (Goodwin Lawlor)
> 2) Use a backlit digitizing tablet, or take a picture using a digital
> camera, a video recorder with a frame grabber, etc. (Nathaniel Ordway and
> Raymond R Brodeur)
> 3) Use a scanner transparency adapter (Thomas Zacharias)
> 4) Scan the image at an appropriate intensity, with a medical-image-
> quality scanner. Altering the screen brightness control can change the
> visibility of contrasting sections in both the bright and dark regions of
> the image (Tim Ferris)
> Processing of scanned images
> 1) Subtract the digitized images (Clay Wilson)
> 2) Perform a histogram stretch on the data (Roland Andrag)
> 3) Edge detection algorithm (Tim Ferris)
> Other points
> 1) Calibration points are needed at all depths in the x-ray image because
> of the closeness of the x-ray point source to the x-ray film. (Barry D Wilson)
> References cited
> 1) Frobin W, Brinckmann P, Biggemann M, Tillotson M, Burton K. Precision
> measurement of disc height, vertebral height and sagittal plane
> displacement from lateral radiographic views of the lumbar spine. Clin
> Biomech 1997: 12 (Suppl 1):S4-S63. (Dr Kim Burton)
> 2) Grondahl, H., K. Grondahl, and R. Webber, A digital subtraction
> technique for
> dental radiography. Oral Surg Oral Med Oral Pathol, 1983. 55(1): p.
> 96-102. (Clay Wilson)
> 3) Hausmann, E., et al., A reliable computerized method to determine the
> level
> of the radiographic alveolar crest. Journal of Periodontal Research, 1989.
> 24:
> p. 368-369. (Clay Wilson)
> 4) Jeffcoat, M.K., et al., Quantitative digital subtraction radiography
> for the
> assessment of peri-implant bone change. Clin Oral Implants Res, 1992.
> 3(1): p.
> 22-7. (Clay Wilson)
> 5) Stewien JC, Ferris TLJ. The Asterisk Operator - An edge detection
> operator addressing the problem of clean edges in bone x-ray images. (Tim
> Ferris) (Article attached with this mail)
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> You might want to look at a recent supplement to Clinical Biomechanics:
> Frobin W, Brinckmann P,
> Biggemann M, Tillotson M, Burton K. Precision measurement of disc height,
> vertebral height and
> sagittal plane displacement from lateral radiographic views of the lumbar
> spine. Clin Biomech
> 1997: 12 (Suppl 1):S4-S63.
> If your library does not have a copy, I can send one.
> Dr Kim Burton
> Editor-in-Chief, Clinical Biomechanics
> 30 Queen Street, Huddersfield HD1 2SP, UK
> Voice: +44 1484 535200
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> Hiya,
> Did you scan the x-ray chart directly, with a flat-bed scanner? Try
> using the x-ray chart as a black and white photographic negative...then
> develop a b&w print from the chart. All the detail should be preserved.
> Now scan the b&w print.
> Hope this helps,
> Goodwin Lawlor
> Mechanical Engineering Dept
> University College Dublin
> Ireland
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> Hello,
> I don't know your scanning procedure and what kind of scanner you are
> using, but the source of your bad scans seems to be the missing scanner
> transparency adapter. You need the x-ray to be transilluminated during the
> scanning process, and you can get that with such an adapter (perhaps a
> second light from above could be enough).
> Thomas
> --
> Thomas Zacharias
> Universitaet Rostock
> Ernst-Heydemannstr.6
> D-18055 Rostock
> thomas.zacharias@uni-rostock.de
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> Rather than digitizing the x-rays, try using a backlit digitizing tablet.
> This is what we have used to collect data points. Not sure how you
> digitized the x-ray film, but we have done this quite successfully with
> either a digital camera (SONY) or a flatbed scanner set up to scan x-rays.
> ************************************
> Nathaniel Ordway, MS, PE
> Assistant Professor
> Department of Orthopedic Surgery
> SUNY Health Science Center
> 750 E. Adams St
> Syracuse, New York 13210
> mailto:ordwayn@hscsyr.edu
> voice: (315) 464-6462
> fax: (315) 464-6638
> www: http://www.ec.hscsyr.edu/ortho/
> ************************************
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> If your radiographs are at least somewhat standardized with respect to
> projection geometry and contrast, you may want to try subtracting the
> digitized
> images. In brief, the procedure is as follows:
> 1) Following digitization the images can be corrected for small
> differences in
> contrast and geometry.
> 2) The images are registered and arithmetically subtracted pixel-by-pixel
> with
> an offset of half the gray scale range.
> The resultant subtraction image reveals the changes between the two
> radiographs,
> even changes that may be indistinguishable before subtraction. You can
> further
> highlight the area of change by binarizing the image with proper
> thresholding.
> You should then be able to make your measurements as before. The offset
> simply
> sets the background in the subtraction image at half the gray scale range
> instead of at zero. I use Image-Pro for this but almost any imaging
> software
> should work. If this sounds like something that you would like to try and
> you
> need more info please contact me. I also cited some references below
> regarding
> subtraction radiography in dentistry. Good luck.
> Clay Wilson
> --
> Graduate Student Phone: (518) 276-6967
> Rensselaer Polytechnic Institute Fax: (518) 276-3035
> Biomedical Engineering Dept.
> Jonsson Engineering Center, Room 7049
> 110 8th Street
> Troy, NY 12180-3590
> 1. Grondahl, H., K. Grondahl, and R. Webber, A digital subtraction
> technique for
> dental radiography. Oral Surg Oral Med Oral Pathol, 1983. 55(1): p.
> 96-102.
> 2. Hausmann, E., et al., A reliable computerized method to determine the
> level
> of the radiographic alveolar crest. Journal of Periodontal Research, 1989.
> 24:
> p. 368-369.
> 3. Jeffcoat, M.K., et al., Quantitative digital subtraction radiography
> for the
> assessment of peri-implant bone change. Clin Oral Implants Res, 1992.
> 3(1): p.
> 22-7.
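Clay Wilson's subtraction step can be sketched as below for 8-bit grayscale images, assuming the two images are already contrast-corrected and registered; the function names are ours, not from Image-Pro:

```python
def subtract_with_offset(img_a, img_b, levels=256):
    """Pixel-by-pixel subtraction with an offset of half the gray-scale
    range, so unchanged regions map to mid-gray rather than zero."""
    offset = levels // 2
    return [[max(0, min(levels - 1, a - b + offset))
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

def binarize(img, threshold):
    """Highlight the areas of change by thresholding the subtraction image."""
    return [[1 if p >= threshold else 0 for p in row] for row in img]

# An unchanged pixel lands on 128; a region that brightened by 30
# gray levels lands on 158 and survives a threshold of 150.
diff = subtract_with_offset([[100, 150]], [[100, 120]])
mask = binarize(diff, 150)
```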
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> Your problem sounds interesting. Why do you need to digitize the x-rays
> to
> measure them? Why not just physically measure from the x-ray image? In
> the past I also tried scanning x-rays, with little success. Too much
> detail was lost. The best images we obtained were from a video camera and
> frame grabber, taking a video image of the x-ray on a view box (with the
> view box light on). We found we could get much better detail with a light
> box than when we used a scanner.
> Raymond R. Brodeur, DC, PhD
> Ergonomics Research Laboratory
> Michigan State University
> 742 Merrill Street
> Lansing, MI 48912, USA
> 517-487-1702 (voice)
> 517-487-2023 (fax)
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> From: "M Swanepoel"
> > To: witsmech2/randrag
> > Date: Wed, 19 Aug 1998 16:11:11 +2:00
> > Subject: (Fwd) Measurements from X-ray Films
> Hello...
> This message was forwarded to me by Dr. M. Swanepoel since I have
> some experience in working with digital images of very low
> contrast..
> I suggest as a first measure to perform a histogram stretch on the
> data. This has the effect of reassigning colours (or shades of gray)
> to the image based on how many pixels of each colour are present
> in the picture. It is usually used to adjust the shading such that
> there are approximately equal numbers of pixels of each colour in the
> image, although it can be used to make the distribution fit any
> desired curve.
> Which image processing package are you using? Most packages will have
> this function - I know that Paint Shop Pro 5 and the Matlab Image
> Processing Toolbox both do (Matlab function 'histeq(image
> matrix)').
> I hope that helps, any more questions are welcome..
> Roland Andrag
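The remapping Roland Andrag describes - reassigning gray levels so that pixel counts are spread roughly evenly - is histogram equalisation, the behaviour of Matlab's histeq. A package-independent sketch for 8-bit grayscale data:

```python
def equalize(img, levels=256):
    """Histogram equalisation: build a lookup table from the cumulative
    histogram so that gray levels are spread across the full range."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    total, lut = 0, [0] * levels
    for g in range(levels):
        total += hist[g]
        # cumulative pixel count rescaled to the gray-level range
        lut[g] = round((levels - 1) * total / len(flat))
    return [[lut[p] for p in row] for row in img]

# A low-contrast ramp occupying levels 10..13 is spread out to
# cover the full 0..255 range.
stretched = equalize([[10, 11, 12, 13]])
```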
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> Hi Ang Lian Ann
> You will need calibration points at all depths in the x-ray image because
> of the
> closeness of the x-ray point source to the x-ray film. There is the
> potential for
> a large perspective error problem.
> *
> *
> * Barry D. Wilson, Ph. D. Phone: 64 (03) 479 8987
> *
> * School of Physical Education Fax: 64 (03) 479 8309 *
> * University of Otago Email: bwilson@pooka.otago.ac.nz *
> * Dunedin *
> * New Zealand *
> * *
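The perspective error Barry Wilson warns about follows from the projection geometry of a point source: an object lying off the film plane is magnified on the film. A sketch from similar triangles, with illustrative distances:

```python
def magnification(source_film_mm, object_film_mm):
    """Projected magnification of an object lying between a point X-ray
    source and the film, by similar triangles."""
    return source_film_mm / (source_film_mm - object_film_mm)

def true_length(measured_mm, source_film_mm, object_film_mm):
    """Correct a length measured on the film back to object scale."""
    return measured_mm / magnification(source_film_mm, object_film_mm)

# With the source 1000 mm from the film and the spine 100 mm off the
# film, magnification is 1000/900, so a 10 mm film measurement
# corresponds to 9 mm at the object.
m = magnification(1000, 100)
length = true_length(10.0, 1000, 100)
```

Because magnification varies with depth, calibration markers at a single depth cannot correct measurements of structures at other depths, which is why markers at all depths are needed.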
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> Lian Ann,
> I have been working on a similar type of problem, measurements from X-ray
> studies of the knee. I have been using scanned images. The first images
> were of a 6-year-old child (hence indistinct bone boundaries) and scanned
> on
> an office quality scanner. There are problems of variable contrast and
> density across the image, which seems to be what you noted.
> We applied a new type of edge detector, which we developed, described in
> enclosed paper, published in a conference in Adelaide in April this year.
> D
> Mital of Singapore (NUS?) was at the conference and should have a
> proceedings.
> This approach to edge detection reduces the image to an outline image,
> which
> overcomes at least many of the problems you encountered.
> Are you sure that you have scanned the image with an appropriate
> intensity,
> and with a medical image quality scanner? I am now using some other
> images
> scanned more appropriately and these look easier to use, but still have
> the
> visual problem of display on the screen. Another trick I have found is
> that
> by altering the screen brightness control I can change the visibility of
> contrasting section in both the bright and dark regions of the image.
> Tim Ferris
> ++++++++++++++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++
> Regards
> Ang Lian Ann
> Nanyang Technological University (Singapore)
> School of Mechanical and Production Engineering
> Nanyang Avenue
> Singapore 639798
> Office : N2-1c-117
> Tel (O) : +65 7996398
> E-mail p7222295d@ntu.edu.sg

The Asterisk Operator - An Edge Detection Operator Addressing the Problem of Clean Edges in Bone X-ray Images
John C. Stewien, Timothy L.J. Ferris
School of Physics and Electronic Systems Engineering
University of South Australia
Warrendi Road, The Levels, 5095, Australia, t.ferris@unisa.edu.au

Keywords: Edge detection, X-ray images
Abstract - Edge detection is a fundamental step in many machine vision or image processing applications and systems. The importance of edge detection increases as one seeks to increase the level of automation in the image processing system. This paper reports a novel edge detection operator which is fast in application to grey scale images and is sensitive to 'difficult' edges but not sensitive to artifacts such as scanner interlacing artifacts. The edge detection operator presented is simple but provides good detection of edges and clean and continuous edges across most of the image.
Edge detection is a fundamental initial step in our knee joint X-ray analysis project, because we have developed algorithms to determine dimensional measures of certain aspects of the knee joint from outline images. These algorithms assume outline images with single-pixel-wide, continuous edges, so this becomes the specification required of the bone edge detection system we use. The present work was undertaken because earlier experimental work using traditional edge detection operators failed to produce a satisfactory result. The traditional methods investigated included various combinations of linear and statistical filters, histogram equalisation and 2×2 and 3×3 edge detection operators, followed by threshold tests. These methods are presented by Sid-Ahmed [1].
These methods produce outlines containing most image edge or edge-like points. The outlines are visually pleasing, but the edges are frequently defective: wider than a single pixel, broken, or both, which makes automated use difficult. Further processing is required to fill gaps in the edges and to thin them. The traditional methods also suffered serious defects from false edge detections caused by image artefacts introduced by both film processing and scanning.
The images used in our project, figures 1 and 2, are scanned standard-film X-ray images, both of a 6-year-old boy. The use of child images introduces further problems for edge detection, because child bones are not fully developed and so are not as dense and clearly delimited as adult bones. Child bones therefore have less distinct edges, and it is consequently more difficult to find edges in images of a 6-year-old boy than in images of an adult. The film images we used included some watermark stains resulting from the film processing; this type of artefact frequently occurs because of the high-speed processing demanded for X-ray films. Our images were scanned by an office-quality scanner using an interlaced scanning process, which resulted in alternating lines of high and low intensity. Both artefacts produce false edge detections, the more serious being the scanning artefact.
The benefit of the present research is that we have developed a means for edge determination, which is robust in the false edge artefact environment. This will provide the possibility of routine application of our image processing techniques in clinical settings with low cost demands for scanning equipment and no modification of existing film processing techniques.
Figures 1 and 2. The original X-ray images, showing variable exposure, contrast, and some of the scanning artifacts.
Edge Operator
Standard edge detection operators, as described in Sid-Ahmed [1], were passed over the images of figures 1 and 2, and the results were found unsatisfactory for two reasons. First, the parallel-line scanning artifact produced by the low-cost scanner's interlaced scanning process caused excessive edge detections along the scanning lines, yielding an outline image cluttered with spurious edges, most noticeably in the 'background' regions. Second, figures 1 and 2 show that the bone edges have different levels of contrast with the background, which makes it difficult to use threshold values to determine the 'true' edge pixels after passing an edge detection operator. These problems remained evident even when the images were pre-processed using various filters.
The problems described above result from the structure of standard edge detection operators. The Roberts operator passes a 2×2 matrix across the image, and the Sobel, Sobel compass, Kirsch and Prewitt operators all pass a 3×3 matrix. Because the matrix exercised by the operator is small, the output is highly sensitive to local effects, and so may produce an exaggerated number of edge detections where artifacts of the scanning-line type are present.
The scanning line artifact effect may be removed by developing an operator which depends on a larger matrix, and so determines a change in average image grey scale level, rather than having a high sensitivity to local effects. This is the foundation concept of the present work.
The difficulty with using a large matrix for edge detection is the computation time required, involving many arithmetic operations, usually multiplications, plus the overhead of addressing pixels stored in RAM. One aim of our work was to find a method whose execution time was tolerable for a medical practitioner to wait through while processing was performed. This time test is important as a measure of the perceived usefulness of the software developed.
The software produced to test the edge detection operator was written in C++ for execution speed, independence from proprietary software, and the advantages of object-oriented reuse in later work. The edge detection system was incorporated as a class with two public functions, 'Region' and 'RegionThin'. Both functions use the same edge detection analysis method, but RegionThin yields a thinned version of Region's output by an elimination process. The analysis method is as follows.
The function passes over the image pixel by pixel in four different directions. At each pixel, the function checks the pixels on either side along the path being checked, and marks the pixel with a value indicating the minimum difference of the adjacent pixel pairs, up to a user-specified distance. Consider figure 3, where the pixel being checked is marked with an X. Suppose the user has specified the check distance as five pixels on either side of X. The function finds the difference between the pixels marked '1', then those marked '2', and so on. Pixel X is then marked with the "edge strength value", which is the minimum of all these differences. Note that the edge detected is perpendicular to the orientation of the checking line; this is important for the explanation of how the RegionThin function works, provided later.
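The paper includes no source code, so the check just described might look roughly like this for a single (horizontal) direction; the names are ours, and the zero-on-reversal rule is the one the paper attributes to CheckLineMin later:

```python
def edge_strength_1d(row, x, dist):
    """Edge strength at row[x] for one check direction: the minimum
    absolute difference between pixel pairs equidistant on either side
    of x, out to `dist` pixels."""
    if x - dist < 0 or x + dist >= len(row):
        return 0  # too close to the border to form every pair
    signed = [row[x + d] - row[x - d] for d in range(1, dist + 1)]
    # return zero if the higher side reverses within the check distance
    if any(s > 0 for s in signed) and any(s < 0 for s in signed):
        return 0
    return min(abs(s) for s in signed)

# A clean step edge between levels 10 and 90 gives a strong minimum
# difference; a local bump whose sign reverses gives zero.
step = edge_strength_1d([10, 10, 10, 10, 10, 10, 90, 90, 90, 90, 90], 5, 3)
```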
The structure of figure 3 is passed across the image in four orientations - horizontal, vertical, top-left to bottom-right diagonal, and top-right to bottom-left diagonal. Thus, rather than testing a block, as is required by traditional matrix-based compass operators, the relatively sparse asterisk is passed across the image, saving both computation and overhead contributions to the execution time.

Figure 3. Illustration of the analysis structure for edge detection.
The RegionThin function does additional processing. Consider a pass of the analysis structure in one of the four orientations above. If a run of consecutive pixels are all determined to be edge pixels, their edge strength values are compared, and all but the pixel with the highest edge strength value have their edge-pixel status removed. For example, in figure 4, the RegionThin function proceeds from left to right and finds a group of five edge pixels, defined by their edge strength values being greater than 0. The maximum edge strength value in the group is eight, so all edge pixels in the group with edge strength values less than eight have their edge-pixel status removed.
Figure 4. Illustration of a possible set of adjacent edge pixels, which must be thinned to one point by the function RegionThin.
Region and RegionThin also take a threshold parameter specifying the minimum edge strength value for a pixel to be classed as an edge pixel, which enables the removal of edge pixels with low edge strength values. Figure 5 shows a modified output of the Region function. The output has been modified for viewing by negating the palette, so that value 0 appears as white, and by expanding the palette so that the intensities cover the full range from black to white. The input parameters for Region used to generate this image set the distance to check on either side of a pixel (figure 3) to 8 pixels, and specified that only pixels with an edge strength value greater than 3 would be considered edge pixels.
Figure 5. Modified output of function Region.
Figure 6. Output of function RegionThin using the same input parameters as were used in function Region in figure 5.
Figure 6 shows the output from RegionThin with the same input parameters.
For space efficiency and code elegance, the Edge class contains two private member functions that are repeatedly called by Region and RegionThin. The best way to explain these functions is to describe the procedure they perform when processing an image.
The Region function processes the input image in sequential byte order. At each pixel, the four directions, numbered 0 to 3, are sequentially passed to a function, CheckLineMin, which also takes the pixel location. Using class member variables for the input image and the check distance, CheckLineMin returns the minimum difference between the pixel pairs on either side of the pixel being checked. The Region function then assigns to the pixel the maximum value obtained from CheckLineMin over the four checking directions. CheckLineMin initially checks either side of the pixel to find which side is higher in value. Should pixels further away, but within the specified distance, reverse which side is higher, CheckLineMin returns zero. This has proved effective in stopping blank areas containing considerable noise from having edges detected within them.
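Putting the pieces together, the Region pass might be reconstructed as below, in Python rather than the authors' C++; every name and detail is our reading of the description, not the original code:

```python
def region(img, dist, threshold):
    """Asterisk-operator sketch: at each pixel, run the pairwise check
    along four directions and keep the maximum directional strength,
    zeroing values at or below the threshold."""
    h, w = len(img), len(img[0])
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]  # the four check lines
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = 0
            for dy, dx in dirs:
                diffs = []
                for d in range(1, dist + 1):
                    ya, xa = y + d * dy, x + d * dx
                    yb, xb = y - d * dy, x - d * dx
                    if not (0 <= ya < h and 0 <= xa < w
                            and 0 <= yb < h and 0 <= xb < w):
                        diffs = []  # arm leaves the image: skip direction
                        break
                    diffs.append(img[ya][xa] - img[yb][xb])
                if not diffs:
                    continue
                if any(s > 0 for s in diffs) and any(s < 0 for s in diffs):
                    continue  # higher side reverses: no edge here
                best = max(best, min(abs(s) for s in diffs))
            out[y][x] = best if best > threshold else 0
    return out

# A vertical step edge produces a column of strong responses.
strengths = region([[10, 10, 90, 90] for _ in range(5)], dist=1, threshold=0)
```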
The RegionThin function processes the input image differently from the Region function. Region processes the whole image in one pass, checking all four directions at each pixel, whereas RegionThin passes over the whole image four times, once for each direction of checking with CheckLineMin. Additionally, RegionThin does not necessarily process the pixels in byte order, but in the order they appear when scanning the image along the direction being checked. This allows groups to be formed by setting a variable flagging one of four states: not inside a group, at the beginning of a group, inside a group, or at the end of a group. The state is determined by the following conditions:
1. If the pixel under test is an edge and the previous pixel was not, then we are at the start of a group, and this position is noted for when the end of the group is found. The pixel's edge strength value is taken as the current maximum for the group.
2. If the pixel under test is an edge and the previous pixel was also an edge, check whether its edge strength value exceeds the current maximum for the group; if so, update the maximum accordingly.
3. If the pixel under test is not an edge and the previous pixel was, then the end of a group has been found. The function then goes back to the start of the group and sets to 0 the edge strength value of every pixel whose value is less than the group maximum.
The direction of the check line is perpendicular to the edge it detects. Typically, a group will have low edge strength values at its outer pixels and high values at its inner pixels; the thinning process eliminates the outer pixels. RegionThin calls a private member function, PassPixel, to do the thinning. PassPixel tests for each of the possible conditions - outside a group, at the start of a group, inside a group, and at the end of a group - and acts accordingly.
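The group-thinning behaviour of RegionThin along one scan line might be sketched as follows (a reconstruction from the description; pixels tied at the group maximum all survive, since only values less than the group maximum are removed):

```python
def thin_line(strengths):
    """Thin each run of consecutive edge pixels (strength > 0) down to
    the pixel(s) holding the run's maximum edge strength."""
    out = list(strengths)
    i, n = 0, len(out)
    while i < n:
        if out[i] > 0:
            j = i
            while j < n and out[j] > 0:
                j += 1            # j is one past the end of the group
            peak = max(out[i:j])
            for k in range(i, j):
                if out[k] < peak:
                    out[k] = 0    # remove edge status below the maximum
            i = j
        else:
            i += 1
    return out

# The five-pixel group of figure 4, with maximum edge strength 8,
# collapses to its single strongest pixel.
thinned = thin_line([0, 2, 5, 8, 5, 2, 0])
```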
Execution Times
It has been stated above that execution time is important in our application. The execution times of table 1 are sample times for an image of dimensions 675x849 pixels. Executables were compiled with Microsoft Visual C++ v4.0 using default debug options. Executables were run on a Pentium Classic 166MHz machine, with 32 Mbytes of EDORAM running Microsoft Windows NT version 4.0.
Table 1. Execution times required for the edge detection functions; the input parameters were the image name, image width, image height, length of the scan arm on each side of the pixel of interest, and the edge detection threshold.
Run 1: 5 seconds
Run 2: 11 seconds
Run 3: 23 seconds
Run 4: 12 seconds
The execution times presented in table 1 show that execution time is constrained to something reasonable for a user to wait through. These times compare well with those achieved using 3×3 matrix compass-type edge detection operators, although direct comparison is not possible because a different memory allocation structure was used for image representation.
The particular advantage of the asterisk edge detection operator is that edge detection is based on a wide span of the image rather than on the eight neighboring pixels of the pixel of interest. This overcomes the problems introduced by scanner interlacing artifacts and eliminates the effect of considerable noise in the image. The interlacing artifact is removed because pixels the same distance on either side of the pixel of interest are matched, and each such pair appears on the same set of scanning lines. Scanning lines of the same pass have the same illumination intensity, and thus a direct relation to each other, but scanning lines of separate passes may have different intensities and thus produce false edges.
The asterisk operator is thus recognized as performing an implied filtering operation of a statistical smoothing type without the computation cost of separate filtering and edge detection functions.
The asterisk operator, combined with the line-thinning procedure described above, produces output which closely approximates the single-pixel-wide continuous edges required by our objective. Gaps in the edges are smaller than those produced by other edge operators, and appear in particularly difficult, low-contrast regions of the image. These gaps can only be effectively addressed by an adaptive process based on local image characteristics and/or edge prediction techniques.
The asterisk operator is a powerful tool for edge detection in images whose noise or artefacts cause false edge detections. It is also effective in images of uneven contrast or exposure, since its self-checking of errors arising from such problems allows it to be used with very low pixel-difference thresholds. The asterisk operator allows fast edge detection while retaining the smoothing effect of basing detection on a large span around each pixel of interest.
[1] Sid-Ahmed, M.A., 1994, Image Processing Theory, Algorithms, and Architectures, McGraw-Hill, New York.